Robot Fears: Hawking, Gates, Musk Say Bots Will Turn on Man
Bridgehampton School, which has an enrollment of 166, won a third-place trophy at the Robotics Competition Championship for high schools around the country in April. The robots were required to pick up objects and put them in bins in the shortest amount of time.
* * *
MIT has built a robotic cheetah that can jump obstacles. And it can do it on its own, without anyone prompting it. It was demonstrated at the Robotics Challenge Convention in Pomona, California, last month.
The cheetah first appeared on the scene at the MIT campus in Cambridge last year, showing off its ability to scamper around joyfully on the front lawn of one of the science buildings there. No wires were attached to it. It went directly from standing still to full gallop, skipping trot and canter entirely. That is what cheetahs do. The robot was not as fast as an actual cheetah, which can get up to about 75 miles an hour. This one got up to about 15 miles an hour.
In that original tryout last year, which is watchable on YouTube (see below), the cheetah starts running on command and keeps running until told to stop. It's made of shiny metal and wires and batteries but otherwise looks like a cheetah, though it has no eyes or even a head; the front end simply sticks out a little more than the rear.
Over the last semester, the MIT crew worked further on the cheetah, got it to go as fast as 35 miles an hour, and gave it the equivalent of eyes. It sends out radio waves, which bounce back from anything in its path. The returning signals tell it the height and width of an obstacle, how far ahead it is and how long it will take to get there. It then does quick mathematical calculations: how many feet before the obstacle it should start its jump, how high it should soar (it can jump a foot and a half) and how it should land. It transmits this information to the different parts of its body, preparing itself to spring at just the right moment. Then it does it. It looks exactly as a cheetah would look bounding across the Serengeti. And it lands handsomely and successfully on the other side. And there are no strings attached.
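The arithmetic described above can be sketched as a back-of-the-envelope calculation. Here is a minimal illustration in Python, assuming a simple ballistic model in which the robot should be at the top of its arc as it crosses the obstacle; the function name, the clearance margin and the numbers are illustrative, not MIT's actual controller:

```python
import math

G = 9.81  # gravity, m/s^2

def plan_jump(speed, dist_to_obstacle, obstacle_height, clearance=0.1):
    """Given running speed (m/s), distance to the obstacle (m), and its
    height (m), decide when and how hard to jump.

    Returns (takeoff_distance, vertical_velocity, seconds_to_spare):
    begin the jump when the obstacle is takeoff_distance meters away,
    pushing off upward at vertical_velocity m/s; seconds_to_spare is
    how long the robot has before that moment arrives.
    """
    apex = obstacle_height + clearance      # peak height needed to clear
    v_up = math.sqrt(2 * G * apex)          # from v^2 = 2 g h
    t_rise = v_up / G                       # time to reach the apex
    takeoff_distance = speed * t_rise       # ground covered on the way up
    seconds_to_spare = (dist_to_obstacle - takeoff_distance) / speed
    return takeoff_distance, v_up, seconds_to_spare

# A foot-and-a-half (0.46 m) cushion approached at 15 mph (~6.7 m/s),
# first spotted 10 m away:
d, v, t = plan_jump(speed=6.7, dist_to_obstacle=10.0, obstacle_height=0.46)
```

For those numbers the model says the robot should leave the ground a bit over two meters before the cushion, with roughly a second of running time in which to prepare; the real machine solves a far richer version of this, accounting for leg forces, gait phase and landing, but the flavor of the computation is the same.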
It’s the same sort of mathematics used in the new cars that park themselves in a parking space. Who needs humans? Someday soon, all cars will have this technology and there will be no accidents.
Watch the cheetah on YouTube as they carry it to a large treadmill, turn it on and get it up to a full run. The cheetah can see as far as 10 yards away. So when a student appears from behind a barrier and places a foam couch cushion on the end of the treadmill, the robot watches the cushion move quickly toward it and then, as it arrives, jumps over it easily and correctly, continuing its run farther down the treadmill.
I’ve also seen a video where the cushion is an obstacle in the way of a cheetah robot running through the grass, and it does the same thing.
The big thing, the scary thing, is that a real cheetah or a human can jump an obstacle like that and keep going, but ultimately needs to slow down to a trot or a walk. The robot cheetah goes on at full run like this indefinitely. And that raises all sorts of other possibilities. Chasing robbers, for instance, down back alleys and over fences.
Three of the smartest people of the 21st century so far are Stephen Hawking, Elon Musk and Bill Gates. All of them are fearful of the outcome of building smart robots.
Elon Musk, interviewed at MIT, said “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. [By building smart robots, we are] summoning the demon.”
Bill Gates, interviewed on Reddit, said, “I don’t understand why some people are not concerned [about what robots can do].”
Stephen Hawking told the BBC that “The development of full artificial intelligence could spell the end of the human race.”
Regarding the ability of a robot to perfectly park your car, consider where this leads. It leads to a day when you will be arrested if you, a HUMAN, try to drive a car, because humans cause accidents: guilty of 30,000 deaths a year in the old days. And that is where it will start. The robots will take over. And they will do away with humans as incompetent, self-centered pests who care only about themselves.
One of the great writers of science fiction was Isaac Asimov, a New Yorker who was at the height of his powers during the 1950s. He wrote hundreds of books, but probably the two best are I, Robot and The Caves of Steel. In both books, big corporations have built fully developed robots, which they sell to private citizens or to governments to enforce the laws and do things that humans cannot—at the command of the humans. People name their robots but act dismissively toward them.
“James, what are you doing here?”
“Mr. Hoskins would like to see you in his office.”
“Okay. Now scat.”
Asimov, having thought long and hard about the relationship between humans and robots, came up with three laws that every company building robots must install in its robots’ brains before they can be let loose in the world. He wrote this in 1950; the story takes place in 2035. The laws are: 1. A robot may never harm a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given to it by human beings except where such orders would conflict with the first law. And 3. A robot must protect its own existence, as long as such protection does not conflict with the first two laws.
Both books involve robots that humans believe have gotten out of hand because there is something in their wiring that is allowing them to ignore one of the three rules. In every case, it turns out, there is a human behind the problem. The robots are still doing what they are told.
So the worrisome thing, as perceived by Asimov and by our three wise men today, is that humans are selfish, short-sighted and smart, and, if not stopped, will succeed in leading ourselves, the other creatures on the planet and our offspring down the road to ruin.
I explained it to a friend this way: Humans, I said, given the choice between making a long-term effort not to leave an environmentally disastrous world for our grandchildren and eating a Twinkie before bed tonight, will choose the Twinkie.
There may come a time soon, perhaps by 2035, when a robot will be smart enough to not accept that, and, as a result, will turn the tables and take over. Everything will then be right.
My guess is it will be built by a human who knows, just that once, exactly what he is doing.