Last month, as one of Google’s self-driving cars approached a crosswalk, it did what it was supposed to do: it slowed to allow a pedestrian to cross, prompting its “safety driver” to apply the brakes. The pedestrian was fine. Not so Google’s car, which was hit from behind by a human-driven sedan.
Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.
It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book. “The real problem is that the car is too safe,” said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles.
“They have to learn to be aggressive in the right amount, and the right amount depends on the culture.”
Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, said that one thing he had learned from the project was that human drivers needed to be “less idiotic.”
In an apparent move to feed its smart-hardware ambitions, Google has bought an artificial intelligence startup, DeepMind, for somewhere in the ballpark of $500 million. Considering all of the data Google sifts through, and the fact that it might be getting into robotics, it’s not completely absurd that it would want some software to give a robotic helping hand. (Facebook apparently wanted the company, too, and it has already made moves to wrangle its own sprawling web of information.) But the other part of this story is a little stranger: the deal reportedly came under the condition that Google create an “ethics board” for the project.
Google has set up an ethics board to oversee its work in artificial intelligence. The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans. One of its founders has warned that artificial intelligence is the ‘number 1 risk for this century,’ and believes it could play a part in human extinction.
‘Google has agreed to establish an ethics board to ensure the artificial intelligence technology isn’t abused, according to two people familiar with the deal,’ reported The Information, the website that revealed the news. The DeepMind-Google ethics board is set to create a series of rules and restrictions over the use of the technology.