Google’s Driverless Cars Run Into Problem: Cars With Drivers

Last month, as one of Google’s self-driving cars approached a crosswalk, it did what it was supposed to do when it slowed to allow a pedestrian to cross, prompting its “safety driver” to apply the brakes. The pedestrian was fine, but not so much Google’s car, which was hit from behind by a human-driven sedan.

Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book. “The real problem is that the car is too safe,” said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles.

“They have to learn to be aggressive in the right amount, and the right amount depends on the culture.”

[…]

Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, said that one thing he had learned from the project was that human drivers needed to be “less idiotic.”

Ref: Google’s Driverless Cars Run Into Problem: Cars With Drivers – The New York Times