Should a Driverless Car Decide Who Lives or Dies?

The industry is promising a glittering future of autonomous vehicles moving in harmony like schools of fish. That can’t happen, however, until carmakers answer the kinds of thorny philosophical questions explored in science fiction since Isaac Asimov wrote his robot series last century. For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?

Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of gray. Ford, General Motors, Audi, Renault and Toyota are all beating a path to Stanford University’s Center for Automotive Research, which is programming cars to make ethical decisions and observing what happens.

“This issue is definitely in the crosshairs,” says Chris Gerdes, who runs the lab and recently met with the chief executives of Ford and GM to discuss the topic. “They’re very aware of the issues and the challenges because their programmers are actively trying to make these decisions today.”

[…]

That’s why we shouldn’t leave those decisions up to robots, says Wendell Wallach, author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.”

“The way forward is to create an absolute principle that machines do not make life and death decisions,” says Wallach, a scholar at the Interdisciplinary Center for Bioethics at Yale University. “There has to be a human in the loop. You end up with a pretty lawless society if people think they won’t be held responsible for the actions they take.”

Ref: Should a Driverless Car Decide Who Lives or Dies? – Bloomberg