Why Self-Driving Cars Must Be Programmed to Kill

And that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? (See also “How to Help Self-Driving Cars Make Ethical Decisions.”)

The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?

So can science help? Today, we get an answer of sorts thanks to the work of Jean-Francois Bonnefon at the Toulouse School of Economics in France and a couple of pals. These guys say that even though there is no right or wrong answer to these questions, public opinion will play a strong role in how, or even whether, self-driving cars become widely accepted.

So they set out to discover the public’s opinion using the new science of experimental ethics. This involves posing ethical dilemmas to a large number of people to see how they respond. And the results make for interesting, if somewhat predictable, reading. “Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles,” they say.

Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
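Purely as a thought experiment, that "minimize the loss of life" rule can be written down as a trivial decision function. The sketch below is hypothetical Python, not anything a real vehicle runs; the maneuver names and casualty estimates are invented inputs for illustration.

```python
# Hypothetical sketch of a purely utilitarian crash-time decision rule.
# Each candidate maneuver carries an estimate of expected fatalities;
# the rule simply picks the maneuver with the lowest estimate.

def choose_maneuver(maneuvers):
    """Return the maneuver whose expected death toll is smallest.

    `maneuvers` maps a maneuver name to the expected number of
    fatalities if the car performs it.
    """
    return min(maneuvers, key=maneuvers.get)

# The dilemma from the article: plough into the crowd, or swerve into a wall.
options = {
    "continue_straight": 10,   # the 10 pedestrians in the road
    "swerve_into_wall": 1,     # the owner/occupant
}

print(choose_maneuver(options))  # -> "swerve_into_wall"
```

Written this way, the rule's bluntness is obvious: it has no notion of who the single casualty is, which is exactly the property the survey goes on to probe.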

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.
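The Catch-22 hinges on simple arithmetic: a technology that is safer per mile can still leave more people dead overall if fewer people adopt it. The numbers below are purely illustrative assumptions, not data from the paper or from crash statistics.

```python
# Back-of-envelope illustration of the adoption trade-off.
# All rates are invented for illustration: fatalities per billion miles.
HUMAN_RATE = 11.0   # assumed fatality rate for human-driven cars
AV_RATE = 2.0       # assumed fatality rate for self-driving cars

def total_fatalities(av_adoption, miles_billion=1000):
    """Expected fleet-wide fatalities for a given self-driving share of miles."""
    av_miles = miles_billion * av_adoption
    human_miles = miles_billion - av_miles
    return av_miles * AV_RATE + human_miles * HUMAN_RATE

# High adoption of "self-sacrificing" cars vs. low adoption because buyers
# reject them: the fleet mix can matter more than the crash-time rule itself.
print(total_fatalities(av_adoption=0.8))   # 3800.0
print(total_fatalities(av_adoption=0.2))   # 9200.0
```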

[…]

So these guys posed these kinds of ethical dilemmas to several hundred workers on Amazon’s Mechanical Turk to find out what they thought. The participants were given scenarios in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its occupant or a pedestrian.

At the same time, the researchers varied some of the details such as the actual number of pedestrians that could be saved, whether the driver or an on-board computer made the decision to swerve and whether the participants were asked to imagine themselves as the occupant or an anonymous person.
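That design is essentially a small factorial grid over those variables. As an illustration of the structure only (the specific levels below are guesses, not the paper's), it can be enumerated in a few lines:

```python
# Sketch of a factorial scenario grid like the one behind the Mechanical Turk survey.
# The specific levels are illustrative guesses, not taken from the paper.
from itertools import product

pedestrians_saved = [1, 5, 10, 20]   # how many lives the swerve would save
decision_maker = ["human driver", "on-board computer"]
perspective = ["you are the occupant", "an anonymous person is the occupant"]

scenarios = [
    {"saved": n, "decided_by": who, "framing": view}
    for n, who, view in product(pedestrians_saved, decision_maker, perspective)
]

print(len(scenarios))   # 4 * 2 * 2 = 16 scenario variants
print(scenarios[0])
```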

The results are interesting, if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.

This utilitarian approach is certainly laudable, but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.

 

Ref: Why Self-Driving Cars Must Be Programmed to Kill – MIT Technology Review

Obviously Drivers Are Already Abusing Tesla’s Autopilot

Arriving in New York in record time, without being arrested or killed, is a personal victory for the drivers. More than that, though, it highlights how quickly and enthusiastically autonomous technology is likely to be adopted, and how tricky it may be to keep in check once drivers get their first taste of freedom behind the wheel.

[…]

Autopilot caused a few scares, Roy says, largely because the car was moving so quickly. “There were probably three or four moments where we were on autonomous mode at 90 miles an hour, and hands off the wheel,” and the road curved, Roy says. Where a trained driver would aim for the apex—the geometric center of the turn—to maintain speed and control, the car follows the lane lines. “If I hadn’t had my hands there, ready to take over, the car would have gone off the road and killed us.” He’s not annoyed by this, though. “That’s my fault for setting a speed faster than the system’s capable of compensating.”
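Roy's point about setting a speed faster than the system can compensate for comes down to basic cornering physics: the lateral acceleration needed to hold a curve grows with the square of speed (a = v²/r). A quick calculation, using an arbitrary example curve radius rather than any figure from the article, makes the margin at 90 mph concrete.

```python
# Lateral acceleration needed to follow a curve: a = v^2 / r.
# The 90 mph figure is from the article; the 300 m curve radius is an arbitrary example.

MPH_TO_MS = 0.44704   # miles per hour to meters per second
G = 9.81              # standard gravity, m/s^2

def lateral_g(speed_mph, radius_m):
    """Lateral acceleration (in g) required to hold a curve of the given radius."""
    v = speed_mph * MPH_TO_MS
    return (v ** 2 / radius_m) / G

for speed in (55, 70, 90):
    print(f"{speed} mph on a 300 m curve: {lateral_g(speed, 300):.2f} g")
# 55 mph -> ~0.21 g, 70 mph -> ~0.33 g, 90 mph -> ~0.55 g
```

Under those assumptions, the same curve demands more than twice the lateral grip at 90 mph that it does at 55, which is one plausible reading of why a lane-following controller comfortable at ordinary speeds runs out of margin when the driver dials the speed up.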

If someone causes an accident by relying too heavily on Tesla’s system, Tesla may not get off the hook by saying, “Hey, we told ’em to be careful.”

 

Ref: Obviously Drivers Are Already Abusing Tesla’s Autopilot – Wired