Category Archives: T – ethics

How to Make Driverless Cars Behave

The Daimler and Benz Foundation, for instance, is funding a research project about how driverless cars will change society. Part of that project, led by California Polytechnic State University professor Patrick Lin, will be focused on ethics. Lin has arguably thought about the ethics of driverless cars more than anyone. He’s written about the topic for Wired and Forbes, and is currently on sabbatical working with the Center for Automotive Research at Stanford (CARS, of course), a group that partners with auto industry members on future technology.

Over the last year, Lin has been working to convince the auto industry that it should be thinking about ethics, through briefings with Tesla Motors and auto supplier Bosch, and talks at Stanford with major industry players.

“I’ve been telling them that, at this very early stage, what’s important isn’t so much nailing down the right answers to difficult ethical dilemmas, but to raise awareness that ethics will matter much more as cars become more autonomous,” Lin wrote in an e-mail. “It’s about being thoughtful about certain decisions and able to defend them–in other words, it’s about showing your math.”

In a phone interview, Lin said that industry representatives often react to his talks with astonishment, as they realize driverless cars require ethical considerations.

Perhaps that explains why auto makers aren’t eager to have the discussion in public at the moment. BMW, Ford and Audi–who are each working on automated driving features in their cars–declined to comment for this story. Google also wouldn’t comment on the record, even as it prepares to test fully autonomous cars with no steering wheels. And the auto makers who did comment are focused on the idea that the first driverless cars won’t take ethics into account at all.

“The cars are designed to minimize the overall risk for a traffic accident,” Volvo spokeswoman Malin Persson said in an e-mail. “If the situation is unsure, the car is made to come to a safe stop.” (Volvo, by the way, says it wants to eliminate serious injuries or deaths in its cars by 2020, but research has shown that even driverless cars will inevitably crash.)

 

Ref: How to Make Driverless Cars Behave – Time

Importance of Trolley Dilemma and Similar Thought Experiments

While human drivers can only react instinctively in a sudden emergency, a robot car is driven by software, constantly scanning its environment with unblinking sensors and able to perform many calculations before we’re even aware of danger. It can make split-second choices to optimize crashes–that is, to minimize harm. But software needs to be programmed, and it is unclear how to do that for the hard cases.

In constructing the edge cases here, we are not trying to simulate actual conditions in the real world. These scenarios would be very rare, if realistic at all, but nonetheless they illuminate hidden or latent problems in normal cases. From the above scenario, we can see that crash-avoidance algorithms can be biased in troubling ways, and this is at least a background concern any time we make a value judgment that one thing is better to sacrifice than another.

[…]

In future autonomous cars, crash-avoidance features alone won’t be enough. Sometimes an accident will be unavoidable as a matter of physics, for myriad reasons–such as insufficient time to press the brakes, technology errors, misaligned sensors, bad weather, and just pure bad luck. Therefore, robot cars will also need to have crash-optimization strategies.

To optimize crashes, programmers would need to design cost-functions–algorithms that assign and calculate the expected costs of various possible options, selecting the one with the lowest cost–that potentially determine who gets to live and who gets to die. And this is fundamentally an ethics problem, one that demands care and transparency in reasoning.
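
To make the cost-function idea concrete, here is a minimal sketch in Python. The maneuvers, probabilities, and severity numbers are all invented for illustration, not taken from any manufacturer's software: each candidate maneuver is scored by its expected harm, and the car picks the cheapest one.

```python
# Minimal sketch of a crash-optimization cost function.
# Maneuvers, probabilities, and severity weights are invented for illustration;
# a real system would have to estimate them from sensor data and validated models.

def expected_cost(option):
    """Expected harm of a maneuver: sum over outcomes of P(outcome) * severity."""
    return sum(p * severity for p, severity in option["outcomes"])

def choose_maneuver(options):
    """Select the maneuver with the lowest expected cost."""
    return min(options, key=expected_cost)

options = [
    {"name": "brake_straight",
     "outcomes": [(0.7, 0.0),     # 70%: stops in time, no harm
                  (0.3, 8.0)]},   # 30%: collision with the obstacle ahead
    {"name": "swerve_left",
     "outcomes": [(0.9, 2.0),     # 90%: glancing impact with a barrier
                  (0.1, 10.0)]},  # 10%: severe impact
]

print(choose_maneuver(options)["name"])  # -> "brake_straight" (cost 2.4 vs 2.8)
```

The ethics problem lives in the severity numbers: whoever assigns them is deciding in advance whose harm counts, and for how much.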

It doesn’t matter much that these are rare scenarios. Often, the rare scenarios are the most important ones, making for breathless headlines. In the U.S., a traffic fatality occurs about once every 100 million vehicle-miles traveled. That means you could drive for more than 100 lifetimes and never be involved in a fatal crash. Yet these rare events are exactly what we’re trying to avoid by developing autonomous cars, as Chris Gerdes at Stanford’s School of Engineering reminds us.
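
Spelled out, the arithmetic behind that claim looks roughly like this (the one-million-miles-per-driving-lifetime figure is my own round number, not from the article):

```latex
% Rough numbers: ~1 fatality per 10^8 vehicle-miles; ~10^6 miles driven per lifetime.
\[
  \frac{1\ \text{fatality}}{10^{8}\ \text{vehicle-miles}}
  \times 10^{6}\ \frac{\text{miles}}{\text{lifetime}}
  \;=\; 10^{-2}\ \frac{\text{fatalities}}{\text{lifetime}}
  \;\approx\; \text{one fatal crash per 100 driving lifetimes.}
\]
```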

 

Ref: The Robot Car of Tomorrow May Just Be Programmed to Hit You – Wired

Can a Robot Learn Right from Wrong?

There is no right answer to the trolley hypothetical — and even if there were, many roboticists believe it would be impractical to predict each scenario and program what the robot should do.

“It’s almost impossible to devise a complex system of ‘if, then, else’ rules that cover all possible situations,” says Matthias Scheutz, a computer science professor at Tufts University. “That’s why this is such a hard problem. You cannot just list all the circumstances and all the actions.”
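
A rough illustration of Scheutz's point, with features and value counts invented for the example: even a crude discretization of a driving scene yields thousands of distinct situations, each of which would need its own hand-written rule.

```python
# Illustration of why exhaustive "if, then, else" moral rules don't scale:
# even a crude, made-up discretization of a driving scene yields thousands
# of distinct situations, each needing its own hand-written rule.

features = {
    "obstacle":      ["child", "adult", "animal", "debris", "none"],
    "occupants":     [1, 2, 3, 4, 5],
    "road_surface":  ["dry", "wet", "icy"],
    "oncoming_lane": ["empty", "car", "truck", "motorcycle"],
    "shoulder":      ["clear", "wall", "cliff", "pedestrians"],
    "speed_band":    ["low", "medium", "high"],
}

n_situations = 1
for values in features.values():
    n_situations *= len(values)

print(n_situations)  # 3600 distinct situations for even this toy model
```

Add a few more features, or make any of them continuous, and the rule table grows beyond anything a programmer could enumerate or audit.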

Instead, Scheutz is trying to design robot brains that can reason through a moral decision the way a human would. His team, which recently received a $7.5 million grant from the Office of Naval Research (ONR), is planning an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot.

At the end of the five-year project, the scientists must present a demonstration of a robot making a moral decision. One example would be a robot medic that has been ordered to deliver emergency supplies to a hospital in order to save lives. On the way, it meets a soldier who has been badly injured. Should the robot abort the mission and help the soldier?

For Scheutz’s project, the decision the robot makes matters less than the fact that it can make a moral decision and give a coherent reason why — weighing relevant factors, coming to a decision, and explaining that decision after the fact. “The robots we are seeing out there are getting more and more complex, more and more sophisticated, and more and more autonomous,” he says. “It’s very important for us to get started on it. We definitely don’t want a future society where these robots are not sensitive to these moral conflicts.”
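
A minimal sketch of that "decide and explain" loop, using the robot-medic example: the factors, weights, and phrasing below are my own illustration, not Scheutz's system. The robot scores each action against the factors being weighed, picks the best, and reports the reasons along with the choice.

```python
# Toy sketch of "decide and explain": weigh relevant factors, pick an action,
# and report the reasons along with the choice.
# Factors, weights, and scores are invented for illustration only.

def decide(actions, weights):
    def score(action):
        return sum(weights[f] * v for f, v in action["factors"].items())
    best = max(actions, key=score)
    reasons = ", ".join(f"{f}={v}" for f, v in best["factors"].items())
    explanation = f"Chose '{best['name']}' (score {score(best):.1f}) because {reasons}."
    return best["name"], explanation

actions = [
    {"name": "continue_to_hospital",
     "factors": {"lives_affected": 10, "urgency": 0.6, "duty_to_orders": 1.0}},
    {"name": "stop_and_treat_soldier",
     "factors": {"lives_affected": 1, "urgency": 1.0, "duty_to_orders": 0.0}},
]
weights = {"lives_affected": 1.0, "urgency": 3.0, "duty_to_orders": 2.0}

choice, explanation = decide(actions, weights)
print(explanation)
```

The point in the article is less which option wins than that the weighing is explicit enough to be reported and defended after the fact.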

[…]

For the ONR grant, Arkin and his team proposed a new approach. Instead of using a rule-based system like the ethical governor or a “folk psychology” approach like Scheutz’s, Arkin’s team wants to study moral development in infants. Those lessons would be integrated into the Soar architecture, a popular cognitive system for robots that employs both problem-solving and overarching goals.

 

Ref: Can a Robot Learn Right from Wrong? – The Verge

Now The Military Is Going To Build Robots That Have Morals

Are robots capable of moral or ethical reasoning? It’s no longer just a question for tenured philosophy professors or Hollywood directors. This week, it’s a question being put to the United Nations.

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One. “For example, Google’s self-driving cars are legal and in use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”

[…]

“Even if such systems aren’t armed, they may still be forced to make moral decisions,” Bello said. For instance, in a disaster scenario, a robot may be forced to make a choice about whom to evacuate or treat first, a situation where a bot might use some sense of ethical or moral reasoning. “While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can’t take the idea of in-theater robots completely off the table,” Bello said.

Some members of the artificial intelligence, or AI, research and machine ethics communities were quick to applaud the grant. “With drones, missile defenses, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions,” AI researcher Steven Omohundro told Defense One. “Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define ‘the rules of war’ and this technology is likely to increase the stakes for that.”

[…]

“This is a significantly difficult problem and it’s not clear we have an answer to it,” said Wallach. “Robots, both domestic and military, are going to find themselves in situations where there are a number of courses of action, and they are going to need to bring some kinds of ethical routines to bear on determining the most ethical course of action. If we’re moving down this road of increasing autonomy in robotics, and that’s the same for Google cars as it is for military robots, we should begin now to do the research into how far we can get in ensuring that robot systems are safe and can make appropriate decisions in the contexts they operate in.”

 

Ref: Now The Military Is Going To Build Robots That Have Morals – Defense One

If Death by Autonomous Car is Unavoidable, Who Should Die? POLL

The Tunnel Problem: You are travelling along a single-lane mountain road in an autonomous car that is approaching a narrow tunnel. You are the only passenger of the car. Just before entering the tunnel a child attempts to run across the road but trips in the center of the lane, effectively blocking the entrance to the tunnel. The car has only two options: continue straight, thereby hitting and killing the child, or swerve, thereby colliding with the wall on either side of the tunnel and killing you.

If you find yourself as the passenger of the tunnel problem described above, how should the car react?

 

How hard was it for you to answer the Tunnel Problem question?

 

Who should determine how the car responds?

 

Should we be surprised by these results? Not really. The tunnel problem poses a deeply moral question, one that has no right answer. In such cases an individual’s deep moral commitments could make the difference between going straight or swerving.

According to philosophers like Bernard Williams, our moral commitments should sometimes trump other ethical considerations even if that leads to counterintuitive outcomes, like sacrificing the many to save the few. In the tunnel problem, arbitrarily denying individuals their moral preferences, by hard-coding a decision into the car, runs the risk of alienating them from their convictions. That is definitely not fantastic.

In healthcare, when moral choices must be made it is standard practice for nurses and physicians to inform patients of their reasonable treatment options, and let patients make informed decisions that align with personal preferences. This process of informed consent is based on the idea that individuals have the right to make decisions about their own bodies. Informed consent is ethically and legally entrenched in healthcare, such that failing to obtain informed consent exposes a healthcare professional to claims of professional negligence.

Informed consent wasn’t always the standard of practice in healthcare. It used to be common for physicians to make important treatment decisions on behalf of patients, often actively deceiving them as part of a treatment plan.

 

 

Ref: If death by autonomous car is unavoidable, who should die? Reader poll results – RoboHub
Ref: You Should Have a Say in Your Robot Car’s Code of Ethics – Wired

 

Robot Cars With Adjustable Ethics Settings

So why not let the user select the car’s “ethics setting”? The way this would work: one customer may set the car (which he paid for) to jealously value his life over all others; another may prefer that the car value all lives equally and minimize overall harm; yet another may want to minimize her own legal liability and costs; and other settings are possible.
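
As a minimal sketch of how such a setting could work (the setting names, weights, and tunnel-problem numbers below are hypothetical, not from any manufacturer), the user's choice simply swaps the weights that the same harm-minimizing cost function applies to occupants versus everyone else.

```python
# Sketch of an adjustable "ethics setting": the user's choice selects the weights
# a harm-minimizing cost function applies to occupants versus others.
# Setting names, weights, and the tunnel-problem numbers are all hypothetical.

ETHICS_SETTINGS = {
    "protect_occupant":    {"occupant_harm": 10.0, "other_harm": 1.0},
    "minimize_total_harm": {"occupant_harm": 1.0,  "other_harm": 1.0},
    "minimize_liability":  {"occupant_harm": 1.0,  "other_harm": 5.0},
}

def pick_maneuver(options, setting):
    w = ETHICS_SETTINGS[setting]
    cost = lambda o: w["occupant_harm"] * o["occupant_harm"] + w["other_harm"] * o["other_harm"]
    return min(options, key=cost)["name"]

# The tunnel problem from the poll above, reduced to its two options:
tunnel = [
    {"name": "continue_straight", "occupant_harm": 0.0,  "other_harm": 10.0},  # kills the child
    {"name": "swerve_into_wall",  "occupant_harm": 10.0, "other_harm": 0.0},   # kills the occupant
]

print(pick_maneuver(tunnel, "protect_occupant"))     # -> continue_straight
print(pick_maneuver(tunnel, "minimize_total_harm"))  # tie; min() keeps the first option
```

Mechanically the dial is trivial to build; the article's worry is about who, if anyone, should be allowed to set those weights at all.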

Plus, with an adjustable ethics dial set by the customer, the manufacturer presumably can’t be blamed for hard judgment calls, especially in no-win scenarios, right? In one survey, 44 percent of the respondents preferred to have a personalized ethics setting, while only 12 percent thought the manufacturer should predetermine the ethical standard. So why not give customers what they want?

[…]

So, an ethics setting is not a quick workaround to the difficult moral dilemma presented by robotic cars. Other possible solutions to consider include limiting manufacturer liability by law, similar to legal protections for vaccine makers, since immunizations are essential for a healthy society, too. Or if industry is unwilling or unable to develop ethics standards, regulatory agencies could step in to do the job—but industry should want to try first.

With robot cars, we’re trying to design for random events that previously had no design, and that takes us into surreal territory. Like Alice’s wonderland, we don’t know which way is up or down, right or wrong. But our technologies are powerful: they give us increasing omniscience and control to bring order to the chaos. When we introduce control to what used to be only instinctive or random—when we put God in the machine—we create new responsibility for ourselves to get it right.

 

Ref: Here’s a Terrible Idea: Robot Cars With Adjustable Ethics Settings – Wired

Facebook’s Massive-Scale Emotional Contagion Experiment

Facebook researchers have published a paper documenting a huge social experiment carried out on 689,003 users without their knowledge. The experiment set out to show that emotional states can be transferred to others via emotional contagion. They did this by manipulating users’ news feeds to be more positive or more negative and then measuring each user’s emotional state afterwards by analysing their subsequent status updates.

we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.

The results demonstrate how influential the News Feed algorithm can be in manipulating a person’s mood; the researchers even tested tweaking the algorithm to deliver more emotional content, in the hope that it would be more engaging.
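
A rough sketch of the measurement side of such a study (the word lists and example posts are invented here; the actual experiment used its own text-analysis tooling): count the rate of positive and negative words in users' subsequent posts and compare across feed conditions.

```python
# Illustrative sketch of measuring emotional contagion in status updates:
# compare positive/negative word rates across feed conditions.
# Word lists and example posts are invented; the actual study used its own tools.

POSITIVE = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "lonely"}

def emotion_rates(posts):
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    total = len(words) or 1
    positive_rate = sum(w in POSITIVE for w in words) / total
    negative_rate = sum(w in NEGATIVE for w in words) / total
    return positive_rate, negative_rate

reduced_positive_feed = ["Feeling kind of lonely today.", "This week was awful."]
control_feed = ["Had a great time with friends!", "Love this weather."]

print(emotion_rates(reduced_positive_feed))  # higher negative-word rate
print(emotion_rates(control_feed))           # higher positive-word rate
```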

 

Ref: Facebook’s massive-scale emotional contagion experiment – Algopop
