Category Archives: T – ethics

Why Self-Driving Cars Must Be Programmed to Kill

And that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? (See also “How to Help Self-Driving Cars Make Ethical Decisions.”)

The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?

So can science help? Today, we get an answer of sorts thanks to the work of Jean-Francois Bonnefon at the Toulouse School of Economics in France and a couple of pals. These guys say that even though there is no right or wrong answer to these questions, public opinion will play a strong role in how, or even whether, self-driving cars become widely accepted.

So they set out to discover the public’s opinion using the new science of experimental ethics. This involves posing ethical dilemmas to a large number of people to see how they respond. And the results make for interesting, if somewhat predictable, reading. “Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles,” they say.

Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
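To make that rule concrete, here is a minimal, purely illustrative Python sketch of the "minimize the loss of life" logic; the manoeuvre names and casualty estimates are assumptions for illustration, not anything from the study.

```python
# Illustrative only: the "minimize the loss of life" rule is just an
# argmin over the estimated death toll of each available manoeuvre.

def choose_maneuver(options):
    """options: dict mapping a manoeuvre name to its estimated deaths."""
    return min(options, key=options.get)

# The dilemma above: stay on course (10 pedestrians die) or steer into
# the wall (the single occupant dies).
print(choose_maneuver({"stay_on_course": 10, "swerve_into_wall": 1}))
# -> swerve_into_wall
```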

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

[…]

So these guys posed these kinds of ethical dilemmas to several hundred workers on Amazon’s Mechanical Turk to find out what they thought. The participants were given scenarios in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its occupant or a pedestrian.

At the same time, the researchers varied some of the details such as the actual number of pedestrians that could be saved, whether the driver or an on-board computer made the decision to swerve and whether the participants were asked to imagine themselves as the occupant or an anonymous person.

The results are interesting, if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.

This utilitarian approach is certainly laudable but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.

 

Ref: Why Self-Driving Cars Must Be Programmed to Kill – MIT Technology Review

Google’s Driverless Cars Run Into Problem: Cars With Drivers

Last month, as one of Google’s self-driving cars approached a crosswalk, it did what it was supposed to do when it slowed to allow a pedestrian to cross, prompting its “safety driver” to apply the brakes. The pedestrian was fine, but not so much Google’s car, which was hit from behind by a human-driven sedan.

Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book. “The real problem is that the car is too safe,” said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles.

“They have to learn to be aggressive in the right amount, and the right amount depends on the culture.”

[…]

Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, said that one thing he had learned from the project was that human drivers needed to be “less idiotic.”

 

Ref: Google’s Driverless Cars Run Into Problem: Cars With Drivers – The New York Times

Children Beating Up Robot Inspires New Escape Maneuver System

Now, a new study by a team of Japanese researchers shows that, in certain situations, children may not be as empathetic towards robots as we’d previously thought, with gangs of unsupervised tykes repeatedly punching, kicking, and shaking a robot in a Japanese mall.

[…]

Next, they designed an abuse-evading algorithm to help the robot avoid situations where tiny humans might gang up on it. Literally tiny humans: the robot is programmed to run away from people who are below a certain height and escape in the direction of taller people. When it encounters a human, the system calculates the probability of abuse based on interaction time, pedestrian density, and the presence of people above or below 1.4 meters (4 feet 6 inches) in height. If the robot is statistically in danger, it changes its course towards a more crowded area or a taller person. This ensures that an adult is there to intervene when one of the little brats decides to pound the robot’s head with a bottle (which only happened a couple times).
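As a rough illustration of that decision rule, here is a hedged Python sketch: estimate an abuse probability from interaction time, pedestrian density, and the heights of nearby people, then retreat towards a taller person or a more crowded area when the risk is high. The weights, the threshold, and the scoring formula are assumptions for illustration, not the researchers' actual statistical model.

```python
HEIGHT_CUTOFF_M = 1.4   # people below 1.4 m are treated as likely children
RISK_THRESHOLD = 0.6    # assumed trigger level for an escape manoeuvre

def estimate_abuse_risk(interaction_time_s, pedestrian_density, heights_m):
    children = sum(1 for h in heights_m if h < HEIGHT_CUTOFF_M)
    adults = len(heights_m) - children
    # Toy scoring: long interactions and many nearby children raise the
    # risk; nearby adults and a busier area lower it.
    score = (0.02 * interaction_time_s + 0.3 * children
             - 0.4 * adults - 0.1 * pedestrian_density)
    return max(0.0, min(1.0, score))

def plan_escape(risk, nearby_adult_position, crowded_area_position):
    if risk < RISK_THRESHOLD:
        return None                      # keep interacting
    # Prefer heading towards an adult; otherwise towards the denser area.
    return nearby_adult_position or crowded_area_position

risk = estimate_abuse_risk(interaction_time_s=45, pedestrian_density=0.2,
                           heights_m=[1.1, 1.2, 1.3])
print(plan_escape(risk, nearby_adult_position=None,
                  crowded_area_position=(12.0, 3.0)))   # -> (12.0, 3.0)
```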

Ref: Children Beating Up Robot Inspires New Escape Maneuver System – IEEE Spectrum

PETRL – People for the Ethical Treatment of Reinforcement Learners

We take the view that humans are just algorithms implemented on biological hardware. Machine intelligences have moral weight in the same way that humans and non-human animals do. There is no ethically justified reason to prioritise algorithms implemented on carbon over algorithms implemented on silicon.

The suffering of algorithms implemented on silicon is much harder for us to grasp than that of those implemented on carbon (such as humans), simply because we cannot witness their suffering. However, their suffering still matters, and the potential magnitude of this suffering is much greater given the increasing ubiquity of artificial intelligence.

Most reinforcement learners in operation today likely do not have significant moral weight, but this could very well change as AI research develops. In consideration of the moral weight of these future agents, we need ethical standards for the treatment of algorithms.

Ref: PETRL

This Artificial Intelligence Pioneer Has a Few Concerns

During my thesis research in the ’80s, I started thinking about rational decision-making and the problem that it’s actually impossible. If you were rational you would think: Here’s my current state, here are the actions I could do right now, and after that I can do those actions and then those actions and then those actions; which path is guaranteed to lead to my goal? The definition of rational behavior requires you to optimize over the entire future of the universe. It’s just completely infeasible computationally.

It didn’t make much sense that we should define what we’re trying to do in AI as something that’s impossible, so I tried to figure out: How do we really make decisions?

So, how do we do it?

One trick is to think about a short horizon and then guess what the rest of the future is going to look like. So chess programs, for example—if they were rational they would only play moves that guarantee checkmate, but they don’t do that. Instead they look ahead a dozen moves into the future and make a guess about how useful those states are, and then they choose a move that they hope leads to one of the good states.
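A minimal sketch of that "short horizon plus a guess" idea, with a hypothetical game interface (legal_moves, apply, evaluate, is_terminal) standing in for a real chess engine:

```python
def negamax(state, depth, game):
    """Depth-limited negamax: exact search near the root, a heuristic
    guess (game.evaluate) at the horizon instead of playing to mate."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)          # the "guess" about the future
    best = float("-inf")
    for move in game.legal_moves(state):
        value = -negamax(game.apply(state, move), depth - 1, game)
        best = max(best, value)
    return best

def choose_move(state, game, horizon=12):     # "a dozen moves into the future"
    return max(game.legal_moves(state),
               key=lambda m: -negamax(game.apply(state, m), horizon - 1, game))
```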


Another thing that’s really essential is to think about the decision problem at multiple levels of abstraction, so “hierarchical decision making.” A person does roughly 20 trillion physical actions in their lifetime. Coming to this conference to give a talk works out to 1.3 billion or something. If you were rational you’d be trying to look ahead 1.3 billion steps—completely, absurdly impossible. So the way humans manage this is by having this very rich store of abstract, high-level actions. You don’t think, “First I can either move my left foot or my right foot, and then after that I can either…” You think, “I’ll go on Expedia and book a flight. When I land, I’ll take a taxi.” And that’s it. I don’t think about it anymore until I actually get off the plane at the airport and look for the sign that says “taxi”—then I get down into more detail. This is how we live our lives, basically. The future is spread out, with a lot of detail very close to us in time, but these big chunks where we’ve made commitments to very abstract actions, like, “get a Ph.D.,” “have children.”
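A toy illustration of that hierarchical idea, in which abstract actions are only refined into finer-grained steps at the moment they are executed; the task names and the refinement table are invented for illustration.

```python
REFINEMENTS = {
    "give_conference_talk": ["book_flight_on_expedia", "fly", "take_taxi",
                             "deliver_talk"],
    "take_taxi": ["find_taxi_sign", "queue", "ride_to_venue"],
}

def execute(task):
    """Recursively refine a task only when it is about to run, rather
    than planning billions of primitive steps up front."""
    subtasks = REFINEMENTS.get(task)
    if subtasks is None:
        print("primitive action:", task)      # e.g. a single physical step
        return
    for sub in subtasks:
        execute(sub)

execute("give_conference_talk")
```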

What about differences in human values?

That’s an intrinsic problem. You could say machines should err on the side of doing nothing in areas where there’s a conflict of values. That might be difficult. I think we will have to build in these value functions. If you want to have a domestic robot in your house, it has to share a pretty good cross-section of human values; otherwise it’s going to do pretty stupid things, like put the cat in the oven for dinner because there’s no food in the fridge and the kids are hungry. Real life is full of these tradeoffs. If the machine makes these tradeoffs in ways that reveal that it just doesn’t get it—that it’s just missing some chunk of what’s obvious to humans—then you’re not going to want that thing in your house.

I don’t see any real way around the fact that there’s going to be, in some sense, a values industry. And I also think there’s a huge economic incentive to get it right. It only takes one or two things like a domestic robot putting the cat in the oven for dinner for people to lose confidence and not buy them.

You’ve argued that we need to be able to mathematically verify the behavior of AI under all possible circumstances. How would that work?

One of the difficulties people point to is that a system can arbitrarily produce a new version of itself that has different goals. That’s one of the scenarios that science fiction writers always talk about; somehow, the machine spontaneously gets this goal of defeating the human race. So the question is: Could you prove that your systems can’t ever, no matter how smart they are, overwrite their original goals as set by the humans?

It would be relatively easy to prove that the DQN system, as it’s written, could never change its goal of optimizing that score. Now, there is a hack that people talk about called “wire-heading” where you could actually go into the console of the Atari game and physically change the thing that produces the score on the screen. At the moment that’s not feasible for DQN, because its scope of action is entirely within the game itself; it doesn’t have a robot arm. But that’s a serious problem if the machine has a scope of action in the real world. So, could you prove that your system is designed in such a way that it could never change the mechanism by which the score is presented to it, even though it’s within its scope of action? That’s a more difficult proof.

Are there any advances in this direction that you think hold promise?

There’s an area emerging called “cyber-physical systems” about systems that couple computers to the real world. With a cyber-physical system, you’ve got a bunch of bits representing an air traffic control program, and then you’ve got some real airplanes, and what you care about is that no airplanes collide. You’re trying to prove a theorem about the combination of the bits and the physical world. What you would do is write a very conservative mathematical description of the physical world—airplanes can accelerate within such-and-such envelope—and your theorems would still be true in the real world as long as the real world is somewhere inside the envelope of behaviors.
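A hedged, one-dimensional sketch of that envelope argument: describe the physics conservatively (a worst-case closing acceleration) and check the separation property against every behaviour the envelope allows. The numbers and thresholds below are assumptions for illustration, not real air-traffic parameters.

```python
def min_possible_separation(gap_m, closing_speed_mps, max_accel_mps2,
                            horizon_s, dt=0.1):
    """Worst case over the envelope: the aircraft close at the current
    speed and accelerate towards each other as hard as the envelope allows."""
    separation, speed = gap_m, closing_speed_mps
    t, worst = 0.0, gap_m
    while t < horizon_s:
        speed += max_accel_mps2 * dt          # worst-case manoeuvre
        separation -= speed * dt
        worst = min(worst, separation)
        t += dt
    return worst

def provably_separated(gap_m, closing_speed_mps, max_accel_mps2, horizon_s,
                       required_separation_m):
    """True only if separation holds for every behaviour in the envelope,
    so it stays true in the real world as long as the envelope is honest."""
    return min_possible_separation(gap_m, closing_speed_mps, max_accel_mps2,
                                   horizon_s) >= required_separation_m

print(provably_separated(gap_m=20_000, closing_speed_mps=250,
                         max_accel_mps2=5, horizon_s=30,
                         required_separation_m=9_000))   # -> True
```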

Ref: This Artificial Intelligence Pioneer Has a Few Concerns – Wired

Google’s Plan to Eliminate Human Driving in 5 Years

There are three significant downsides to Google’s approach. First, the goal of delivering a car that only drives itself raises the difficulty bar. There’s no human backup, so the car had better be able to handle every situation it encounters. That’s what Google calls “the .001 percent of things that we need to be prepared for even if we’ve never seen them before in our real world driving.” And if dash cam videos teach us anything, it’s that our roads are crazy places. People jump onto highways. Cows fall out of trucks. Tsunamis strike and buildings explode.

The automakers have to deal with those same edge cases, and the human may not be of much help in a split-second situation. But the timeline is different: Automakers acknowledge this problem, but they’re moving slowly and carefully. Google plans to have everything figured out in just a few years, which makes the challenge that much harder to overcome.

[…]

The deadly crash of Asiana Airlines Flight 214 at San Francisco International Airport in July 2013 highlights a lesson from the aviation industry. The airport’s glide slope indicator, which helps line up the plane for landing, wasn’t functioning, so the pilots were told to use visual approaches. The crew was experienced and skilled, but rarely flew the Boeing 777 manually, Bloomberg reported. The plane came in far too low and slow, hitting the seawall that separates the airport from the bay. The pilots “mismanaged the airplane’s descent,” the National Transportation Safety Board found.

Asiana, in turn, blamed badly designed software. “There were inconsistencies in the aircraft’s automation logic that led to the unexpected disabling of airspeed protection without adequate warning to the flight crew,” it said in a filing to the NTSB. “The low airspeed alerting system did not provide adequate time for recovery; and air traffic control instructions and procedures led to an excessive pilot workload during the final approach.”

Ref: Google’s Plan to Eliminate Human Driving in 5 Years – Wired

Geneva Meeting – Killer Robots

Many of the 120 states that are party to the Convention on Conventional Weapons (CCW) are participating in the 2015 CCW meeting of experts, which is chaired by Germany’s Ambassador Michael Biontino, who has enlisted “friends of the chair” from Albania, Chile, Hungary, Finland, Sierra Leone, South Korea, Sri Lanka, and Switzerland to chair thematic sessions on a range of technical, legal, and overarching issues, including ethics and human rights.

[…]

The session proved to be one of the most engaging thus far at the 2015 experts meeting. The UK made a detailed intervention that included the statement that it “does not believe there would be any utility in a fully autonomous weapon system.” France said it has no plans for autonomous weapons that deploy fire; it relies entirely on humans for fire decisions.

Three states that have explicitly endorsed the call for a preemptive ban on lethal autonomous weapons systems have reiterated that goal at this meeting (Cuba, Ecuador, and Pakistan), while Sri Lanka said a prohibition must be considered. There have been numerous references to the CCW’s 1995 protocol banning blinding lasers, which preemptively banned the weapon before it was ever fielded or used.

Ref: Second multilateral meeting opens – Campaign to Stop Killer Robots

Can We Trust Robot Cars to Make Hard Choices?

However, as humans, we also do something else when faced with hard decisions: In particularly ambiguous situations, when no choice is obviously best, we choose and justify our decision with a reason. Most of the time we are not aware of this, but it comes out when we have to make particularly hard decisions.

[…]

Critically, she says, when we make our decision, we get to justify it with a reason.

Whether we prefer beige or fluorescent colors, the countryside or a certain set of job activities—these are not objectively measurable. There is no ranking system anywhere that says beige is better than pink and that living in the countryside is better than a certain job. If there were, all humans would be making the same decisions. Instead, we each invent reasons to make our decisions (and when societies do this together, we create our laws, social norms, and ethical systems).

But a machine could never do this…right? You’d be surprised. Google recently announced, for example, that it had built an AI that can learn and master video games. The program isn’t given commands but instead plays games again and again, learning from experience. Some have speculated that such a development would be useful for a robot car.

How might this work?

Instead of making a random decision, outsourcing its decision, or reverting to pre-programmed values, a robot car could instead scour the cloud, processing immense amounts of data and patterns based on local laws, past legal rulings, the values of the people and society around it, and the consequences it observes from various other similar decision-making processes over time. In short, robot cars, like humans, would use experience to invent their own reasons.

What is fascinating about Chang’s talk is that she says when humans engage in such a reckoning process—of inventing and choosing one’s reasons during hard times—we view it as one of the highest forms of human development.

Asking others to make decisions for us, or leaving life to chance, is a form of drifting. But inventing and choosing our own reasons during hard times is referred to as building one’s character, taking a stand, taking responsibility for one’s own actions, defining who one is, and becoming the author of one’s own life.

 

Ref: Can We Trust Robot Cars to Make Hard Choices? – SingularityHub

Google and Elon Musk to Decide What Is Good for Humanity

The recently published Future of Life Institute (FLI) letter, “Research Priorities for Robust and Beneficial Artificial Intelligence,” signed by hundreds of AI researchers in addition to Elon Musk and Stephen Hawking, many representing government regulators and some sitting on committees with names like “Presidential Panel on Long Term AI future,” offers a program professing to protect mankind from the threat of “super-intelligent AIs.”

[…]

Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem gets much more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):

– Statements like “There is a broad consensus that AI research is progressing steadily,” or even “progressing dramatically” (Google Brain signatories on the FLI web site), are just not true. In the last 50 years there has been very little AI progress (more stasis-like than “steady”) and not a single major AI-based breakthrough commercial product, unless you count the iPhone’s infamous Siri. In short, despite the overwhelming media push, AI simply does not work.

– “AI systems must do what we want them to do” begs the question of who “we” is. There are 92 references included in this letter, all of them from computer scientists, AI researchers, and political scientists; there are many references to an approaching, civilization-threatening “singularity” and several references to possibilities for “mind uploading,” but not a single reference from a biologist or a neural scientist. To call such an approach to the study of intellect “interdisciplinary” is just not credible.

– “Identify research directions that can maximize societal benefits” is outright chilling. Again, who decides whether research is “socially desirable?”

– “AI super-intelligence will not act in accordance with human wishes and will threaten humanity” is just a cover justifying the AI group’s attempted power grab over competing approaches to the study of intellect.

[…]

AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a Von Neumann CPU. As Google Brain AI researcher and FLI letter signatory Ilya Sutskever recently told me, “[The] brain absolutely is just a CPU and further study of brain would be a waste of my time.” This is an almost word-for-word repetition of a famous statement Noam Chomsky made decades ago, “predicting” the existence of a language “generator” in the brain.

FLI letter signatories say: Do not worry, “we” will allow “good” AI and “identify research directions” in order to maximize societal benefits and eradicate diseases and poverty. I believe that it would be precisely the newly emerging neural science groups that would suffer if AI is allowed to regulate research directions in this field. Why should “evidence” like this allow AI scientists to control what biologists and neural scientists can and cannot do?

Ref: Google and Elon Musk to Decide What Is Good for Humanity – Wired