
Why Self-Driving Cars Must Be Programmed to Kill

And that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? (See also “How to Help Self-Driving Cars Make Ethical Decisions.”)

The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?

So can science help? Today, we get an answer of sorts thanks to the work of Jean-Francois Bonnefon at the Toulouse School of Economics in France and a couple of pals. These guys say that even though there is no right or wrong answer to these questions, public opinion will play a strong role in how, or even whether, self-driving cars become widely accepted.

So they set out to discover the public’s opinion using the new science of experimental ethics. This involves posing ethical dilemmas to a large number of people to see how they respond. And the results make for interesting, if somewhat predictable, reading. “Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles,” they say.

Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
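
To make the utilitarian option concrete, here is a minimal sketch of what a strictly casualty-minimizing decision rule could look like in code. Everything in it (the function name, the option labels, the casualty estimates) is an illustrative assumption, not something described in the article or used by any real vehicle.

# Purely illustrative utilitarian decision rule: pick the manoeuvre with the
# fewest expected deaths. Names and numbers are hypothetical.

def choose_manoeuvre(options):
    """Return the option with the lowest expected number of deaths."""
    return min(options, key=lambda option: option["expected_deaths"])

options = [
    {"name": "continue straight", "expected_deaths": 10},  # hits the crowd
    {"name": "swerve into wall", "expected_deaths": 1},    # kills the occupant
]

print(choose_manoeuvre(options)["name"])  # -> swerve into wall

A rule this blunt is exactly what the next paragraphs wrestle with: defensible in the abstract, but hard to sell to the person sitting in the car.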

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

[…]

So these guys posed these kinds of ethical dilemmas to several hundred workers on Amazon’s Mechanical Turk to find out what they thought. The participants were given scenarios in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its occupant or a pedestrian.

At the same time, the researchers varied some of the details such as the actual number of pedestrians that could be saved, whether the driver or an on-board computer made the decision to swerve and whether the participants were asked to imagine themselves as the occupant or an anonymous person.

The results are interesting, if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.

This utilitarian approach is certainly laudable but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.

 

Ref: Why Self-Driving Cars Must Be Programmed to Kill – MIT Technology Review

Should a Driverless Car Decide Who Lives or Dies?

The industry is promising a glittering future of autonomous vehicles moving in harmony like schools of fish. That can’t happen, however, until carmakers answer the kinds of thorny philosophical questions explored in science fiction since Isaac Asimov wrote his robot series last century. For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?

Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of gray. Ford, General Motors, Audi, Renault and Toyota are all beating a path to Stanford University’s Center for Automotive Research, which is programming cars to make ethical decisions and seeing what happens.

“This issue is definitely in the crosshairs,” says Chris Gerdes, who runs the lab and recently met with the chief executives of Ford and GM to discuss the topic. “They’re very aware of the issues and the challenges because their programmers are actively trying to make these decisions today.”

[…]

That’s why we shouldn’t leave those decisions up to robots, says Wendell Wallach, author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.”

“The way forward is to create an absolute principle that machines do not make life and death decisions,” says Wallach, a scholar at the Interdisciplinary Center for Bioethics at Yale University. “There has to be a human in the loop. You end up with a pretty lawless society if people think they won’t be held responsible for the actions they take.”

Ref: Should a Driverless Car Decide Who Lives or Dies? – Bloomberg

Natural Police

To defeat corruption, we need to understand why it arises in the first place. For that, we need game theory. A ‘game’ is a stylised scenario in which each player receives a pay‑off determined by the strategies chosen by all players. There’s also a variant of game theory that deals with so-called evolutionary games. In that kind of scenario, we imagine a population of self-reproducing strategies that get to multiply depending on the pay‑offs they achieve. A strategy is said to be ‘evolutionarily stable’ if, once it is widely adopted, no rival can spread by natural selection.

The archetypal co‑operation game is the Prisoner’s Dilemma. Imagine that two prisoners, each held in isolation, are given a chance to rat on the other. If only one takes the bait, he gets a reduced prison sentence while the other gets a longer one. But if both take it, neither gets a reduction. In other words, mutual co‑operation (saying nothing) provides a higher reward than mutual defection (ratting on your partner), but the best reward comes from defecting while your partner tries to co‑operate with you, while the lowest pay‑off comes from trying to co‑operate with your partner while he stabs you in the back.

The most obvious evolutionarily stable strategy in this game is simple: always defect. If your partner co‑operates, you exploit his naïveté, and if he defects, you will still do better than if you had co‑operated. So there is no possible strategy that can defeat the principle ‘always act like an untrusting jerk’.
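
The argument is easier to see with the payoff structure written out. Below is a small sketch using the conventional Prisoner’s Dilemma ordering (temptation > mutual cooperation > mutual defection > sucker’s payoff); the specific numbers are standard illustrative values, not figures taken from the essay.

# Conventional Prisoner's Dilemma payoffs (illustrative values only):
# temptation (5) > mutual cooperation (3) > mutual defection (1) > sucker (0).
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(partner_move):
    """Whichever of my moves pays more against the partner's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFF[(my_move, partner_move)])

# Defecting beats cooperating against either choice the partner makes,
# so in the one-shot game no rival strategy can invade 'always defect'.
for partner_move in ("cooperate", "defect"):
    print(partner_move, "->", best_response(partner_move))  # both print 'defect'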

At this point, you could be forgiven for thinking that game theory is both appalling and ridiculous. Co‑operation clearly pays off. Indeed, if you make normal people (ie people who are not economics students) play the Prisoner’s Dilemma, they almost never defect. And not just people. Rats will go out of their way to free a trapped cage-mate; rhesus monkeys will starve for days rather than shock a companion. Even bacteria are capable of supreme acts of altruism.

This trend toward biological niceness has been something of an embarrassment for biology. In fact, the task of finding ways around the more dismal conclusions of game theory has become a sub-disciplinary cottage industry. In the Prisoner’s Dilemma, for example, it turns out that when players are allowed to form relationships, co‑operators can beat defectors simply by avoiding them. That’s fine in small societies, but it leaves us with the problem of co‑operation in large groups, where interactions among strangers are inevitable.

Game theory (as well as common sense) tells us that policing can help. Just grant some individuals the power and inclination to punish defectors and the attractions of cheating immediately look less compelling. This is a good first pass at a solution: not for nothing do we find police-like entities among ants, bees, wasps, and within our own bodies. But that just leads us back to the problem of corruption. What happens if the police themselves become criminals, using their unusual powers for private profit? Who watches the watchers?
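
A back-of-the-envelope sketch shows how punishment tips the balance; the payoffs, detection probability and fine below are made-up numbers for illustration, not parameters from the essay or from the ‘Power and Corruption’ model.

# Illustrative only: how a chance of being punished can make defection
# unprofitable. All numbers are assumptions chosen for the example.
reward_cooperate = 3     # payoff for mutual cooperation
temptation_defect = 5    # payoff for defecting against a cooperator
p_caught = 0.5           # probability a punisher spots the defection
fine = 6                 # cost imposed on a caught defector

expected_defect = temptation_defect - p_caught * fine   # 5 - 0.5 * 6 = 2
print(expected_defect < reward_cooperate)               # True: cheating no longer pays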

In 2010, two researchers at the University of Tennessee built a game-theoretical model to examine just this problem. The results, published by Francisco Úbeda and Edgar Duéñez-Guzmán in a paper called ‘Power and Corruption’, were, frankly, depressing. Nothing, they concluded, would stop corruption from dominating an evolving police system. Once it arose, it would remain stable under almost any circumstances. The only silver lining was that the bad police could still suppress defection in the rest of society. The result was a mixed population of gullible sheep and hypocritical overlords. Net wellbeing does end up somewhat higher than it would be if everyone acted entirely selfishly, but all in all you end up with a society rather like that of the tree wasps.

Ref: Natural Police – Aeon

Google’s Plan to Eliminate Human Driving in 5 Years

There are three significant downsides to Google’s approach. First, the goal of delivering a car that only drives itself raises the difficulty bar. There’s no human backup, so the car had better be able to handle every situation it encounters. That’s what Google calls “the .001 percent of things that we need to be prepared for even if we’ve never seen them before in our real world driving.” And if dash cam videos teach us anything, it’s that our roads are crazy places. People jump onto highways. Cows fall out of trucks. Tsunamis strike and buildings explode.

The automakers have to deal with those same edge cases, and the human may not be of much help in a split second situation. But the timeline is different: Automakers acknowledge this problem, but they’re moving slowly and carefully. Google plans to have everything figured out in just a few years, which makes the challenge that much harder to overcome.

[…]

The deadly crash of Asiana Airlines Flight 214 at San Francisco International Airport in July 2013 highlights a lesson from the aviation industry. The airport’s glide slope indicator, which helps line up the plane for landing, wasn’t functioning, so the pilots were told to use visual approaches. The crew was experienced and skilled, but rarely flew the Boeing 777 manually, Bloomberg reported. The plane came in far too low and slow, hitting the seawall that separates the airport from the bay. The pilots “mismanaged the airplane’s descent,” the National Transportation Safety Board found.

Asiana, in turn, blamed badly designed software. “There were inconsistencies in the aircraft’s automation logic that led to the unexpected disabling of airspeed protection without adequate warning to the flight crew,” it said in a filing to the NTSB. “The low airspeed alerting system did not provide adequate time for recovery; and air traffic control instructions and procedures led to an excessive pilot workload during the final approach.”

Ref: Google’s Plan to Eliminate Human Driving in 5 Years – Wired

The Cybersyn Revolution

The state plays an important role in shaping the relationship between labor and technology, and can push for the design of systems that benefit ordinary people. It can also have the opposite effect. Indeed, the history of computing in the US context has been tightly linked to government command, control, and automation efforts.

But it does not have to be this way. Consider how the Allende government approached the technology-labor question in the design of Project Cybersyn. Allende made raising employment central both to his economic plan and his overall strategy to help Chileans. His government pushed for new forms of worker participation on the shop floor and the integration of worker knowledge in economic decision-making.

This political environment allowed Beer, the British cybernetician assisting Chile, to view computer technology as a way to empower workers. In 1972, he published a report for the Chilean government that proposed giving Chilean workers, not managers or government technocrats, control of Project Cybersyn. More radically, Beer envisioned a way for Chile’s workers to participate in Cybersyn’s design.

He recommended that the government allow workers — not engineers — to build the models of the state-controlled factories because they were best qualified to understand operations on the shop floor. Workers would thus help design the system that they would then run and use. Allowing workers to use both their heads and their hands would limit how alienated they felt from their labor.

[…]

But Beer showed an ability to envision how computerization in a factory setting might work toward an end other than speed-ups and deskilling — the results of capitalist development that labor scholars such as Harry Braverman witnessed in the United States, where the government did not have the same commitment to actively limiting unemployment or encouraging worker participation.

[…]

We need to be thinking in terms of systems rather than technological quick fixes. Discussions about smart cities, for example, regularly focus on better network infrastructures and the use of information and communication technologies such as integrated sensors, mobile phone apps, and online services. Often, the underlying assumption is that such interventions will automatically improve the quality of urban life by making it easier for residents to access government services and provide city government with data to improve city maintenance.

But this technological determinism doesn’t offer a holistic understanding of how such technologies might negatively impact critical aspects of city life. For example, the sociologist Robert Hollands argues that tech-centered smart-city initiatives might create an influx of technologically literate workers and exacerbate the displacement of other workers. They also might divert city resources to the building of computer infrastructures and away from other important areas of city life.

[…]

We must resist the kind of apolitical “innovation determinism” that sees the creation of the next app, online service, or networked device as the best way to move society forward. Instead, we should push ourselves to think creatively of ways to change the structure of our organizations, political processes, and societies for the better and about how new technologies might contribute to such efforts.

 

Ref: The Cybersyn Revolution – Jacobin

The World’s First Self-Driving Semi-Truck Hits the Road

The truck in question is the Freightliner Inspiration, a teched-up version of the Daimler 18-wheeler sold around the world. And according to Daimler, which owns Mercedes-Benz, it will make long-haul road transportation safer, cheaper, and better for the planet.

[…]

Humans Don’t Want These Jobs

Another point in favor of giving robots control is the serious and worsening shortage of humans willing to take the wheel. The lack of qualified drivers has created a “capacity crisis,” according to an October 2014 report by the American Transportation Research Institute. The American Trucking Associations predicts the industry could be short 240,000 drivers by 2022. (There are roughly three million full-time drivers in the US.)

[…]

Killing the Human Driver

The way to handle that growth isn’t to convince more people to become long haul truckers. It’s to reduce, and eventually eliminate, the role of the human. Let the trucks drive themselves, and you can improve safety, meet increased demand, and save time and fuel.

The safety benefits of autonomous features are obvious. The machine doesn’t get tired, stressed, angry, or distracted. And because trucks spend the vast majority of their time on the highway, the tech doesn’t have to clear the toughest hurdle: handling complex urban environments with pedestrians, cyclists, and the like. If you can prove the vehicles are safer, you could make them bigger, and thus more efficient at transporting all the crap we buy on Amazon.

[…]

The end game is eliminating the need for human drivers, at least for highway driving. (An autonomous truck could exit the interstate near the end of its journey, park in a designated lot, and wait for a human to come drive it on surface streets to its destination.)

// Interesting comments

The driver shortage is partly due to pay and benefits. If you want a driver to be away from his/her family for weeks at a time, you have to pay them enough to make it worth the loss of family time. It’s also partly due to unrealistic expectations for delivery times by dispatchers, which adds a lot of stress to a job that already has enough of it. So yeah, I can see where companies would love a driverless semi, because it would free them from having to weigh those human and personal considerations. I haul fuel locally, so there’s not much chance of this technology replacing me, but I hate to see more jobs lost.

There are 3.5 million truck drivers in the US alone, not to mention countless other transportation-related jobs. Those are mostly average to decent-paying jobs. Think for a second about the far-reaching consequences of eliminating these jobs and the secondary jobs that depend on them. Further, we are looking at the elimination of almost every human-related job in the next 25 years. Do you truly feel that is good progress? Is it humane and progressive to live in a world where less than 0.1% of people enslave the rest?

Ferguson or Baltimore is not a fluke… it’s not just about racial tension. It is the fabric of our society starting to tear. Where people feel powerless and disenfranchised, the only option left to be heard is often violence. What’s happening there is just the beginning of what is about to come next.

One thing that has always bothered me is that they always say “A truck can’t stop as fast as a car can,” and yet we accept that excuse for a ratio of tires, weight, and lives lost due to inadequate braking. Everything has improved, but we have stopped making progress in trying to stop a loaded truck faster.

Imagine telling the public the truth: it’s too expensive to add tires to cut braking distance, or to haul lighter loads (or to use trains).

 

Ref: The World’s First Self-Driving Semi-Truck Hits the Road – Wired

Americans Want Self-Driving Cars for the Cheaper Insurance

Of the 1,500 US drivers the Boston Consulting Group surveyed in September, 55 percent said they “likely” or “very likely” would buy a semi-autonomous car (one capable of handling some, but not all, highway and urban traffic). What’s more, 44 percent said they would, in 10 years, buy a fully autonomous vehicle.

What’s most surprising about the survey isn’t that so many people are interested in this technology, but why they’re interested.

The leading reason people are considering semi-autonomous vehicles isn’t greater safety, improved fuel efficiency, or increased productivity—the upsides most frequently associated with the technology. Such things were factors, but the biggest appeal was lower insurance costs. Safety was the leading reason people were interested in a fully autonomous ride, with cheaper insurance costs in second place.

[…]

That’s why “a vast number of insurance companies” are exploring discounts for those semiautonomous features, Mosquet says. For example, drivers who purchase a new Volvo with the pedestrian protection tech qualify for a lower premium. “The cost to [the insurer] of pedestrian accidents is actually significant, and they’re going to do everything they can to reduce this type of incident.” That’s already started in Europe and is spreading to the US.

Ref: Americans Want Self-Driving Cars for the Cheaper Insurance – Wired

Google and Elon Musk to Decide What Is Good for Humanity

THE RECENTLY PUBLISHED Future of Life Institute (FLI) letter “Research Priorities for Robust and Beneficial Artificial Intelligence,” signed by hundreds of AI researchers in addition to Elon Musk and Stephen Hawking, many representing government regulators and some sitting on committees with names like “Presidential Panel on Long Term AI future,” offers a program professing to protect mankind from the threat of “super-intelligent AIs.”

[…]

Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem gets much more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):

– Statements like: “There is a broad consensus that AI research is progressing steadily,” even “progressing dramatically” (Google Brain signatories on the FLI web site), are just not true. In the last 50 years there has been very little AI progress (more stasis-like than “steady”) and not a single major AI-based breakthrough commercial product, unless you count the iPhone’s infamous Siri. In short, despite the overwhelming media push, AI simply does not work.

– “AI systems must do what we want them to do” begs the question of who “we” is. There are 92 references included in this letter, all of them from CS, AI and political scientists; there are many references to an approaching, civilization-threatening “singularity” and several references to possibilities for “mind uploading,” but not a single reference from a biologist or a neural scientist. To call such an approach to the study of intellect “interdisciplinary” is just not credible.

– “Identify research directions that can maximize societal benefits” is outright chilling. Again, who decides whether research is “socially desirable?”

– “AI super-intelligence will not act in accordance with human wishes and will threaten humanity” is just a cover to justify the attempted power grab by the AI group over competing approaches to the study of intellect.

[…]

AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a von Neumann CPU. As Google Brain AI researcher and FLI letter signatory Ilya Sutskever recently told me, “[The] brain absolutely is just a CPU and further study of brain would be a waste of my time.” This is an almost word-for-word repetition of a famous statement Noam Chomsky made decades ago, “predicting” the existence of a language “generator” in the brain.

FLI letter signatories say: do not worry, “we” will allow “good” AI and “identify research directions” in order to maximize societal benefits and eradicate diseases and poverty. I believe that it would be precisely the newly emerging neural science groups that would suffer if AI researchers are allowed to regulate research directions in this field. Why should “evidence” like this allow AI scientists to control what biologists and neural scientists can and cannot do?

Ref: Google and Elon Musk to Decide What Is Good for Humanity – Wired

The Ethical Dangers of AI

The AI community has begun to take the downside risk of AI very seriously. I attended a Future of AI workshop in January of 2015 in Puerto Rico sponsored by the Future of Life Institute. The ethical consequences of AI were front and center. There are four key thrusts the AI community is focusing research on to get better outcomes with future AIs:

Verification – Research into methods of guaranteeing that the systems we build actually meet the specifications we set.

Validation – Research into ensuring that the specifications, even if met, do not result in unwanted behaviors and consequences.

Security – Research on building systems that are increasingly difficult to tamper with – internally or externally.

Control – Research to ensure that we can interrupt AI systems (even with other AIs) if and when something goes wrong, and get them back on track.

These aren’t just philosophical or ethical considerations; they are system design issues. I think we’ll see a greater focus on these kinds of issues not just in AI, but in software generally, as we develop systems with more power and complexity.
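
As one purely illustrative example of treating these as design issues rather than philosophy, here is a toy sketch of the “control” thrust: an autonomous loop that can be interrupted from outside. The structure and names are assumptions made for the sake of the example, not anything described in the interview.

# Toy illustration of the "control" thrust: the loop checks an external
# interrupt flag on every step so a human or supervising process can stop it.
import threading
import time

stop_requested = threading.Event()  # set by an operator or supervising process

def autonomous_loop(max_steps=100):
    for step in range(max_steps):
        if stop_requested.is_set():           # control: honour the interrupt
            print(f"interrupted at step {step}")
            return
        time.sleep(0.01)                      # stand-in for one unit of work
    print("finished without interruption")

worker = threading.Thread(target=autonomous_loop)
worker.start()
time.sleep(0.05)           # let it run briefly
stop_requested.set()       # operator pulls the plug
worker.join()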

Will AIs ever be completely risk free? I don’t think so. Humans are not risk free! There is a predator/prey aspect to this in terms of malicious groups who choose to develop these technologies in harmful ways. However, the vast majority of people, including researchers and developers in AI, are not malicious. Most of the world’s intellect and energy will be spent on building society up, not tearing it down. In spite of this, we need to do a better job anticipating the potential consequences of our technologies, and being proactive about creating the outcomes that improve human health and the environment. That is a particular challenge with AI technology that can improve itself. Meeting this challenge will make it much more likely that we can succeed in reaching for the stars.

Ref: Interview: Neil Jacobstein Discusses Future of Jobs, Universal Basic Income and the Ethical Dangers of AI – SingularityHub