
Why Self-Driving Cars Must Be Programmed to Kill

And that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? (See also “How to Help Self-Driving Cars Make Ethical Decisions.”)

The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?

So can science help? Today, we get an answer of sorts thanks to the work of Jean-Francois Bonnefon at the Toulouse School of Economics in France and a couple of pals. These guys say that even though there is no right or wrong answer to these questions, public opinion will play a strong role in how, or even whether, self-driving cars become widely accepted.

So they set out to discover the public’s opinion using the new science of experimental ethics. This involves posing ethical dilemmas to a large number of people to see how they respond. And the results make for interesting, if somewhat predictable, reading. “Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles,” they say.

Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
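
To make that rule concrete, here is a minimal sketch in Python of a purely utilitarian decision function. The function name, the manoeuvre labels, and the casualty estimates are all hypothetical, chosen to mirror the dilemma above; this is not any manufacturer’s actual logic.

    # Hypothetical sketch of a utilitarian collision policy.
    # Maneuver names and casualty estimates are illustrative only.
    def choose_maneuver(options):
        """Pick the maneuver with the fewest expected deaths.

        options: dict mapping maneuver name -> expected deaths.
        """
        return min(options, key=options.get)

    # The dilemma from the text: continue into the crowd (10 deaths)
    # or swerve into the wall (1 death, the occupant's).
    options = {"continue_straight": 10, "swerve_into_wall": 1}
    print(choose_maneuver(options))  # -> swerve_into_wall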

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

[…]

So these guys posed these kinds of ethical dilemmas to several hundred workers on Amazon’s Mechanical Turk to find out what they thought. The participants were given scenarios in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its occupant or a pedestrian.

At the same time, the researchers varied some of the details such as the actual number of pedestrians that could be saved, whether the driver or an on-board computer made the decision to swerve and whether the participants were asked to imagine themselves as the occupant or an anonymous person.
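
Read as an experimental design, those variations form a small factorial grid. Here is a sketch of how such a grid might be enumerated; the factor names and levels below are paraphrased from this description, not taken from the study’s actual materials.

    # Sketch of the factorial scenario grid described above.
    # Factor levels are illustrative, not the study's materials.
    from itertools import product

    pedestrians_saved = [1, 5, 10]                  # how many could be saved
    decision_maker = ["human driver", "on-board computer"]
    perspective = ["you are the occupant", "anonymous occupant"]

    scenarios = list(product(pedestrians_saved, decision_maker, perspective))
    print(len(scenarios))  # 3 * 2 * 2 = 12 scenario variants
    for n, who, view in scenarios[:3]:
        print(f"Save {n} pedestrian(s); {who} decides; {view}.")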

The results are interesting, if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.

This utilitarian approach is certainly laudable but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.


Ref: Why Self-Driving Cars Must Be Programmed to Kill – MIT Technology Review

Hackers Remotely Kill a Jeep on the Highway

The Jeep’s strange behavior wasn’t entirely unexpected. I’d come to St. Louis to be Miller and Valasek’s digital crash-test dummy, a willing subject on whom they could test the car-hacking research they’d been doing over the past year. The result of their work was a hacking technique—what the security industry calls a zero-day exploit—that can target Jeep Cherokees and give the attacker wireless control, via the Internet, of any of thousands of vehicles. Their code is an automaker’s nightmare: software that lets hackers send commands through the Jeep’s entertainment system to its dashboard functions, steering, brakes, and transmission, all from a laptop that may be across the country.

[…]

Immediately my accelerator stopped working. As I frantically pressed the pedal and watched the RPMs climb, the Jeep lost half its speed, then slowed to a crawl. This occurred just as I reached a long overpass, with no shoulder to offer an escape. The experiment had ceased to be fun.

At that point, the interstate began to slope upward, so the Jeep lost more momentum and barely crept forward. Cars lined up behind my bumper before passing me, honking. I could see an 18-wheeler approaching in my rearview mirror. I hoped its driver saw me, too, and could tell I was paralyzed on the highway.

[…]

All of this is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone. Uconnect, an Internet-connected computer feature in hundreds of thousands of Fiat Chrysler cars, SUVs, and trucks, controls the vehicle’s entertainment and navigation, enables phone calls, and even offers a Wi-Fi hot spot. And thanks to one vulnerable element, which Miller and Valasek won’t identify until their Black Hat talk, Uconnect’s cellular connection also lets anyone who knows the car’s IP address gain access from anywhere in the country. “From an attacker’s perspective, it’s a super nice vulnerability,” Miller says.

Ref: Hackers Remotely Kill a Jeep on the Highway—With Me in It – Wired


Inside the Fake Town Built Just for Self-Driving Cars

“Mcity,” which officially opened Monday, is a 32-acre faux metropolis designed specifically to test automated and connected vehicle tech. It’s got several miles of two-, three-, and four-lane roads, complete with intersections, traffic signals, and signs. Benches and streetlights line the sidewalks separating building facades from the streets. It’s like an elaborate Hollywood set.

[…]

This is about more than safety, too. Mcity allows engineers to test a wide range of conditions that aren’t easily created in the wild. They can test vehicles on different surfaces (like brick, dirt, and grass) and see how their systems handle roundabouts and underpasses. They can erect construction barriers, spray graffiti on road signs, and work with faded lane lines, to see how autonomous tech reacts to real-world conditions.

[…]

Such a site is a great tool, but the technology must also prove itself on public roads. A simulated environment has a fundamental limitation: You can only test situations you think up. Experience—and dash cams—have taught us our roads can be crazy in ways we never think to expect. Sinkholes can appear in the road, tsunamis can rage across the land, roadside buildings can collapse and send debris flying. Humans can be even harder to anticipate. Even everyday actions, the things we do almost subconsciously, can be difficult for an autonomous system to predict.

Ref: Inside the Fake Town Built Just for Self-Driving Cars – Wired

Should a Driverless Car Decide Who Lives or Dies?

The industry is promising a glittering future of autonomous vehicles moving in harmony like schools of fish. That can’t happen, however, until carmakers answer the kinds of thorny philosophical questions explored in science fiction since Isaac Asimov wrote his robot series last century. For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?

Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of gray. Ford, General Motors, Audi, Renault and Toyota are all beating a path to Stanford University’s Center for Automotive Research, which is programming cars to make ethical decisions and seeing what happens.

“This issue is definitely in the crosshairs,” says Chris Gerdes, who runs the lab and recently met with the chief executives of Ford and GM to discuss the topic. “They’re very aware of the issues and the challenges because their programmers are actively trying to make these decisions today.”

[…]

That’s why we shouldn’t leave those decisions up to robots, says Wendell Wallach, author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.”

“The way forward is to create an absolute principle that machines do not make life and death decisions,” says Wallach, a scholar at the Interdisciplinary Center for Bioethics at Yale University. “There has to be a human in the loop. You end up with a pretty lawless society if people think they won’t be held responsible for the actions they take.”

Ref: Should a Driverless Car Decide Who Lives or Dies? – Bloomberg

Natural Police

To defeat corruption, we need to understand why it arises in the first place. For that, we need game theory. A ‘game’ is a stylised scenario in which each player receives a pay‑off determined by the strategies chosen by all players. There’s also a variant of game theory that deals with so-called evolutionary games. In that kind of scenario, we imagine a population of self-reproducing strategies that get to multiply depending on the pay‑offs they achieve. A strategy is said to be ‘evolutionarily stable’ if, once it is widely adopted, no rival can spread by natural selection.

The archetypal co‑operation game is the Prisoner’s Dilemma. Imagine that two prisoners, each held in isolation, are given a chance to rat on the other. If only one takes the bait, he gets a reduced prison sentence while the other gets a longer one. But if both take it, neither gets a reduction. In other words, mutual co‑operation (saying nothing) provides a higher reward than mutual defection (ratting on your partner), but the best reward comes from defecting while your partner tries to co‑operate with you, while the lowest pay‑off comes from trying to co‑operate with your partner while he stabs you in the back.

The most obvious evolutionarily stable strategy in this game is simple: always defect. If your partner co‑operates, you exploit his naïveté, and if he defects, you will still do better than if you had co‑operated. So there is no possible strategy that can defeat the principle ‘always act like an untrusting jerk’.
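
Written out as a payoff matrix, the dominance of defection is easy to verify. Below is a minimal sketch using the conventional illustrative payoffs (temptation 5, mutual co-operation 3, mutual defection 1, sucker’s payoff 0); the numbers are standard textbook values, not taken from the article.

    # One-shot Prisoner's Dilemma with conventional illustrative payoffs
    # (T=5 temptation, R=3 mutual co-operation, P=1 mutual defection,
    #  S=0 sucker's payoff). Keys: (my move, partner's move).
    PAYOFF = {
        ("cooperate", "cooperate"): 3,  # R
        ("cooperate", "defect"):    0,  # S
        ("defect",    "cooperate"): 5,  # T
        ("defect",    "defect"):    1,  # P
    }

    for partner in ("cooperate", "defect"):
        best = max(("cooperate", "defect"), key=lambda me: PAYOFF[(me, partner)])
        print(f"If partner plays {partner!r}, my best reply is {best!r}")
    # Defection is the best reply in both cases, so 'always defect'
    # cannot be invaded: it is evolutionarily stable.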

At this point, you could be forgiven for thinking that game theory is both appalling and ridiculous. Co‑operation clearly pays off. Indeed, if you make normal people (ie people who are not economics students) play the Prisoner’s Dilemma, they almost never defect. And not just people. Rats will go out of their way to free a trapped cage-mate; rhesus monkeys will starve for days rather than shock a companion. Even bacteria are capable of supreme acts of altruism.

This trend toward biological niceness has been something of an embarrassment for biology. In fact, the task of finding ways around the more dismal conclusions of game theory has become a sub-disciplinary cottage industry. In the Prisoner’s Dilemma, for example, it turns out that when players are allowed to form relationships, co‑operators can beat defectors simply by avoiding them. That’s fine in small societies, but it leaves us with the problem of co‑operation in large groups, where interactions among strangers are inevitable.

Game theory (as well as common sense) tells us that policing can help. Just grant some individuals the power and inclination to punish defectors and the attractions of cheating immediately look less compelling. This is a good first pass at a solution: not for nothing do we find police-like entities among ants, bees, wasps, and within our own bodies. But that just leads us back to the problem of corruption. What happens if the police themselves become criminals, using their unusual powers for private profit? Who watches the watchers?
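
To see in miniature how policing changes the arithmetic, one can subtract an expected fine from the defection payoffs and re-check the best replies. This is only a toy sketch with made-up probability and fine values, not the model discussed below.

    # Sketch: how policing changes the one-shot Prisoner's Dilemma.
    # A defector is caught with probability p and pays fine f; the
    # numbers are illustrative, not from any published model.
    T, R, P, S = 5, 3, 1, 0   # conventional PD payoffs

    def best_reply(p, f):
        payoff = {
            ("cooperate", "cooperate"): R,
            ("cooperate", "defect"):    S,
            ("defect",    "cooperate"): T - p * f,
            ("defect",    "defect"):    P - p * f,
        }
        return {partner: max(("cooperate", "defect"),
                             key=lambda me: payoff[(me, partner)])
                for partner in ("cooperate", "defect")}

    print(best_reply(p=0.0, f=10))  # no policing: defect dominates
    print(best_reply(p=0.5, f=10))  # expected fine 5: co-operation wins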

In 2010, two researchers at the University of Tennessee built a game-theoretical model to examine just this problem. The results, published by Francisco Úbeda and Edgar Duéñez-Guzmán in a paper called ‘Power and Corruption’, were, frankly, depressing. Nothing, they concluded, would stop corruption from dominating an evolving police system. Once it arose, it would remain stable under almost any circumstances. The only silver lining was that the bad police could still suppress defection in the rest of society. The result was a mixed population of gullible sheep and hypocritical overlords. Net wellbeing does end up somewhat higher than it would be if everyone acted entirely selfishly, but all in all you end up with a society rather like that of the tree wasps.

Ref: Natural police – Aeon

Google’s Plan to Eliminate Human Driving in 5 Years

There are three significant downsides to Google’s approach. First, the goal of delivering a car that only drives itself raises the difficulty bar. There’s no human backup, so the car had better be able to handle every situation it encounters. That’s what Google calls “the .001 percent of things that we need to be prepared for even if we’ve never seen them before in our real world driving.” And if dash cam videos teach us anything, it’s that our roads are crazy places. People jump onto highways. Cows fall out of trucks. Tsunamis strike and buildings explode.

The automakers have to deal with those same edge cases, and the human may not be of much help in a split second situation. But the timeline is different: Automakers acknowledge this problem, but they’re moving slowly and carefully. Google plans to have everything figured out in just a few years, which makes the challenge that much harder to overcome.

[…]

The deadly crash of Asiana Airlines Flight 214 at San Francisco International Airport in July 2013 highlights a lesson from the aviation industry. The airport’s glide slope indicator, which helps line up the plane for landing, wasn’t functioning, so the pilots were told to use visual approaches. The crew was experienced and skilled, but rarely flew the Boeing 777 manually, Bloomberg reported. The plane came in far too low and slow, hitting the seawall that separates the airport from the bay. The pilots “mismanaged the airplane’s descent,” the National Transportation Safety Board found.

Asiana, in turn, blamed badly designed software. “There were inconsistencies in the aircraft’s automation logic that led to the unexpected disabling of airspeed protection without adequate warning to the flight crew,” it said in a filing to the NTSB. “The low airspeed alerting system did not provide adequate time for recovery; and air traffic control instructions and procedures led to an excessive pilot workload during the final approach.”

Ref: Google’s Plan to Eliminate Human Driving in 5 Years – Wired

The Cybersyn Revolution

The state plays an important role in shaping the relationship between labor and technology, and can push for the design of systems that benefit ordinary people. It can also have the opposite effect. Indeed, the history of computing in the US context has been tightly linked to government command, control, and automation efforts.

But it does not have to be this way. Consider how the Allende government approached the technology-labor question in the design of Project Cybersyn. Allende made raising employment central both to his economic plan and his overall strategy to help Chileans. His government pushed for new forms of worker participation on the shop floor and the integration of worker knowledge in economic decision-making.

This political environment allowed Beer, the British cybernetician assisting Chile, to view computer technology as a way to empower workers. In 1972, he published a report for the Chilean government that proposed giving Chilean workers, not managers or government technocrats, control of Project Cybersyn. More radically, Beer envisioned a way for Chile’s workers to participate in Cybersyn’s design.

He recommended that the government allow workers — not engineers — to build the models of the state-controlled factories because they were best qualified to understand operations on the shop floor. Workers would thus help design the system that they would then run and use. Allowing workers to use both their heads and their hands would limit how alienated they felt from their labor.

[…]

But Beer showed an ability to envision how computerization in a factory setting might work toward an end other than speed-ups and deskilling — the results of capitalist development that labor scholars such as Harry Braverman witnessed in the United States, where the government did not have the same commitment to actively limiting unemployment or encouraging worker participation.

[…]

We need to be thinking in terms of systems rather than technological quick fixes. Discussions about smart cities, for example, regularly focus on better network infrastructures and the use of information and communication technologies such as integrated sensors, mobile phone apps, and online services. Often, the underlying assumption is that such interventions will automatically improve the quality of urban life by making it easier for residents to access government services and provide city government with data to improve city maintenance.

But this technological determinism doesn’t offer a holistic understanding of how such technologies might negatively impact critical aspects of city life. For example, the sociologist Robert Hollands argues that tech-centered smart-city initiatives might create an influx of technologically literate workers and exacerbate the displacement of other workers. They also might divert city resources to the building of computer infrastructures and away from other important areas of city life.

[…]

We must resist the kind of apolitical “innovation determinism” that sees the creation of the next app, online service, or networked device as the best way to move society forward. Instead, we should push ourselves to think creatively of ways to change the structure of our organizations, political processes, and societies for the better and about how new technologies might contribute to such efforts.


Ref: The Cybersyn Revolution – Jacobin

Americans Want Self-Driving Cars for the Cheaper Insurance

Of the 1,500 US drivers the Boston Consulting Group surveyed in September, 55 percent said they “likely” or “very likely” would buy a semi-autonomous car (one capable of handling some, but not all, highway and urban traffic). What’s more, 44 percent said they would, in 10 years, buy a fully autonomous vehicle.

What’s most surprising about the survey isn’t that so many people are interested in this technology, but why they’re interested.

The leading reason people are considering semi-autonomous vehicles isn’t greater safety, improved fuel efficiency, or increased productivity—the upsides most frequently associated with the technology. Such things were a factor, but the biggest appeal is lower insurance costs. Safety was the leading reason people were interested in a fully autonomous ride, with cheaper insurance costs in second place.

[…]

That’s why “a vast number of insurance companies” are exploring discounts for those semiautonomous features, Mosquet says. For example, drivers who purchase a new Volvo with the pedestrian protection tech qualify for a lower premium. “The cost to [the insurer] of pedestrian accidents is actually significant, and they’re going to do everything they can to reduce this type of incident.” That’s already started in Europe and is spreading to the US.

Ref: Americans Want Self-Driving Cars for the Cheaper Insurance – Wired

What Crazy Dash Cam Videos Teach Us About Self-Driving Cars

The first self-driving cars are expected to hit showrooms within five years. Their autonomous capabilities will be largely limited to highways, where there aren’t things like pedestrians and cyclists to deal with, and you won’t fully cede control. As long as the road is clear, the car’s in charge. But when all that computing power senses trouble, like construction or rough weather, it will have you take the wheel.

The problem is, that switch will not—because it cannot—happen immediately.

The primary benefit of autonomous technology is to increase safety and decrease congestion. A secondary upside to letting the car do the driving is that you can focus on crafting pithy tweets, texting, or anything else you’d rather be doing. And while any rules the feds concoct will likely prohibit catching Zs behind the wheel, there’s no question someone will try it.

Audi’s testing has shown it takes an average of 3 to 7 seconds—and as long as 10—for a driver to snap to attention and take control, even when prompted by flashing lights and verbal warnings. This means engineers must ensure an autonomous Audi can handle any situation for at least that long. This is not insignificant, because a lot can happen in 10 seconds, especially when a vehicle is moving more than 100 feet per second.
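
A quick back-of-the-envelope check of that figure: 70 mph works out to roughly 103 feet per second, so the handover windows Audi measured correspond to hundreds of feet of travel. The speed chosen here is an assumption for illustration; only the reaction times come from the article.

    # Back-of-the-envelope: distance covered during driver handover.
    # 70 mph converted to feet per second, times Audi's measured times.
    mph = 70
    fps = mph * 5280 / 3600          # ~102.7 feet per second
    for seconds in (3, 7, 10):
        print(f"{seconds:2d} s handover at {mph} mph -> {fps * seconds:6.0f} ft")
    # 10 seconds at 70 mph is more than 1,000 feet of road.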

[…]

The point is, the world’s highways are a crazy, unpredictable place where anything can happen. And they don’t even have the pedestrians and cyclists and buses and taxis and delivery vans and countless other things that make autonomous driving in an urban setting so tricky. So how do you prepare for every situation imaginable?

Ref: What Crazy Dash Cam Videos Teach Us About Self-Driving Cars – Wired