Scientists Call for a Ban

The International Committee for Robot Arms Control (ICRAC), a founder of the Campaign to Stop Killer Robots, has issued a statement endorsed by more than 270 engineers, computing and artificial intelligence experts, roboticists, and professionals from related disciplines that calls for a ban on fully autonomous weapons. In the statement, 272 experts from 37 countries say: “Given the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment. Decisions about the application of violent force must not be delegated to machines.”

The signatories question whether robot weapons could meet the legal requirements for the use of force, “given the absence of clear scientific evidence that robot weapons have, or are likely to have in the foreseeable future, the functionality required for accurate target identification, situational awareness or decisions regarding the proportional use of force.” They also ask how devices controlled by complex algorithms will interact, warning: “Such interactions could create unstable and unpredictable behavior, behavior that could initiate or escalate conflicts, or cause unjustifiable harm to civilian populations.”


Ref: Scientists call for a ban – Campaign to stop killer robots

Prison Parole Boards are Turning from Intuition to Computer Assessments

Prison parole boards are turning from intuition to computer assessments as states look to cut costs at correctional facilities, reports The Wall Street Journal. At least 15 states now require some type of risk-assessment tool to help parole boards judge whether an inmate should be released. Many take the form of software that may weigh 50 or 100 different factors about a person before estimating how likely he or she is to return to prison during a parole period.

These automated systems base their decisions on details such as an inmate’s age at first arrest, whether they believe their conviction to be fair, and their level of education. But the systems aren’t perfect. One tool, called Compas, should have its decisions overruled between 8 and 15 percent of the time, its developer tells the Journal.
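To make the mechanism concrete, here is a minimal sketch of how such a tool might combine weighted factors into a recidivism estimate. The factor names, weights, and the logistic link are all illustrative assumptions of mine, not Compas’s actual model.

```python
import math

def risk_score(inmate, weights):
    """Weighted sum of factor values, squashed to a 0-1 probability."""
    total = sum(weights[f] * inmate.get(f, 0.0) for f in weights)
    return 1.0 / (1.0 + math.exp(-total))  # logistic link

# Hypothetical weights: older first arrest and more education lower
# the score; prior convictions raise it.
weights = {
    "age_at_first_arrest": -0.05,
    "prior_convictions": 0.4,
    "education_years": -0.1,
}

inmate = {"age_at_first_arrest": 17, "prior_convictions": 2, "education_years": 10}
score = risk_score(inmate, weights)
print(f"estimated recidivism risk: {score:.2f}")
```

A real tool would fit its weights to historical outcome data rather than hand-pick them, which is exactly where the imperfections mentioned above can creep in.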

Though the Journal says these programs have become more common in the past several years, some states, like Texas, have used them for far longer. The tools aim to better select who should be released on parole while helping prisons rein in ballooning operating costs.

The software can do that by outperforming traditional parole boards: boards reportedly tend to weigh factors like whether a prisoner shows remorse and the severity of the crime, but those details don’t always predict how well someone will do on parole. Murderers and sex offenders are often far less likely to reoffend than someone guilty of a lesser crime, something a gut assessment at a parole hearing is unlikely to capture.


Ref: Prisons turn to computer algorithms for deciding who to parole – The Verge

’80s IBM Watson


IBM’s Watson supercomputer may be boning up on its medical bona fides, but the concept of Dr. Watson is nothing new. We’ve been waiting on our super-smart computer doctors of tomorrow for over 30 years.

The 1982 book World of Tomorrow: Health and Medicine by Neil Ardley showed kids of the 1980s what the doctor’s office of the future was going to look like. The room is filled with automatic diagnosis stations, prescription vending machines, and plenty of control panels sporting colorful buttons. The only thing that’s missing is, well, a doctor.

From the book:

A visit to the doctor in the future is likely to resemble a computer game, for computers will be greatly involved in medical care. Now doctors have to question and examine their patients to find out what is wrong with them. They compare the patients’ answers and the examination results with their own knowledge of medical conditions and illnesses. This enables doctors to decide on the causes of the patients’ problems.

Computers can store huge amounts of medical information. Doctors are therefore likely to use computers to help them find the causes of illnesses. The computer could take over completely, allowing doctors to concentrate on patients who need personal care.

The computer won’t just be a dumb machine that’s fed info. The robo-doctor of tomorrow will be able to ask questions of the patient, narrowing down all the possible things that could be wrong.

The computer will question the patient about an illness just as the doctor does now. It will either display words on a screen or speak to the patient, who will reply or operate a keyboard to answer. The questions will continue until the computer has either narrowed down the possible causes of the illness to one or needs more information that the patient cannot give by answering.

The patient will then go to a machine that checks his or her physical condition. It will measure such factors as pulse, temperature and blood pressure and maybe look into the interior of the patient’s body. The results will go to the computer. This may still not provide the computer with enough information about the patient, and it may need to take samples — for example, of blood or hair. It will do this painlessly.
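The “question until the causes narrow to one” loop the book describes is essentially elimination over a set of candidate conditions. Here is a toy sketch of that process; the conditions and symptoms are invented for the example.

```python
# Each condition maps to the set of symptoms it produces.
CONDITIONS = {
    "flu": {"fever", "cough", "aches"},
    "cold": {"cough", "sneezing"},
    "allergy": {"sneezing", "itchy_eyes"},
}

def diagnose(answers):
    """Keep only conditions consistent with every yes/no answer so far."""
    candidates = set(CONDITIONS)
    for symptom, present in answers.items():
        candidates = {
            c for c in candidates
            if (symptom in CONDITIONS[c]) == present
        }
    return candidates

# Patient reports fever but no sneezing: only "flu" survives.
print(diagnose({"fever": True, "sneezing": False}))
```

When more than one candidate survives the questioning, that is the point where the book’s machine would fall back to measurements and samples for the information “the patient cannot give by answering.”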

Google’s Autocomplete – Negative Stereotyped Search


I am not implying that the negative stereotyped search suggestions about women are Google’s intent – I rather suspect a coordinated bunch of MRAs is to blame for the volume of those search terms – but that doesn’t mean Google is completely innocent. The question of accountability goes beyond a binary choice between intentionality and complete innocence.

Unsurprisingly, Google doesn’t take any responsibility. It puts the blame on its own algorithms… as if the algorithms were beyond the company’s control.

Der Spiegel wrote (about another autocomplete affair):

The company maintains that the search engine only shows what exists. It’s not its fault, argues Google, if someone doesn’t like the computed results. […]
Google increasingly influences how we perceive the world. […] Contrary to what the Google spokesman suggests, the displayed search terms are by no means solely based on objective calculations. And even if that were the case, just because the search engine means no harm, it doesn’t mean that it does no harm.

If we, as a society, do not want negative stereotypes (be they sexist, racist, ablist or otherwise discriminatory) to prevail in Google’s autocompletion, where can we locate accountability? With the people who first asked stereotyping questions? With the people who asked next? Or with the people who accepted Google’s suggestion to search for the stereotyping questions instead of searching what they originally intended? What about Google itself?

Of course, algorithms imply automation. And digital literacy helps in understanding the process of automation – as I have said before – but algorithms are more than a technological issue: they involve not only automated data analysis but also decision-making (cf. “Governing Algorithms: A Provocation Piece,” #21. Actually, you should read not only #21 but the whole, very thought-provoking provocation piece!). This makes it impossible to ignore the question of whether algorithms can be accountable.

In a recent Atlantic article, advocating reverse engineering, N. Diakopoulos asserts:

[…] given the growing power that algorithms wield in society it’s vital to continue to develop, codify, and teach more formalized methods of algorithmic accountability.

Which I think would be a great thing because, at the very least, it would raise awareness. (I don’t agree that “algorithmic accountability” can be assigned a priori, though.) But when algorithms are not accountable, who is? The people, organization, or company creating them? Deploying them? Using them? This brings us back to the conclusion that the question of accountability goes beyond a binary choice between intentionality and complete innocence… which makes the whole thing an extremely complex issue.

Who is in charge when algorithms are in charge?


Ref: Google’s autocompletion: algorithms, stereotypes and accountability – Sociostrategy
Ref: Google’s autocomplete spells out our darkest thoughts – The Guardian

Google Shopping Express


But the game goes deeper. As personal digital assistant apps such as Google Now become widespread, so does the idea of algorithms that can not only meet but anticipate our needs. Extend the concept from the purely digital into the realm of retail, and you have what some industry prognosticators are calling “ambient commerce.” In a sensor-rich future where not just phones but all kinds of objects are internet-connected, same-day delivery becomes just one component of a bigger instant gratification engine.

On the same day Google announced that its Shopping Express was available to all Bay Area residents, eBay Enterprise Marketing Solutions head of strategy John Sheldon was telling a roomful of clients that there will soon come a time when customers won’t be ordering stuff from eBay anymore. Instead, they’ll let their phones do it.

Sheldon believes the “internet of things” is creating a data-saturated environment that will soon envelop commerce. In a chat with WIRED, he describes a few hypothetical examples that sound like they’re already within reach. Imagine setting up a rule in Nike+, he says, to have the app order you a new pair of shoes after you run 300 miles. Or picture a bicycle helmet with a sensor that “knows” when a crash has happened, prompting an app to order a new one.

Now consider an even more advanced scenario. A shirt has a sensor that detects moisture. And you find yourself stuck out in the rain without an umbrella. Not too many minutes after the downpour starts, a car pulls up alongside you. A courier steps out and hands you an umbrella — or possibly a rain jacket, depending on what rules you set up ahead of time for such a situation, perhaps using IFTTT.
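Sheldon’s scenarios all reduce to trigger-action rules over sensor events, IFTTT-style. A minimal sketch of that pattern, with entirely hypothetical sensors and actions:

```python
# Minimal trigger-action rule engine. Each rule pairs a condition on
# an incoming sensor event with an action to take when it matches.
rules = []

def rule(condition, action):
    rules.append((condition, action))

def handle(event):
    """Fire every rule whose condition matches the incoming event."""
    return [action(event) for condition, action in rules if condition(event)]

# "After 300 miles, order new running shoes."
rule(lambda e: e.get("sensor") == "run_tracker" and e.get("total_miles", 0) >= 300,
     lambda e: "order: running shoes")

# "Shirt sensor detects rain -> dispatch an umbrella."
rule(lambda e: e.get("sensor") == "shirt" and e.get("wet", False),
     lambda e: "dispatch: umbrella")

print(handle({"sensor": "shirt", "wet": True}))
```

The hard part of “ambient commerce” isn’t this plumbing; it’s the trust question Sheldon raises below, since the rules spend your money without asking.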

“Ambient commerce is about consumers turning over their trust to the machine,” Sheldon says.


Ref: One Day, Google Will Deliver the Stuff You Want Before You Ask – Wired

The Ethics of Autonomous Cars – 2

If a small tree branch pokes out onto a highway and there’s no oncoming traffic, we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop, as it dutifully observes traffic laws that prohibit crossing a double-yellow line. This unexpected move would avoid bumping the object in front, but could then cause a crash with the human drivers behind it.

Should we trust robotic cars to share our road, just because they are programmed to obey the law and avoid crashes?


Programmers will still need to instruct an automated car on how to act across the entire range of foreseeable scenarios, as well as lay down guiding principles for unforeseen ones. So programmers will need to confront this decision, even if we human drivers never have to in the real world. And it matters to the issues of responsibility and ethics whether an act was premeditated (as in the case of programming a robot car) or performed reflexively without any deliberation (as may be the case with human drivers in sudden crashes).

Anyway, there are many examples of car accidents every day that involve difficult choices, and robot cars will encounter at least those. For instance, if an animal darts in front of our moving car, we need to decide: whether it would be prudent to brake; if so, how hard to brake; whether to continue straight or swerve to the left or right; and so on. These decisions are influenced by environmental conditions (e.g., slippery road), obstacles on and off the road (e.g., other cars to the left and trees to the right), size of an obstacle (e.g., hitting a cow diminishes your survivability, compared to hitting a raccoon), second-order effects (e.g., crash with the car behind us, if we brake too hard), lives at risk in and outside the car (e.g., a baby passenger might mean the robot car should give greater weight to protecting its occupants), and so on.
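The weighing described above can be caricatured as scoring each maneuver against cost factors and picking the cheapest. This is only a toy to make the structure visible; every factor and number is invented, and a real system's ethics live in how those weights are chosen.

```python
def choose_maneuver(options):
    """Return the maneuver with the lowest total expected cost."""
    return min(options, key=lambda name: sum(options[name].values()))

# Hypothetical costs for the "animal darts in front" scenario:
# braking hard risks a rear collision, swerving risks oncoming traffic,
# continuing straight harms the animal.
options = {
    "brake_hard": {"rear_collision_risk": 0.6, "animal_harm": 0.0},
    "swerve_left": {"oncoming_traffic_risk": 0.9, "animal_harm": 0.0},
    "continue": {"rear_collision_risk": 0.0, "animal_harm": 0.5},
}

print(choose_maneuver(options))
```

Note how every ethical question in the passage reappears as a weight: how much is a raccoon worth against a rear-end collision risk, and who decides?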


In “robot ethics,” most of the attention so far has been focused on military drones. But cars are maybe the most iconic technology in America—forever changing cultural, economic, and political landscapes. They’ve made new forms of work possible and accelerated the pace of business, but they also waste our time in traffic. They rush countless patients to hospitals and deliver basic supplies to rural areas, but also continue to kill more than 30,000 people a year in the U.S. alone. They bring families closer together, but also farther away at the same time. They’re the reason we have suburbs, shopping malls, and fast-food restaurants, but also new environmental and social problems.


Ref: The Ethics of Autonomous Cars – TheAtlantic


The Ethics of Autonomous Cars



That’s how this puzzle relates to the non-identity problem posed by Oxford philosopher Derek Parfit in 1984. Suppose we face a policy choice of either depleting some natural resource or conserving it. By depleting it, we might raise the quality of life for people who currently exist, but we would decrease the quality of life for future generations; they would no longer have access to the same resource.

Say that the best we could do is make robot cars reduce traffic fatalities by 1,000 lives. That’s still pretty good. But if they did so by saving all 32,000 would-be victims while causing 31,000 entirely new victims, we wouldn’t be so quick to accept this trade — even if there’s a net savings of lives.


With this new set of victims, however, are we violating their right not to be killed? Not necessarily. If we view the right not to be killed as the right not to be an accident victim, well, no one has that right to begin with. We’re surrounded by both good luck and bad luck: accidents happen. (Even deontological (duty-based) or Kantian ethics could see this shift in the victim class as morally permissible, given a non-violation of rights or duties, in addition to the consequentialist reasons based on numbers.)


Ethical dilemmas with robot cars aren’t just theoretical, and many new applied problems could arise: emergencies, abuse, theft, equipment failure, manual overrides, and many more that represent the spectrum of scenarios drivers currently face every day.

One of the most popular examples is the school-bus variant of the classic trolley problem in philosophy: On a narrow road, your robotic car detects an imminent head-on crash with a non-robotic vehicle — a school bus full of kids, or perhaps a carload of teenagers bent on playing “chicken” with you, knowing that your car is programmed to avoid crashes. Your car, naturally, swerves to avoid the crash, sending it into a ditch or a tree and killing you in the process.

At least with the bus, this is probably the right thing to do: sacrificing yourself to save 30 or so schoolchildren. The automated car was stuck in a no-win situation and chose the lesser evil; it couldn’t have plotted a better solution than a human could have.

But consider this: Do we now need a peek under the algorithmic hood before we purchase or ride in a robot car? Should the car’s crash-avoidance feature, and possible exploitations of it, be something explicitly disclosed to owners and their passengers — or even signaled to nearby pedestrians? Shouldn’t informed consent be required to operate or ride in something that may purposely cause our own deaths?

It’s one thing when you, the driver, make a choice to sacrifice yourself. But it’s quite another for a machine to make that decision for you involuntarily.


Ref: The Ethics of Saving Lives With Autonomous Cars Are Far Murkier Than You Think – Wired
Ref: Ethics + Emerging Sciences Group