
Death by Robot

Ronald Arkin, a roboticist at Georgia Tech, has received grants from the military to study how to equip robots with a set of moral rules. “My main goal is to reduce the number of noncombatant casualties in warfare,” he says. His lab developed what he calls an “ethical adapter” that helps the robot emulate guilt. It’s set in motion when the program detects a difference between how much destruction is expected when using a particular weapon and how much actually occurs. If the difference is too great, the robot’s guilt level reaches a certain threshold, and it stops using the weapon. Arkin says robots sometimes won’t be able to parse more complicated situations in which the right answer isn’t a simple shoot/don’t shoot decision. But on balance, he says, they will make fewer mistakes than humans, whose battlefield behavior is often clouded by panic, confusion or fear.
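A minimal sketch of that mechanism as described, with invented names, scales, and a made-up threshold (this is not Arkin's actual implementation): guilt accumulates whenever observed destruction exceeds the pre-strike estimate, and the weapon is disabled once the accumulated guilt crosses the threshold.

    # Hypothetical sketch of an "ethical adapter" in the spirit of the description above.
    # All names, units, and the threshold value are illustrative assumptions.
    class EthicalAdapter:
        def __init__(self, guilt_threshold=1.0):
            self.guilt = 0.0
            self.guilt_threshold = guilt_threshold
            self.weapon_enabled = True

        def record_engagement(self, expected_damage, observed_damage):
            # Guilt grows only when actual destruction exceeds what was predicted.
            overshoot = max(0.0, observed_damage - expected_damage)
            self.guilt += overshoot
            if self.guilt >= self.guilt_threshold:
                self.weapon_enabled = False  # the robot stops using the weapon

    adapter = EthicalAdapter(guilt_threshold=0.5)
    adapter.record_engagement(expected_damage=0.2, observed_damage=0.9)
    print(adapter.weapon_enabled)  # False: the 0.7 overshoot crossed the 0.5 threshold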

A robot’s lack of emotion is precisely what makes many people uncomfortable with the idea of trying to give it human characteristics. Death by robot is an undignified death, Peter Asaro, an affiliate scholar at the Center for Internet and Society at Stanford Law School, said in a speech in May at a United Nations conference on conventional weapons in Geneva. A machine “is not capable of considering the value of those human lives” that it is about to end, he told the group. “And if they’re not capable of that and we allow them to kill people under the law, then we all lose dignity, in the way that if we permit slavery, it’s not just the suffering of those who are slaves but all of humanity that suffers the indignity that there are any slaves at all.” The U.N. will take up questions about the uses of autonomous weapons again in April.

 

Ref: Death by Robot – NY Times

Self-driving cars: safer, but what of their morals

It’s relatively easy to write computer code that directs the car how to respond to a sudden dilemma. The hard part is deciding what that response should be.

“The problem is, who’s determining what we want?” asks Jeffrey Miller, a University of Southern California professor who develops driverless vehicle software. “You’re not going to have 100 percent buy-in that says, ‘Hit the guy on the right.’”
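Miller's point is easy to make concrete. In the sketch below (the response names and policy labels are invented for illustration), dispatching on a chosen rule is the trivial part; the unresolved question is which rule anyone would agree to pass in.

    # Illustrative only: encoding a response is the easy part; agreeing on the policy is not.
    from enum import Enum

    class DilemmaResponse(Enum):
        BRAKE_STRAIGHT = "brake in lane"
        SWERVE_LEFT = "swerve left"
        SWERVE_RIGHT = "swerve right"

    # Hypothetical mapping from a policy label to an action; there is no "100 percent buy-in"
    # on what belongs on the right-hand side of this table.
    POLICY_TABLE = {
        "minimize_total_harm": DilemmaResponse.BRAKE_STRAIGHT,
        "protect_occupants_first": DilemmaResponse.SWERVE_RIGHT,
    }

    def respond_to_dilemma(policy):
        return POLICY_TABLE[policy]

    print(respond_to_dilemma("minimize_total_harm"))  # DilemmaResponse.BRAKE_STRAIGHT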

Companies that are testing driverless cars are not focusing on these moral questions.

The company most aggressively developing self-driving cars isn’t a carmaker at all. Google has invested heavily in the technology, driving hundreds of thousands of miles on roads and highways in tricked-out Priuses and Lexus SUVs. Leaders at the Silicon Valley giant have said they want to get the technology to the public by 2017.

For now, Google is focused on mastering the most common driving scenarios, programming the cars to drive defensively in hopes of avoiding the rare instances when an accident is truly unavoidable.

“People are philosophizing about it, but the question about real-world capability and real-world events that can affect us, we really haven’t studied that issue,” said Ron Medford, the director of safety for Google’s self-driving car project.

[…]

Technological advances will only add to the complexity, especially when in-car sensors become so acute that they can, for example, differentiate between a motorcyclist wearing a helmet and a companion riding without one. If a collision is inevitable, should the car hit the person with a helmet because the injury risk might be less? But that would penalize the person who took extra precautions.

Lin said he has discussed the ethics of driverless cars with Google as well as automakers including Tesla, Nissan and BMW. As far as he knows, only BMW has formed an internal group to study the issue.

Uwe Higgen, head of BMW’s group technology office in Silicon Valley, said the automaker has brought together specialists in technology, ethics, social impact, and the law to discuss a range of issues related to cars that do ever-more driving instead of people.

“This is a constant process going forward,” Higgen said.

 

Ref: Self-driving cars: safer, but what of their morals – HuffingtonPost

How the Pentagon’s Skynet Would Automate War

Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years. By then, it believes, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.

Last week, US defense secretary Chuck Hagel announced the ‘Defense Innovation Initiative’—a sweeping plan to identify and develop cutting edge technology breakthroughs “over the next three to five years and beyond” to maintain global US “military-technological superiority.” Areas to be covered by the DoD programme include robotics, autonomous systems, miniaturization, Big Data and advanced manufacturing, including 3D printing.

[…]

A key area emphasized by the Wells and Kadtke study is improving the US intelligence community’s ability to automatically analyze vast data sets without the need for human involvement.

Pointing out that “sensitive personal information” can now be easily mined from online sources and social media, they call for policies on “Personally Identifiable Information (PII) to determine the Department’s ability to make use of information from social media in domestic contingencies”—in other words, to determine under what conditions the Pentagon can use private information on American citizens obtained via data-mining of Facebook, Twitter, LinkedIn, Flickr and so on.

Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through “monitoring of individuals and populations using sensors, wearable devices, and IoT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.” The Pentagon can build capacity for this “in partnership with large private sector providers, where the most innovative solutions are currently developing.”

[…]

Within this context of Big Data and cloud robotics, Kadtke and Wells enthuse that as unmanned robotic systems become more intelligent, the cheap manufacture of “armies of Kill Bots that can autonomously wage war” will soon be a reality. Robots could also become embedded in civilian life to perform “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

[…]

Perhaps the most disturbing dimension among the NDU study’s insights is the prospect that within the next decade, artificial intelligence (AI) research could spawn “strong AI”—or at least a form of “weak AI” that approximates some features of the former.

Strong AI should be able to simulate a wide range of human cognition and include traits like consciousness, sentience, sapience, or self-awareness. Many now believe, Kadtke and Wells observe, that “strong AI may be achieved sometime in the 2020s.”

[…]

Nearly half the people on the US government’s terrorism watch list of “known or suspected terrorists” have “no recognized terrorist group affiliation,” and more than half the victims of CIA drone strikes over a single year were “assessed” as “Afghan, Pakistani and unknown extremists”—among others who were merely “suspected, associated with, or who probably” belonged to unidentified militant groups. Multiple studies show that a substantial number of drone-strike victims are civilians—and a secret Obama administration memo released this summer under Freedom of Information reveals that the drone programme authorizes the killing of civilians as inevitable collateral damage.

Indeed, flawed assumptions in the Pentagon’s classification systems for threat assessment mean that even “nonviolent political activists” might be conflated with potential ‘extremists’, who “support political violence” and thus pose a threat to US interests.

 

Ref: How the Pentagon’s Skynet Would Automate War – Motherboard

Stanford to host 100-year study on AI

Stanford University has invited leading thinkers from several institutions to begin a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.

[…]

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

[…]

“I’m very optimistic about the future and see great value ahead for humanity with advances in systems that can perceive, learn and reason,” said Horvitz, a distinguished scientist and managing director at Microsoft Research, who initiated AI100 as a private philanthropic initiative. “However, it is difficult to anticipate all of the opportunities and issues, so we need to create an enduring process.”

 

Ref: Stanford to host 100-year study on artificial intelligence – Stanford News

Algorithms Are Great and All, But They Can Also Ruin Lives

On April 5, 2011, 41-year-old John Gass received a letter from the Massachusetts Registry of Motor Vehicles. The letter informed Gass that his driver’s license had been revoked and that he should stop driving, effective immediately. The only problem was that, as a conscientious driver who had not received so much as a traffic violation in years, Gass had no idea why it had been sent.

After several frantic phone calls, followed up by a hearing with Registry officials, he learned the reason: his image had been automatically flagged by a facial-recognition algorithm designed to scan through a database of millions of state driver’s licenses looking for potential criminal false identities. The algorithm had determined that Gass looked sufficiently like another Massachusetts driver that foul play was likely involved—and the automated letter from the Registry of Motor Vehicles was the end result.
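As described, the pipeline is a similarity search over the license database followed by an automatic action whenever a match score clears a threshold, with no human review before the letter goes out. A rough sketch under those assumptions (the feature representation, similarity measure, threshold, and field names are invented, not the RMV's actual system):

    import math

    # Hypothetical sketch of the flagging pipeline described above; the feature-vector
    # representation, cosine similarity, and 0.95 threshold are illustrative assumptions.
    def face_similarity(features_a, features_b):
        # Stand-in for a facial-recognition model: cosine similarity of photo features.
        dot = sum(a * b for a, b in zip(features_a, features_b))
        norm_a = math.sqrt(sum(a * a for a in features_a))
        norm_b = math.sqrt(sum(b * b for b in features_b))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def flag_possible_fraud(new_features, license_database, threshold=0.95):
        # Flag every existing license whose photo looks "too similar" to the new one.
        return [record["license_id"]
                for record in license_database
                if face_similarity(new_features, record["features"]) >= threshold]

    # The failure mode Gass hit: two genuinely different people can still score above
    # the threshold, and the revocation letter goes out with no human check first.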

The RMV itself was unsympathetic, claiming that it was the accused individual’s “burden” to clear his or her name in the event of any mistakes, and arguing that the pros of protecting the public far outweighed the inconvenience to the wrongly targeted few.

John Gass is hardly alone in being a victim of algorithms gone awry. In 2007, a glitch in the California Department of Health Services’ new automated computer system terminated the benefits of thousands of low-income seniors and people with disabilities. Without their premiums paid, Medicare canceled those citizens’ health care coverage.

[…]

Equally alarming is the possibility that an algorithm may falsely profile an individual as a terrorist: a fate that befalls roughly 1,500 unlucky airline travelers each week. Those fingered in the past as the result of data-matching errors include former Army majors, a four-year-old boy, and an American Airlines pilot—who was detained 80 times over the course of a single year.

[…]

“We are all so scared of human bias and inconsistency,” says Danielle Citron, professor of law at the University of Maryland. “At the same time, we are overconfident about what it is that computers can do.”

The mistake, Citron suggests, is that we “trust algorithms, because we think of them as objective, whereas the reality is that humans craft those algorithms and can embed in them all sorts of biases and perspectives.” To put it another way, a computer algorithm might be unbiased in its execution, but, as noted, this does not mean that there is not bias encoded within it.
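A toy illustration of Citron's point, with invented feature names, weights, and zip codes: the scoring rule below runs identically every time, so its execution is "unbiased," yet the decision to penalize certain zip codes embeds a human judgment, and potentially a historical bias, directly in the output.

    # Toy example (all names, weights, and zip codes invented): deterministic execution,
    # but the zip-code penalty is a human choice that can encode bias against whole areas.
    HIGH_RISK_ZIP_CODES = {"02121", "02126"}  # hypothetical list inherited from past data

    def loan_risk_score(income, zip_code):
        score = 100 - min(income / 1000, 50)  # lower income -> higher score
        if zip_code in HIGH_RISK_ZIP_CODES:
            score += 30                       # the encoded bias lives here
        return score

    print(loan_risk_score(income=40000, zip_code="02121"))  # 90.0
    print(loan_risk_score(income=40000, zip_code="02139"))  # 60.0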

 

Ref: Algorithms Are Great and All, But They Can Also Ruin Lives – Wired

When Will We Let Go and Let Google Drive Us?

According to Templeton, regulators and policymakers are proving more open to the idea than expected—a number of US states have okayed early driverless cars for public experimentation, along with Singapore, India, Israel, and Japan—but earning the general public’s trust may be a more difficult battle to win.

No matter how many fewer accidents occur due to driverless cars, there may well be a threshold past which we still irrationally choose human drivers over them. That is, we may hold robots to a much higher standard than humans.

This higher standard comes at a price. “People don’t want to be killed by robots,” Templeton said. “They want to be killed by drunks.”

It’s an interesting point—assuming the accident rate is nonzero (and it will be), how many accidents are we willing to tolerate in driverless cars, and is that number significantly lower than the number we’re willing to tolerate with human drivers?

Let’s say robot cars are shown to reduce accidents by 20%. They could potentially prevent some 240,000 accidents (using Templeton’s global number). That’s a big deal. And yet if (fully) employed, they would still cause nearly a million accidents a year. Who would trust them? And at what point does that trust kick in? How close to zero accidents does it have to get?
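The arithmetic behind those figures, assuming Templeton's global number is roughly 1.2 million accidents a year (the baseline implied by the 240,000 quoted above):

    # Back-of-the-envelope check; the 1.2 million baseline is an assumption implied
    # by the 240,000 figure quoted above.
    baseline_accidents = 1_200_000                # assumed annual global figure
    reduction = 0.20                              # robot cars 20% safer
    prevented = baseline_accidents * reduction    # 240,000 accidents prevented
    remaining = baseline_accidents - prevented    # 960,000 still occur
    print(int(prevented), int(remaining))         # 240000 960000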

And it may turn out that the root of the problem lies not with the technology but us.

Ref: Summit Europe: When Will We Let Go and Let Google Drive Us? – SingularityHub

Killer Robots

One of them is the Skunk, designed for crowd control. It can douse demonstrators with teargas.

“There could be a dignity issue here; being herded by drones would be like herding cattle,” he said.

But at least drones have a human at the controls.

“[Otherwise] you are giving [the power of life and death] to a machine,” said Heyns. “Normally, there is a human being to hold accountable.

“If it’s a robot, you can put it in jail until its batteries run flat but that’s not much of a punishment.”

Heyns said the advent of the new generation of weapons made it necessary for laws to be introduced that would prohibit the use of systems that could be operated without a significant level of human control.

“Technology is a tool and it should remain a tool, but it is a dangerous tool and should be held under scrutiny. We need to try to define the elements of needful human control,” he said.

Several organisations have voiced concerns about autonomous weapons. The Campaign to Stop Killer Robots wants a ban on fully autonomous weapons.

 

Ref: Stop Killer Robots While We Can – Times Live