Death by Robot

Ronald Arkin, a roboticist at Georgia Tech, has received grants from the military to study how to equip robots with a set of moral rules. “My main goal is to reduce the number of noncombatant casualties in warfare,” he says. His lab developed what he calls an “ethical adapter” that helps the robot emulate guilt. It’s set in motion when the program detects a difference between how much destruction is expected when using a particular weapon and how much actually occurs. If the difference is too great, the robot’s guilt level reaches a certain threshold, and it stops using the weapon. Arkin says robots sometimes won’t be able to parse more complicated situations in which the right answer isn’t a simple shoot/don’t shoot decision. But on balance, he says, they will make fewer mistakes than humans, whose battlefield behavior is often clouded by panic, confusion or fear.
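
The mechanism Arkin describes can be sketched in a few lines of code. The following is purely illustrative and is not his lab’s implementation; the class name, the guilt increments, and the threshold value are assumptions made for the sake of the example.

```python
class EthicalAdapter:
    """Toy sketch of a guilt-based weapon inhibitor (illustrative only, not Arkin's code)."""

    def __init__(self, guilt_threshold=1.0):
        self.guilt = 0.0                        # accumulated "guilt"
        self.guilt_threshold = guilt_threshold  # assumed cutoff, purely illustrative

    def record_engagement(self, expected_damage, observed_damage):
        # Guilt grows when observed destruction exceeds what was expected beforehand.
        self.guilt += max(0.0, observed_damage - expected_damage)

    def weapon_permitted(self):
        # Once guilt reaches the threshold, the adapter refuses further use of the weapon.
        return self.guilt < self.guilt_threshold


adapter = EthicalAdapter()
adapter.record_engagement(expected_damage=0.2, observed_damage=0.9)
adapter.record_engagement(expected_damage=0.1, observed_damage=0.8)
print(adapter.weapon_permitted())  # False: accumulated guilt (1.4) has crossed the threshold
```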

A robot’s lack of emotion is precisely what makes many people uncomfortable with the idea of trying to give it human characteristics. Death by robot is an undignified death, Peter Asaro, an affiliate scholar at the Center for Internet and Society at Stanford Law School, said in a speech in May at a United Nations conference on conventional weapons in Geneva. A machine “is not capable of considering the value of those human lives” that it is about to end, he told the group. “And if they’re not capable of that and we allow them to kill people under the law, then we all lose dignity, in the way that if we permit slavery, it’s not just the suffering of those who are slaves but all of humanity that suffers the indignity that there are any slaves at all.” The U.N. will take up questions about the uses of autonomous weapons again in April.

 

Ref: Death by Robot – NY Times

Self-driving cars: safer, but what of their morals

It’s relatively easy to write computer code that directs the car how to respond to a sudden dilemma. The hard part is deciding what that response should be.

“The problem is, who’s determining what we want?” asks Jeffrey Miller, a University of Southern California professor who develops driverless vehicle software. “You’re not going to have 100 percent buy-in that says, ‘Hit the guy on the right.’”

Companies that are testing driverless cars are not focusing on these moral questions.

The company most aggressively developing self-driving cars isn’t a carmaker at all. Google has invested heavily in the technology, driving hundreds of thousands of miles on roads and highways in tricked-out Priuses and Lexus SUVs. Leaders at the Silicon Valley giant have said they want to get the technology to the public by 2017.

For now, Google is focused on mastering the most common driving scenarios, programming the cars to drive defensively in hopes of avoiding the rare instances when an accident is truly unavoidable.

“People are philosophizing about it, but the question about real-world capability and real-world events that can affect us, we really haven’t studied that issue,” said Ron Medford, the director of safety for Google’s self-driving car project.

[…]

Technological advances will only add to the complexity, especially when in-car sensors become so acute they can, for example, differentiate between a motorcyclist wearing a helmet and a companion riding without one. If a collision is inevitable, should the car hit the person with a helmet because the injury risk might be less? But that would penalize the person who took extra precautions.
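
To make the dilemma concrete, here is a toy decision rule of the kind the paragraph describes, assuming the sensors can already label each rider. The injury-risk figures and the function are invented for illustration and are not drawn from any real vehicle software; the point is that a naive injury-minimizing rule steers toward the helmeted rider.

```python
# Invented numbers: assumed probability of serious injury if struck.
INJURY_RISK = {
    "motorcyclist_with_helmet": 0.3,
    "motorcyclist_no_helmet": 0.9,
}

def choose_unavoidable_collision_target(options):
    # Naive rule: pick whichever option minimizes expected injury severity.
    return min(options, key=lambda person: INJURY_RISK[person])

target = choose_unavoidable_collision_target(
    ["motorcyclist_with_helmet", "motorcyclist_no_helmet"]
)
print(target)  # "motorcyclist_with_helmet": the rider who took the extra precaution
```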

Lin said he has discussed the ethics of driverless cars with Google as well as automakers including Tesla, Nissan and BMW. As far as he knows, only BMW has formed an internal group to study the issue.

Uwe Higgen, head of BMW’s group technology office in Silicon Valley, said the automaker has brought together specialists in technology, ethics, social impact, and the law to discuss a range of issues related to cars that do ever-more driving instead of people.

“This is a constant process going forward,” Higgen said.

 

Ref: Self-driving cars: safer, but what of their morals – HuffingtonPost

2014: A year of progress (Stop Killer Robots)

Spurred on by the campaign’s non-governmental organizations (NGOs) as well as by think tanks and academics, 2014 saw notable diplomatic progress and increased awareness in capitals around the world of the challenges posed by autonomous warfare, but there were few signals that national policy is any closer to being developed. Only two nations have stated policy on autonomous weapons systems: a 2012 US Department of Defense directive permits the development and use of fully autonomous systems that deliver only non-lethal force, while the UK Ministry of Defence has stated that it has “no plans to replace skilled military personnel with fully autonomous systems.”

Five nations—Cuba, Ecuador, Egypt, Pakistan, and the Holy See—have expressed support for the objective of a preemptive ban on fully autonomous weapons, but have yet to execute that commitment in law or policy. A number of nations have indicated support for the principle of human control over the selection of targets and use of force, indicating they see a need to draw the line at some point.

[…]

The year opened with a resolution by the European Parliament on 27 February on the use of armed drones that included a call to “ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.” Sponsored by the Greens/European Free Alliance group of Members of the European Parliament with cross-party support, the resolution was adopted by a vote of 534–49.

The first informal CCW meeting of experts held at the United Nations (UN) in Geneva on 13-16 May attracted “record attendance” with the participation of 86 states, UN agencies, the ICRC, and the Campaign to Stop Killer Robots. The campaign’s delegation contributed actively throughout the meeting, making statements in plenary, issuing briefing papers and reports, hosting four consecutive side events, and briefing media throughout. The chair and vice-chair of the International Committee for Robot Arms Control (ICRAC) gave expert presentations at the meeting, which ICRAC had urged be convened since 2009.

The 2014 experts meeting reviewed technical, legal, ethical, and operational questions relating to the emerging technology of lethal autonomous weapons systems, but did not take any decisions. Ambassador Jean-Hugues Simon-Michel of France provided a report of the meeting in his capacity as chair that summarized the main areas of interest and recommended further talks in 2015.

[…]

The report notes how experts and delegations described the potential for autonomous weapons systems to be “game changers” in military affairs, but observed there appeared to be little military interest in deploying fully autonomous weapons systems because of the need to retain human control and concerns over operational risks including vulnerability to cyber attacks, lack of predictability, difficulties of adapting to a complex environment, and challenges of interoperability. Delegates also considered proliferation and the potential impact of autonomous weapons on international peace and security.

Delegates considered the impact of the development of autonomous weapons systems on human dignity, highlighting the devolution of life and death decisions to a machine as a key ethical concern. Some asked if a machine could acquire capacities of moral reasoning and human judgment, which is the basis for respect of international humanitarian law principles, and challenged the capacity of a machine to respond to a moral dilemma.

There was acknowledgment that international humanitarian and human rights law applies to all new weapons but views were divided as to whether the weapons would be illegal under existing law or permitted in certain circumstances. The imperative of maintaining meaningful human control over targeting and attack decisions emerged as the primary point of common ground at the meeting.

[…]

Campaign representatives participated in discussions on autonomous weapons in 2014 convened by the Geneva Academy of International Humanitarian Law and Human Rights, which issued a briefing paper in November on legal dimensions of the issue, as well as at the Washington DC-based Center for a New American Security, which began a project on “ethical autonomy” in 2014. Campaigners spoke at numerous academic events this year, including at Oxford University, University of California-Santa Barbara, and University of Pennsylvania Law School. They also presented at events convened by think tanks, often in cooperation with government, such as the EU Non-Proliferation Consortium in Brussels and the UN-South Korea non-proliferation forum on Jeju Island. The campaign features in a Stockholm International Peace Research Institute (SIPRI) chapter on the “governance of autonomous weapons,” included for the first time in the 2014 edition of the SIPRI Yearbook.

Ref: 2014: A year of progress – Stop Killer Robots

How the Pentagon’s Skynet Would Automate War

Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years. By then, they believe, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.

Last week, US defense secretary Chuck Hagel announced the ‘Defense Innovation Initiative’—a sweeping plan to identify and develop cutting edge technology breakthroughs “over the next three to five years and beyond” to maintain global US “military-technological superiority.” Areas to be covered by the DoD programme include robotics, autonomous systems, miniaturization, Big Data and advanced manufacturing, including 3D printing.

[…]

A key area emphasized by the Wells and Kadtke study is improving the US intelligence community’s ability to automatically analyze vast data sets without the need for human involvement.

Pointing out that “sensitive personal information” can now be easily mined from online sources and social media, they call for policies on “Personally Identifiable Information (PII) to determine the Department’s ability to make use of information from social media in domestic contingencies”—in other words, to determine under what conditions the Pentagon can use private information on American citizens obtained via data-mining of Facebook, Twitter, LinkedIn, Flickr and so on.

Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through “monitoring of individuals and populations using sensors, wearable devices, and IoT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.” The Pentagon can build capacity for this “in partnership with large private sector providers, where the most innovative solutions are currently developing.”

[…]

Within this context of Big Data and cloud robotics, Kadtke and Wells enthuse that as unmanned robotic systems become more intelligent, the cheap manufacture of “armies of Kill Bots that can autonomously wage war” will soon be a reality. Robots could also become embedded in civilian life to perform “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

[…]

Perhaps the most disturbing dimension among the NDU study’s insights is the prospect that within the next decade, artificial intelligence (AI) research could spawn “strong AI”—or at least a form of “weak AI” that approximates some features of the former.

Strong AI should be able to simulate a wide range of human cognition and include traits like consciousness, sentience, sapience, or self-awareness. Many now believe, Kadtke and Wells observe, that “strong AI may be achieved sometime in the 2020s.”

[…]

Nearly half the people on the US government’s terrorism watch list of “known or suspected terrorists” have “no recognized terrorist group affiliation,” and more than half the victims of CIA drone-strikes over a single year were “assessed” as “Afghan, Pakistani and unknown extremists”—among others who were merely “suspected, associated with, or who probably” belonged to unidentified militant groups. Multiple studies show that a substantial number of drone strike victims are civilians—and a secret Obama administration memo released this summer under Freedom of Information reveals that the drone programme authorizes the killing of civilians as inevitable collateral damage.

Indeed, flawed assumptions in the Pentagon’s classification systems for threat assessment mean that even “nonviolent political activists” might be conflated with potential ‘extremists’, who “support political violence” and thus pose a threat to US interests.

 

Ref: How the Pentagon’s Skynet Would Automate War – Motherboard

AI Has Arrived, and That Really Worries the World’s Brightest Minds

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.
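
For readers unfamiliar with the term, the following minimal sketch (assuming NumPy is available) shows what a deep neural network reduces to: stacked layers of weighted sums passed through nonlinearities. The weights here are random rather than learned from data, so it recognizes nothing; it is not drawn from any of the systems mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers.
    return np.maximum(0.0, x)

def layer(inputs, in_dim, out_dim):
    # One fully connected layer with random (untrained) weights.
    weights = rng.normal(size=(in_dim, out_dim))
    return relu(inputs @ weights)

x = rng.normal(size=(1, 64))             # stand-in for a small feature vector (e.g. audio)
h1 = layer(x, 64, 32)                    # first hidden layer
h2 = layer(h1, 32, 16)                   # second hidden layer; "deep" means several such layers
scores = h2 @ rng.normal(size=(16, 10))  # raw scores over 10 hypothetical classes
print(scores.shape)                      # (1, 10)
```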

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

[…]

Deciding the dos and don’ts of scientific research is the kind of baseline ethical work that molecular biologists did during the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent manmade genetically modified organisms from posing a threat to the public. The Asilomar conference had a much more concrete result than the Puerto Rico AI confab.

 

Ref: AI Has Arrived, and That Really Worries the World’s Brightest Minds – Wired