Google and Elon Musk to Decide What Is Good for Humanity

THE RECENTLY PUBLISHED Future of Life Institute (FLI) letter “Research Priorities for Robust and Beneficial Artificial Intelligence,” signed by hundreds of AI researchers in addition to Elon Musk and Stephen Hawking, many representing government regulators and some sitting on committees with names like “Presidential Panel on Long Term AI future,” offers a program professing to protect mankind from the threat of “super-intelligent AIs.”

[…]

Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem gets much more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):

– Statements like: “There is a broad consensus that AI research is progressing steadily,” even “progressing dramatically” (Google Brain signatories on the FLI web site), are just not true. In the last 50 years there has been very little AI progress (more stasis-like than “steady”) and not a single major AI-based breakthrough commercial product, unless you count the iPhone’s infamous Siri. In short, despite the overwhelming media push, AI simply does not work.

– “AI systems must do what we want them to do” begs the question of who “we” are. There are 92 references included in this letter, all of them from CS, AI, and political scientists; there are many references to an approaching, civilization-threatening “singularity” and several references to possibilities for “mind uploading,” but not a single reference from a biologist or a neural scientist. To call such an approach to the study of intellect “interdisciplinary” is just not credible.

– “Identify research directions that can maximize societal benefits” is outright chilling. Again, who decides whether research is “socially desirable?”

– “AI super-intelligence will not act with human wishes and will threaten the humanity” is just a cover justifying the AI group’s attempted power grab over competing approaches to the study of intellect.

[…]

AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a Von Neumann CPU. As Google Brain AI researcher and FLI letter signatory Ilya Sutskever recently told me, “[The] brain absolutely is just a CPU and further study of brain would be a waste of my time.” This is an almost word-for-word repetition of a famous statement Noam Chomsky made decades ago, “predicting” the existence of a language “generator” in the brain.

FLI letter signatories say: Do not worry, “we” will allow “good” AI and “identify research directions” in order to maximize societal benefits and eradicate diseases and poverty. I believe that it would be precisely the newly emerging neural science groups that would suffer if AI is allowed to regulate research direction in this field. Why should “evidence” like this allow AI scientists to control what biologists and neural scientists can and cannot do?

Ref: Google and Elon Musk to Decide What Is Good for Humanity – Wired

What Crazy Dash Cam Videos Teach Us About Self-Driving Cars

THE FIRST SELF-DRIVING CARS are expected to hit showrooms within five years. Their autonomous capabilities will be largely limited to highways, where there aren’t things like pedestrians and cyclists to deal with, and you won’t fully cede control. As long as the road is clear, the car’s in charge. But when all that computing power senses trouble, like construction or rough weather, it will have you take the wheel.

The problem is, that switch will not—because it cannot—happen immediately.

The primary benefit of autonomous technology is to increase safety and decrease congestion. A secondary upside to letting the car do the driving is that it lets you focus on crafting pithy tweets, texting, or doing anything else you’d rather be doing. And while any rules the feds concoct likely will prohibit catching Zs behind the wheel, there’s no arguing that someone won’t try it.

Audi’s testing has shown it takes an average of 3 to 7 seconds—and as long as 10—for a driver to snap to attention and take control, even when prompted by flashing lights and verbal warnings. This means engineers must ensure an autonomous Audi can handle any situation for at least that long. This is not insignificant, because a lot can happen in 10 seconds, especially when a vehicle is moving more than 100 feet per second.
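
As a back-of-the-envelope illustration of why that hand-off window matters, here is a minimal sketch using only the figures quoted above; the function and script are purely illustrative, not part of Audi’s testing.

```python
# Sketch only: distance a car covers during the driver hand-off window.
# The 3/7/10-second takeover times and ~100 ft/s speed come from the
# figures quoted above; everything else is illustrative.

def handoff_distance_ft(speed_ft_per_s: float, takeover_time_s: float) -> float:
    """Distance covered before the driver takes back control."""
    return speed_ft_per_s * takeover_time_s

if __name__ == "__main__":
    speed = 100.0  # roughly 68 mph -- "more than 100 feet per second"
    for t in (3, 7, 10):  # average and worst-case takeover times
        print(f"{t:>2} s at {speed:.0f} ft/s -> {handoff_distance_ft(speed, t):,.0f} ft")
    # Output: 300 ft, 700 ft, 1,000 ft -- at the worst case the car travels
    # roughly three football fields before the human is back in control.
```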

[…]

The point is, the world’s highways are a crazy, unpredictable place where anything can happen. And they don’t even have the pedestrians and cyclists and buses and taxis and delivery vans and countless other things that make autonomous driving in an urban setting so tricky. So how do you prepare for every situation imaginable?

Ref: What Crazy Dash Cam Videos Teach Us About Self-Driving Cars – Wired

The Ethical Dangers of AI

The AI community has begun to take the downside risk of AI very seriously. I attended a Future of AI workshop in January of 2015 in Puerto Rico sponsored by the Future of Life Institute. The ethical consequences of AI were front and center. There are four key thrusts the AI community is focusing research on to get better outcomes with future AIs:

Verification – Research into methods of guaranteeing that the systems we build actually meet the specifications we set.

Validation – Research into ensuring that the specifications, even if met, do not result in unwanted behaviors and consequences.

Security – Research on building systems that are increasingly difficult to tamper with – internally or externally.

Control – Research to ensure that we can interrupt AI systems (even with other AIs) if and when something goes wrong, and get them back on track.

These aren’t just philosophical or ethical considerations; they are system design issues. I think we’ll see a greater focus on these kinds of issues not just in AI, but in software generally as we develop systems with more power and complexity.
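
As one illustration of what treating these thrusts as design issues might look like, here is a minimal sketch of the “Control” idea: a wrapper that an external operator (or another AI) can interrupt and force into a safe fallback. This is not a technique from the workshop; the class, method names, and fallback behavior are assumptions for illustration.

```python
# Hypothetical sketch of the "Control" thrust as a design issue: an agent
# wrapper with an external interrupt that always overrides the policy.
from typing import Any, Callable

class InterruptibleAgent:
    def __init__(self, policy: Callable[[Any], Any], safe_action: Any):
        self.policy = policy            # the learned or programmed decision rule
        self.safe_action = safe_action  # what to fall back to when interrupted
        self.interrupted = False

    def interrupt(self) -> None:
        """External override, e.g. from a human operator or a monitoring AI."""
        self.interrupted = True

    def act(self, observation: Any) -> Any:
        # Control in miniature: the external signal wins over the policy's choice.
        if self.interrupted:
            return self.safe_action
        return self.policy(observation)

agent = InterruptibleAgent(policy=lambda obs: "proceed", safe_action="halt")
print(agent.act("sensor frame"))  # -> proceed
agent.interrupt()
print(agent.act("sensor frame"))  # -> halt
```

In this sketch, verification and validation would roughly amount to proving and testing that the interrupt really does dominate the policy under all conditions, and security to ensuring no one can tamper with it.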

Will AIs ever be completely risk free? I don’t think so. Humans are not risk free! There is a predator/prey aspect to this in terms of malicious groups who choose to develop these technologies in harmful ways. However, the vast majority of people, including researchers and developers in AI, are not malicious. Most of the world’s intellect and energy will be spent on building society up, not tearing it down. In spite of this, we need to do a better job anticipating the potential consequences of our technologies, and being proactive about creating the outcomes that improve human health and the environment. That is a particular challenge with AI technology that can improve itself. Meeting this challenge will make it much more likely that we can succeed in reaching for the stars.

Ref: Interview: Neil Jacobstein Discusses Future of Jobs, Universal Basic Income and the Ethical Dangers of AI – SingularityHub

Death by Robot

Ronald Arkin, a roboticist at Georgia Tech, has received grants from the military to study how to equip robots with a set of moral rules. “My main goal is to reduce the number of noncombatant casualties in warfare,” he says. His lab developed what he calls an “ethical adapter” that helps the robot emulate guilt. It’s set in motion when the program detects a difference between how much destruction is expected when using a particular weapon and how much actually occurs. If the difference is too great, the robot’s guilt level reaches a certain threshold, and it stops using the weapon. Arkin says robots sometimes won’t be able to parse more complicated situations in which the right answer isn’t a simple shoot/don’t shoot decision. But on balance, he says, they will make fewer mistakes than humans, whose battlefield behavior is often clouded by panic, confusion or fear.
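
Based only on the description above, a rough sketch of that guilt mechanism might look like the following. Arkin’s actual “ethical adapter” is certainly more elaborate; the damage scale and threshold here are invented for illustration.

```python
# Hypothetical sketch of the guilt mechanism described above: guilt accumulates
# when observed destruction exceeds the prediction, and past a threshold the
# weapon is withheld. Values and names are illustrative, not Arkin's code.

class EthicalAdapter:
    def __init__(self, guilt_threshold: float = 1.0):
        self.guilt = 0.0
        self.guilt_threshold = guilt_threshold

    def record_engagement(self, expected_damage: float, observed_damage: float) -> None:
        # Guilt grows only when reality turns out worse than predicted.
        self.guilt += max(0.0, observed_damage - expected_damage)

    def weapon_permitted(self) -> bool:
        return self.guilt < self.guilt_threshold

adapter = EthicalAdapter(guilt_threshold=1.0)
adapter.record_engagement(expected_damage=0.2, observed_damage=0.9)  # worse than expected
adapter.record_engagement(expected_damage=0.3, observed_damage=0.8)  # worse again
print(adapter.weapon_permitted())  # False: accumulated guilt (1.2) crossed the threshold
```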

A robot’s lack of emotion is precisely what makes many people uncomfortable with the idea of trying to give it human characteristics. Death by robot is an undignified death, Peter Asaro, an affiliate scholar at the Center for Internet and Society at Stanford Law School, said in a speech in May at a United Nations conference on conventional weapons in Geneva. A machine “is not capable of considering the value of those human lives” that it is about to end, he told the group. “And if they’re not capable of that and we allow them to kill people under the law, then we all lose dignity, in the way that if we permit slavery, it’s not just the suffering of those who are slaves but all of humanity that suffers the indignity that there are any slaves at all.” The U.N. will take up questions about the uses of autonomous weapons again in April.


Ref: Death by Robot – NY Times

Self-driving cars: safer, but what of their morals

It’s relatively easy to write computer code that directs the car how to respond to a sudden dilemma. The hard part is deciding what that response should be.

“The problem is, who’s determining what we want?” asks Jeffrey Miller, a University of Southern California professor who develops driverless vehicle software. “You’re not going to have 100 percent buy-in that says, ‘Hit the guy on the right.’”

Companies that are testing driverless cars are not focusing on these moral questions.

The company most aggressively developing self-driving cars isn’t a carmaker at all. Google has invested heavily in the technology, driving hundreds of thousands of miles on roads and highways in tricked-out Priuses and Lexus SUVs. Leaders at the Silicon Valley giant have said they want to get the technology to the public by 2017.

For now, Google is focused on mastering the most common driving scenarios, programming the cars to drive defensively in hopes of avoiding the rare instances when an accident is truly unavoidable.

“People are philosophizing about it, but the question about real-world capability and real-world events that can affect us, we really haven’t studied that issue,” said Ron Medford, the director of safety for Google’s self-driving car project.

[…]

Technological advances will only add to the complexity, especially when in-car sensors become so acute they can, for example, differentiate between a motorcyclist wearing a helmet and a companion riding without one. If a collision is inevitable, should the car hit the person with a helmet because the injury risk might be less? But that would penalize the person who took extra precautions.
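
To make the dilemma concrete, here is a deliberately naive sketch, not anyone’s actual system, of a “minimize expected injury” rule. The risk numbers are invented; the point is only that such a rule mechanically singles out the rider who took the precaution.

```python
# Illustrative only: a naive injury-minimizing rule applied to the helmet
# dilemma above. The risk figures are invented, not real crash statistics.

def choose_target(injury_risk: dict) -> str:
    """Pick whichever candidate has the lowest estimated injury risk."""
    return min(injury_risk, key=injury_risk.get)

injury_risk = {
    "motorcyclist_with_helmet": 0.4,     # assumed value
    "motorcyclist_without_helmet": 0.8,  # assumed value
}
print(choose_target(injury_risk))  # -> motorcyclist_with_helmet
```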

Lin said he has discussed the ethics of driverless cars with Google as well as automakers including Tesla, Nissan and BMW. As far as he knows, only BMW has formed an internal group to study the issue.

Uwe Higgen, head of BMW’s group technology office in Silicon Valley, said the automaker has brought together specialists in technology, ethics, social impact, and the law to discuss a range of issues related to cars that do ever-more driving instead of people.

“This is a constant process going forward,” Higgen said.


Ref: Self-driving cars: safer, but what of their morals – HuffingtonPost

2014: A year of progress (Stop Killer Robots)

Spurred on by the campaign’s non-governmental organizations (NGOs) as well as by think tanks and academics, 2014 saw notable diplomatic progress and increased awareness in capitals around the world of the challenges posed by autonomous warfare, but there were few signals that national policy is any closer to being developed. Only two nations have stated policy on autonomous weapons systems: a 2012 US Department of Defense directive permits the development and use of fully autonomous systems that deliver only non-lethal force, while the UK Ministry of Defence has stated that it has “no plans to replace skilled military personnel with fully autonomous systems.”

Five nations—Cuba, Ecuador, Egypt, Pakistan, and the Holy See—have expressed support for the objective of a preemptive ban on fully autonomous weapons, but have yet to execute that commitment in law or policy. A number of nations have indicated support for the principle of human control over the selection of targets and use of force, indicating they see a need to draw the line at some point.

[…]

The year opened with a resolution by the European Parliament on 27 February on the use of armed drones that included a call to “ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.” Sponsored by the Greens/European Free Alliance group of Members of the European Parliament with cross-party support, the resolution was adopted by a vote of 534–49.

The first informal CCW meeting of experts held at the United Nations (UN) in Geneva on 13-16 May attracted “record attendance” with the participation of 86 states, UN agencies, the ICRC, and the Campaign to Stop Killer Robots. The campaign’s delegation contributed actively throughout the meeting, making statements in plenary, issuing briefing papers and reports, hosting four consecutive side events, and briefing media throughout. The chair and vice-chair of the International Committee for Robot Arms Control (ICRAC) gave expert presentations at the meeting, which ICRAC had urged be convened since 2009.

The 2014 experts meeting reviewed technical, legal, ethical, and operational questions relating to the emerging technology of lethal autonomous weapons systems, but did not take any decisions. Ambassador Jean-Hugues Simon-Michel of France provided a report of the meeting in his capacity as chair that summarized the main areas of interest and recommended further talks in 2015.

[…]

The report notes how experts and delegations described the potential for autonomous weapons systems to be “game changers” in military affairs, but observed there appeared to be little military interest in deploying fully autonomous weapons systems because of the need to retain human control and concerns over operational risks, including vulnerability to cyber attacks, lack of predictability, difficulties of adapting to a complex environment, and challenges of interoperability. Delegates also considered proliferation and the potential impact of autonomous weapons on international peace and security.

Delegates considered the impact of the development of autonomous weapons systems on human dignity, highlighting the devolution of life and death decisions to a machine as a key ethical concern. Some asked whether a machine could acquire the capacities of moral reasoning and human judgment that underpin respect for international humanitarian law principles, and challenged the capacity of a machine to respond to a moral dilemma.

There was acknowledgment that international humanitarian and human rights law applies to all new weapons but views were divided as to whether the weapons would be illegal under existing law or permitted in certain circumstances. The imperative of maintaining meaningful human control over targeting and attack decisions emerged as the primary point of common ground at the meeting.

[…]

Campaign representatives participated in discussions on autonomous weapons in 2014 convened by the Geneva Academy of International Humanitarian Law and Human Rights, which issued a briefing paper in November on legal dimensions of the issue, as well as at the Washington, DC-based Center for New American Security, which began a project on “ethical autonomy” in 2014. Campaigners spoke at numerous academic events this year, including at Oxford University, University of California-Santa Barbara, and University of Pennsylvania Law School. They also presented at events convened by think tanks, often in cooperation with government, such as the EU Non-Proliferation Consortium in Brussels and the UN-South Korea non-proliferation forum on Jeju Island. The campaign features in a Stockholm International Peace Research Institute (SIPRI) chapter on the “governance of autonomous weapons” included for the first time in the 2014 Yearbook edition.

Ref: 2014: A year of progress – Stop Killer Robots

How the Pentagon’s Skynet Would Automate War

Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years. By then, they believe, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.

Last week, US defense secretary Chuck Hagel announced the ‘Defense Innovation Initiative’—a sweeping plan to identify and develop cutting edge technology breakthroughs “over the next three to five years and beyond” to maintain global US “military-technological superiority.” Areas to be covered by the DoD programme include robotics, autonomous systems, miniaturization, Big Data and advanced manufacturing, including 3D printing.

[…]

A key area emphasized by the Wells and Kadtke study is improving the US intelligence community’s ability to automatically analyze vast data sets without the need for human involvement.

Pointing out that “sensitive personal information” can now be easily mined from online sources and social media, they call for policies on “Personally Identifiable Information (PII) to determine the Department’s ability to make use of information from social media in domestic contingencies”—in other words, to determine under what conditions the Pentagon can use private information on American citizens obtained via data-mining of Facebook, Twitter, LinkedIn, Flickr and so on.

Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through “monitoring of individuals and populations using sensors, wearable devices, and IoT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.” The Pentagon can build capacity for this “in partnership with large private sector providers, where the most innovative solutions are currently developing.”

[…]

Within this context of Big Data and cloud robotics, Kadtke and Wells enthuse that as unmanned robotic systems become more intelligent, the cheap manufacture of “armies of Kill Bots that can autonomously wage war” will soon be a reality. Robots could also become embedded in civilian life to perform “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

[…]

Perhaps the most disturbing dimension among the NDU study’s insights is the prospect that within the next decade, artificial intelligence (AI) research could spawn “strong AI”—or at least a form of “weak AI” that approximates some features of the former.

Strong AI should be able to simulate a wide range of human cognition, and include traits like consciousness, sentience, sapience, or self-awareness. Many now believe, Kadtke and Wells observe, that “strong AI may be achieved sometime in the 2020s.”

[…]

Nearly half the people on the US government’s terrorism watch list of “known or suspected terrorists” have “no recognized terrorist group affiliation,” and more than half the victims of CIA drone-strikes over a single year were “assessed” as “Afghan, Pakistani and unknown extremists”—among others who were merely “suspected, associated with, or who probably” belonged to unidentified militant groups. Multiple studies show that a substantive number of drone strike victims are civilians—and a secret Obama administration memo released this summer under Freedom of Information reveals that the drone programme authorizes the killing of civilians as inevitable collateral damage.

Indeed, flawed assumptions in the Pentagon’s classification systems for threat assessment mean that even “nonviolent political activists” might be conflated with potential ‘extremists’, who “support political violence” and thus pose a threat to US interests.


Ref: How the Pentagon’s Skynet Would Automate War – Motherboard

AI Has Arrived, and That Really Worries the World’s Brightest Minds

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

[…]

Deciding the dos and don’ts of scientific research is the kind of baseline ethical work that molecular biologists did during the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent manmade genetically modified organisms from posing a threat to the public. The Asilomar conference had a much more concrete result than the Puerto Rico AI confab.


Ref: AI Has Arrived, and That Really Worries the World’s Brightest Minds – Wired