
Self-driving cars: safer, but what of their morals

It’s relatively easy to write computer code that directs the car how to respond to a sudden dilemma. The hard part is deciding what that response should be.
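To make that gap concrete, here is a minimal, hypothetical sketch in Python (the function, option data, and policies are invented for illustration, not any manufacturer's code): the selection logic takes only a few lines, while everything contentious is packed into which policy gets passed in.

```python
# Hypothetical sketch: encoding an unavoidable-collision response is easy;
# deciding which policy to hard-code is the hard, contested part.

def choose_maneuver(options, policy):
    """Pick the option the chosen policy scores lowest (least harm)."""
    return min(options, key=policy)

# Two equally easy-to-write but ethically very different policies.
minimize_total_injuries = lambda o: o["expected_injuries"]
protect_occupants_first = lambda o: (o["occupant_injuries"], o["expected_injuries"])

options = [
    {"name": "swerve_left", "expected_injuries": 2, "occupant_injuries": 0},
    {"name": "brake_straight", "expected_injuries": 1, "occupant_injuries": 1},
]

print(choose_maneuver(options, minimize_total_injuries)["name"])   # brake_straight
print(choose_maneuver(options, protect_occupants_first)["name"])   # swerve_left
```

Either policy "works" as code; the point is that nothing in the software settles which one society should accept.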

“The problem is, who’s determining what we want?” asks Jeffrey Miller, a University of Southern California professor who develops driverless vehicle software. “You’re not going to have 100 percent buy-in that says, ‘Hit the guy on the right.'”

Companies that are testing driverless cars are not focusing on these moral questions.

The company most aggressively developing self-driving cars isn’t a carmaker at all. Google has invested heavily in the technology, driving hundreds of thousands of miles on roads and highways in tricked-out Priuses and Lexus SUVs. Leaders at the Silicon Valley giant have said they want to get the technology to the public by 2017.

For now, Google is focused on mastering the most common driving scenarios, programming the cars to drive defensively in hopes of avoiding the rare instances when an accident is truly unavoidable.

“People are philosophizing about it, but the question about real-world capability and real-world events that can affect us, we really haven’t studied that issue,” said Ron Medford, the director of safety for Google’s self-driving car project.

[…]

Technological advances will only add to the complexity. Especially when in-car sensors become so acute they can, for example, differentiate between a motorcyclist wearing a helmet and a companion riding without one. If a collision is inevitable, should the car hit the person with a helmet because the injury risk might be less? But that would penalize the person who took extra precautions.

Lin said he has discussed the ethics of driverless cars with Google as well as automakers including Tesla, Nissan and BMW. As far as he knows, only BMW has formed an internal group to study the issue.

Uwe Higgen, head of BMW’s group technology office in Silicon Valley, said the automaker has brought together specialists in technology, ethics, social impact, and the law to discuss a range of issues related to cars that do ever-more driving instead of people.

“This is a constant process going forward,” Higgen said.

 

Ref: Self-driving cars: safer, but what of their morals – Huffington Post

2014: A year of progress (Stop Killer Robots)

Spurred on by the campaign’s non-governmental organizations (NGOs) as well as by think tanks and academics, 2014 saw notable diplomatic progress and increased awareness in capitals around the world of the challenges posed by autonomous warfare, but there were few signals that national policy is any closer to being developed. Only two nations have stated policy on autonomous weapons systems: a 2012 US Department of Defense directive permits the development and use of fully autonomous systems that deliver only non-lethal force, while the UK Ministry of Defence has stated that it has “no plans to replace skilled military personnel with fully autonomous systems.”

Five nations—Cuba, Ecuador, Egypt, Pakistan, and the Holy See—have expressed support for the objective of a preemptive ban on fully autonomous weapons, but have yet to execute that commitment in law or policy. A number of nations have indicated support for the principle of human control over the selection of targets and use of force, indicating they see a need to draw the line at some point.

[…]

The year opened with a resolution by the European Parliament on 27 February on the use of armed drones that included a call to “ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.” Sponsored by the Greens/European Free Alliance group of Members of the European Parliament with cross-party support, the resolution was adopted by a vote of 534–49.

The first informal CCW meeting of experts held at the United Nations (UN) in Geneva on 13-16 May attracted “record attendance” with the participation of 86 states, UN agencies, the ICRC, and the Campaign to Stop Killer Robots. The campaign’s delegation contributed actively throughout the meeting, making statements in plenary, issuing briefing papers and reports, hosting four consecutive side events, and briefing media throughout. The chair and vice-chair of the International Committee for Robot Arms Control (ICRAC) gave expert presentations at the meeting, which ICRAC had urged be convened since 2009.

The 2014 experts meeting reviewed technical, legal, ethical, and operational questions relating to the emerging technology of lethal autonomous weapons systems, but did not take any decisions. Ambassador Jean-Hugues Simon-Michel of France provided a report of the meeting in his capacity as chair that summarized the main areas of interest and recommended further talks in 2015.

[…]

The report notes how experts and delegations described the potential for autonomous weapons systems to be “game changers” in military affairs, but observed there appeared to be little military interest in deploying fully autonomous weapons systems because of the need to retain human control and concerns over operational risks, including vulnerability to cyber attacks, lack of predictability, difficulties of adapting to a complex environment, and challenges of interoperability. Delegates also considered proliferation and the potential impact of autonomous weapons on international peace and security.

Delegates considered the impact of the development of autonomous weapons systems on human dignity, highlighting the devolution of life-and-death decisions to a machine as a key ethical concern. Some asked whether a machine could acquire the capacities of moral reasoning and human judgment that underpin respect for international humanitarian law principles, and challenged the capacity of a machine to respond to a moral dilemma.

There was acknowledgment that international humanitarian and human rights law applies to all new weapons but views were divided as to whether the weapons would be illegal under existing law or permitted in certain circumstances. The imperative of maintaining meaningful human control over targeting and attack decisions emerged as the primary point of common ground at the meeting.

[…]

Campaign representatives participated in discussions on autonomous weapons in 2014 convened by the Geneva Academy of International Humanitarian Law and Human Rights, which issued a briefing paper in November on legal dimensions of the issue, as well as at the Washington, DC-based Center for a New American Security, which began a project on “ethical autonomy” in 2014. Campaigners spoke at numerous academic events this year, including at Oxford University, University of California-Santa Barbara, and University of Pennsylvania Law School. They also presented at events convened by think tanks, often in cooperation with government, such as the EU Non-Proliferation Consortium in Brussels and the UN-South Korea non-proliferation forum on Jeju Island. The campaign features in a Stockholm International Peace Research Institute (SIPRI) chapter on the “governance of autonomous weapons,” included for the first time in the 2014 Yearbook edition.

Ref: 2014: A year of progress – Stop Killer Robots

How the Pentagon’s Skynet Would Automate War

Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years. By then, they believe, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.

Last week, US defense secretary Chuck Hagel announced the ‘Defense Innovation Initiative’—a sweeping plan to identify and develop cutting-edge technology breakthroughs “over the next three to five years and beyond” to maintain global US “military-technological superiority.” Areas to be covered by the DoD programme include robotics, autonomous systems, miniaturization, Big Data and advanced manufacturing, including 3D printing.

[…]

A key area emphasized by the Wells and Kadtke study is improving the US intelligence community’s ability to automatically analyze vast data sets without the need for human involvement.

Pointing out that “sensitive personal information” can now be easily mined from online sources and social media, they call for policies on “Personally Identifiable Information (PII) to determine the Department’s ability to make use of information from social media in domestic contingencies”—in other words, to determine under what conditions the Pentagon can use private information on American citizens obtained via data-mining of Facebook, Twitter, LinkedIn, Flickr and so on.

Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through “monitoring of individuals and populations using sensors, wearable devices, and IoT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.” The Pentagon can build capacity for this “in partnership with large private sector providers, where the most innovative solutions are currently developing.”

[…]

Within this context of Big Data and cloud robotics, Kadtke and Wells enthuse that as unmanned robotic systems become more intelligent, the cheap manufacture of “armies of Kill Bots that can autonomously wage war” will soon be a reality. Robots could also become embedded in civilian life to perform “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

[…]

Perhaps the most disturbing dimension among the NDU study’s insights is the prospect that within the next decade, artificial intelligence (AI) research could spawn “strong AI”—or at least a form of “weak AI” that approximates some features of the former.

Strong AI should be able to simulate a wide range of human cognition, and include traits like consciousness, sentience, sapience, or self-awareness. Many now believe, Kadtke and Wells observe, that “strong AI may be achieved sometime in the 2020s.”

[…]

Nearly half the people on the US government’s terrorism watch list of “known or suspected terrorists” have “no recognized terrorist group affiliation,” and more than half the victims of CIA drone-strikes over a single year were “assessed” as “Afghan, Pakistani and unknown extremists”—among others who were merely “suspected, associated with, or who probably” belonged to unidentified militant groups. Multiple studies show that a substantial number of drone strike victims are civilians—and a secret Obama administration memo released this summer under Freedom of Information reveals that the drone programme authorizes the killing of civilians as inevitable collateral damage.

Indeed, flawed assumptions in the Pentagon’s classification systems for threat assessment mean that even “nonviolent political activists” might be conflated with potential ‘extremists’, who “support political violence” and thus pose a threat to US interests.

 

Ref: How the Pentagon’s Skynet Would Automate War – Motherboard

AI Has Arrived, and That Really Worries the World’s Brightest Minds

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

[…]

Deciding the dos and don’ts of scientific research is the kind of baseline ethical work that molecular biologists did during the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent man-made genetically modified organisms from posing a threat to the public. The Asilomar conference had a much more concrete result than the Puerto Rico AI confab.

 

Ref: AI Has Arrived, and That Really Worries the World’s Brightest Minds – Wired

Stanford to host 100-year study on AI

Stanford University has invited leading thinkers from several institutions to begin a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.

[…]

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

[…]

“I’m very optimistic about the future and see great value ahead for humanity with advances in systems that can perceive, learn and reason,” said Horvitz, a distinguished scientist and managing director at Microsoft Research, who initiated AI100 as a private philanthropic initiative. “However, it is difficult to anticipate all of the opportunities and issues, so we need to create an enduring process.”

 

Ref: Stanford to host 100-year study on artificial intelligence – Stanford News

How Facebook Knows You Better Than Your Friends Do

This week, researchers from the University of Cambridge and Stanford University released a study indicating that Facebook may be better at judging people’s personalities than their closest friends, their spouses, and in some cases, even themselves. The study compared people’s Facebook “Likes” to their own answers in a personality questionnaire, as well as the answers provided by their friends and family, and found that Facebook outperformed any human, no matter their relation to the subjects.

[…]

The researchers began with a 100-item personality questionnaire that went viral after David Stillwell, a psychometrics professor at Cambridge, posted it on Facebook back in 2007. Respondents answered questions that were meant to root out five key personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. Based on that survey, the researchers scored each respondent in all five traits.

Then, the researchers created an algorithm and fed it with every respondent’s personality scores, as well as their “Likes,” to which subjects voluntarily gave researchers access. The researchers only included “Likes” that respondents shared with at least 20 other respondents. That enabled the model to connect certain “Likes” to certain personality traits. If, for instance, several people who liked Snooki on Facebook also scored high in the extroverted category, the system would learn that Snooki lovers are more outgoing. The more “Likes” the system saw, the better its judgment became.
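As a rough sketch of the pipeline described above, the snippet below fits a simple linear model from a toy Likes matrix to one questionnaire trait. The model choice (ridge regression) and the data are assumptions made for illustration; they are not the study’s actual method or code.

```python
# Simplified sketch of the Likes-to-personality idea described above.
# Assumptions (not from the article): a plain linear model stands in for
# whatever the researchers actually used, and the data are toy arrays.
import numpy as np
from sklearn.linear_model import Ridge

# Rows = respondents, columns = Facebook "Likes" (1 = liked that page).
# In the real study, only Likes shared by at least 20 respondents were kept.
likes = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])
# Questionnaire-derived extraversion scores for the same respondents.
extraversion = np.array([0.8, 0.6, 0.2, 0.1])

model = Ridge(alpha=1.0).fit(likes, extraversion)

# Predict extraversion for a new respondent from their Likes alone.
new_user_likes = np.array([[1, 0, 0, 1]])
print(model.predict(new_user_likes))
```

The more respondents and Likes such a model sees, the more reliable the learned associations become, which is the effect the study reports.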

In the end, the researchers found that with information on just ten Facebook “Likes,” the algorithm was more accurate than the average person’s colleague. With 150 “Likes,” it could outsmart people’s families, and with 300 “Likes,” it could best a person’s spouse.

[…]

While the researchers admit the results were surprising, they say there’s good reason for it. For starters, computers don’t forget. While our judgment of people may change based on our most recent — or most dramatic — interactions with them, computers give a person’s entire history equal weight. Computers also don’t have experiences or opinions of their own. They’re not limited by their own cultural references, and they don’t find certain personality traits, likes, or interests good or bad. “Computers don’t understand that certain personalities are more socially desirable,” Kosinski says. “Computers don’t like any of us.”

 

Ref: How Facebook Knows You Better Than Your Friends Do – Wired

Algorithms Are Great and All, But They Can Also Ruin Lives

On April 5, 2011, 41-year-old John Gass received a letter from the Massachusetts Registry of Motor Vehicles. The letter informed Gass that his driver’s license had been revoked and that he should stop driving, effective immediately. The only problem was that, as a conscientious driver who had not received so much as a traffic violation in years, Gass had no idea why it had been sent.

After several frantic phone calls, followed up by a hearing with Registry officials, he learned the reason: his image had been automatically flagged by a facial-recognition algorithm designed to scan through a database of millions of state driver’s licenses looking for potential criminal false identities. The algorithm had determined that Gass looked sufficiently like another Massachusetts driver that foul play was likely involved—and the automated letter from the Registry of Motor Vehicles was the end result.
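For illustration only, the sketch below shows the general shape of such a system as one might reconstruct it, with invented embeddings, names, and threshold rather than anything from the RMV’s actual software: two licenses whose face vectors fall above a similarity cutoff get flagged, which is exactly how a law-abiding look-alike ends up in Gass’s position.

```python
# Illustrative sketch only: a thresholded similarity check of the kind that
# can produce false positives like the Gass case. The embeddings, threshold,
# and IDs are invented; this is not the RMV's actual system.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.95  # assumed cutoff above which two licenses are flagged

def flag_possible_fraud(new_photo_vec, database):
    """Return license IDs whose photos look 'too similar' to the new photo."""
    return [
        license_id
        for license_id, vec in database.items()
        if cosine_similarity(new_photo_vec, vec) >= THRESHOLD
    ]

# Two genuinely different drivers whose face embeddings happen to be close
# will both be flagged -- and the burden of clearing the error falls on them.
database = {"driver_042": np.array([0.90, 0.10, 0.40])}
print(flag_possible_fraud(np.array([0.88, 0.12, 0.41]), database))
```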

The RMV itself was unsympathetic, claiming that it was the accused individual’s “burden” to clear his or her name in the event of any mistakes, and arguing that the pros of protecting the public far outweighed the inconvenience to the wrongly targeted few.

John Gass is hardly alone in being a victim of algorithms gone awry. In 2007, a glitch in the California Department of Health Services’ new automated computer system terminated the benefits of thousands of low-income seniors and people with disabilities. Without their premiums paid, Medicare canceled those citizens’ health care coverage.

[…]

Equally alarming is the possibility that an algorithm may falsely profile an individual as a terrorist: a fate that befalls roughly 1,500 unlucky airline travelers each week. Those fingered in the past as the result of data-matching errors include former Army majors, a four-year-old boy, and an American Airlines pilot—who was detained 80 times over the course of a single year.

[…]

“We are all so scared of human bias and inconsistency,” says Danielle Citron, professor of law at the University of Maryland. “At the same time, we are overconfident about what it is that computers can do.”

The mistake, Citron suggests, is that we “trust algorithms, because we think of them as objective, whereas the reality is that humans craft those algorithms and can embed in them all sorts of biases and perspectives.” To put it another way, a computer algorithm might be unbiased in its execution, but that does not mean there is no bias encoded within it.
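A toy example makes the distinction concrete. In the hypothetical scoring function below (the features and weights are invented, not drawn from any real threat-assessment system), the code runs identically every time, yet the designer’s choice of features and weights is exactly where the bias Citron describes lives.

```python
# Toy illustration: deterministic "unbiased" execution, biased design.
# The features and weights are invented for this sketch.

def risk_score(person):
    score = 0.0
    score += 2.0 * person.get("attended_protests", 0)   # designer chose to
    score += 1.5 * person.get("foreign_travel", 0)      # treat these as risky
    score += 0.1 * person.get("violent_offenses", 0)    # ...and this as minor
    return score

activist = {"attended_protests": 5, "violent_offenses": 0}
offender = {"attended_protests": 0, "violent_offenses": 3}

# True: the nonviolent activist outranks the violent offender, not because
# the arithmetic is wrong, but because of what the designer decided to count.
print(risk_score(activist) > risk_score(offender))
```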

 

Ref: Algorithms Are Great and All, But They Can Also Ruin Lives – Wired

Killer Robots

One of them is the Skunk, designed for crowd control. It can douse demonstrators with teargas.

“There could be a dignity issue here; being herded by drones would be like herding cattle,” he said.

But at least drones have a human at the controls.

“[Otherwise] you are giving [the power of life and death] to a machine,” said Heyns. “Normally, there is a human being to hold accountable.

“If it’s a robot, you can put it in jail until its batteries run flat but that’s not much of a punishment.”

Heyns said the advent of the new generation of weapons made it necessary for laws to be introduced that would prohibit the use of systems that could be operated without a significant level of human control.

“Technology is a tool and it should remain a tool, but it is a dangerous tool and should be held under scrutiny. We need to try to define the elements of needful human control,” he said.

Several organisations have voiced concerns about autonomous weapons. The Campaign to Stop Killer Robots wants a ban on fully autonomous weapons.

 

Ref: Stop Killer Robots While We Can – Times Live