Category Archives: T – ethics

Losing Humanity: The Case against Killer Robots

On November 21, 2012, the US Department of Defense issued its first public policy on autonomy in weapons systems. Directive Number 3000.09 (the Directive) lays out guidelines for the development and use of autonomous and semi-autonomous weapon systems by the Department of Defense. The Directive also represents the first policy announcement by any country on fully autonomous weapons, which do not yet exist but would be designed to select and engage targets without human intervention.

The Directive does not put in place a preemptive ban on fully autonomous weapons. For a period of up to ten years, however, it allows the Department of Defense to develop or use only fully autonomous systems that deliver non-lethal force, unless department officials waive the policy at a high level. Importantly, the Directive also recognizes some of the dangers to civilians of fully autonomous weapons and the need for prohibitions or controls, including the basic requirement that a human being be “in the loop” when decisions are made to use lethal force. The Directive is in effect a moratorium on fully autonomous weapons with the possibility for certain waivers. It also establishes guidelines for other types of autonomous and semi-autonomous systems.

While a positive step, the Directive does not resolve the moral, legal, and practical problems posed by the potential development of fully autonomous systems. As noted, it is initially valid for a period of only five to ten years and may be overridden by high-level Pentagon officials. It establishes testing requirements that may be unfeasible, fails to address all technological concerns, and uses ambiguous terms. It also appears to allow for transfer of fully autonomous systems to other nations and does not apply to other parts of the US government, such as the Central Intelligence Agency (CIA). Finally, it lays out a policy of voluntary self-restraint that may not be sustainable if other countries begin to deploy fully autonomous weapons systems and the United States feels pressure to follow suit.

 

Ref: Review of the 2012 US Policy on Autonomy in Weapons Systems – Human Rights Watch
Ref: Say no to killer robots – The Engineer
Ref: Losing Humanity: The Case against Killer Robots – Human Rights Watch

The Turing Oath

In light of some recent unethical behaviour in software development, and after a user on Hacker News suggested that software developers have their own “version” of a Hippocratic Oath, I figured I’d try my hand at a first draft.

The oath deals with user privacy and the ethical handling of personal information. It is aimed primarily at web applications that hold users’ personal data. It’s named after Alan Turing because he was a remarkable person whose work advanced computer science in great leaps.

The Promise to Develop Ethical Software / The Turing Oath

1. Privacy

I swear to respect the privacy of the user and secure all personal information in accordance with current standards.

I swear to not invade the private space of the user by sending spam or exploiting their trust.

I swear to be transparent in the information I keep on the user and to allow them to access it at any time.

I swear to release and remove personal user information at their request.

2. Security

I swear to responsibly disclose any and all software vulnerabilities that come to my attention so that they may be fixed.

I swear to not design software for the purpose of exploiting a vulnerability, damaging another computer system or exploiting a user with the intent to cause harm.

3. Patents

I swear to not become a patent troll, stifling software innovation by applying for or enforcing patents on algorithms and software features that clearly should not be patented.

4. Criminal Activity

I swear to take no action to aid or abet the perpetration of crimes against humanity or other war crimes.

 

Ref: The Turing Oath – Github

Campaign to Stop Killer Robots

 

Over the past decade, the expanded use of unmanned armed vehicles has dramatically changed warfare, bringing new humanitarian and legal challenges. Now rapid advances in technology are resulting in efforts to develop fully autonomous weapons. These robotic weapons would be able to choose and fire on targets on their own, without any human intervention. This capability would pose a fundamental challenge to the protection of civilians and to compliance with international human rights and humanitarian law.

Several nations with high-tech militaries, including China, Israel, Russia, the United Kingdom, and the United States, are moving toward systems that would give greater combat autonomy to machines. If one or more chooses to deploy fully autonomous weapons, a large step beyond remote-controlled armed drones, others may feel compelled to abandon policies of restraint, leading to a robotic arms race. Agreement is needed now to establish controls on these weapons before investments, technological momentum, and new military doctrine make it difficult to change course.

Allowing life or death decisions to be made by machines crosses a fundamental moral line. Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war.

Replacing human troops with machines could make the decision to go to war easier, which would shift the burden of armed conflict further onto civilians. The use of fully autonomous weapons would create an accountability gap, as there is no clarity on who would be legally responsible for a robot’s actions: the commander, programmer, manufacturer, or robot itself? Without accountability, these parties would have less incentive to ensure robots did not endanger civilians, and victims would be denied the satisfaction of seeing anyone punished for the harm they suffered.

 

Ref: Campaign to Stop Killer Robots

Pygmalion <-> AI

 

Artificial intelligence is arguably the most useless technology that humans have ever aspired to possess. Actually, let me clarify. It would be useful to have a robot that could make independent decisions while, say, exploring a distant planet, or defusing a bomb. But the ultimate aspiration of AI was never just to add autonomy to a robot’s operating system. The idea wasn’t to enable a computer to search data faster by ‘understanding patterns’, or communicate with its human masters via natural language. The dream of AI was — and is — to create a machine that is conscious. AI means building a mechanical human being. And this goal, as supposedly rational technological projects go, is deeply strange.

[…]

Technology is a cultural phenomenon, and as such it is molded by our cultural values. We prefer good health to sickness so we develop medicine. We value wealth and freedom over poverty and bondage, so we invent markets and the multitudinous thingummies of comfort. We are curious, so we aim for the stars. Yet when it comes to creating conscious simulacra of ourselves, what exactly is our motive? What deep emotions drive us to imagine, and strive to create, machines in our own image? If it is not fear, or want, or curiosity, then what is it? Are we indulging in abject narcissism? Are we being unforgivably vain? Or could it be because of love?

But machines were objects of erotic speculation long before Turing entered the scene. Western literature, ancient and modern, is strewn with mechanical lovers. Consider Pygmalion, the Cypriot sculptor and favorite of Aphrodite. Ovid, in his Metamorphoses, describes him carving a perfect woman out of ivory. Her name is Galatea and she’s so lifelike that Pygmalion immediately falls in love with her. He prays to Aphrodite to make the statue come to life. The love goddess already knows a thing or two about beautiful, non-biological maidens: her husband Hephaestus has constructed several good-looking fembots to lend a hand in his Olympian workshop. She grants Pygmalion’s wish; Pygmalion kisses his perfect creation, and Galatea becomes a real woman. They live happily ever after.

[…]

As the 20th century came into its own, Pygmalion collided with modernity and its various theories about the human mind: psychoanalysis, behaviourist psychology, the tabula rasa whereby one writes the algorithm of personhood upon a clean slate. Galatea becomes Maria, the robot in Fritz Lang’s epic film Metropolis (1927); she is less innocent now, a temptress performing the manic and deeply erotic dance of Babylon in front of goggling men.

 

Ref: Love Machines – Aeon Magazine

To Save Everything, Click Here

Too much assault and battery creates a more serious problem: wrongful appropriation, as Morozov tends to borrow heavily, without attribution, from those he attacks. His critique of Google and other firms engaged in “algorithmic gatekeeping” is basically taken from Lessig’s first book, “Code and Other Laws of Cyberspace,” in which Lessig argued that technology is necessarily ideological and that choices embodied in code, unlike law, are dangerously insulated from political debate. Morozov presents these ideas as his own and, instead of crediting Lessig, bludgeons him repeatedly. Similarly, Morozov warns readers of the dangers of excessively perfect technologies as if Jonathan Zittrain hadn’t been saying the same thing for the past 10 years. His failure to credit his targets gives the misimpression that Morozov figured it all out himself and that everyone else is an idiot.

Answers from Evgeny Morozov: Recycle the Cycle I, Recycle the Cycle II

 

Ref: Book review: ‘To Save Everything, Click Here’ by Evgeny Morozov – Washington Post

Can an Algorithm be Wrong?

[…]

But there is a tension between what we understand these algorithms to be, what we need them to be, and what they in fact are. We do not have a sufficient vocabulary for assessing the intervention of these algorithms. We’re not adept at appreciating what it takes to design a tool like Twitter’s Trends – one that appears to effortlessly identify what’s going on, yet also makes distinct and motivated choices. We don’t have a language for the unexpected associations algorithms make, beyond the intention (or even comprehension) of their designers (Ananny 2011). Most importantly, we have not fully recognized how these algorithms attempt to produce representations of the wants or concerns of the public, and as such, run into the classic problem of political representation: who claims to know the mind of the public, and how do they claim to know it?

[…]

Beyond search, we are surrounded by algorithmic tools that offer to help us navigate online platforms and social networks, based not on what we want, but on what all of their users do. When Facebook, YouTube, or Digg offer to mathematically and in real time report what is “most popular” or “liked” or “most viewed” or “best selling” or “most commented” or “highest rated,” they are curating a list whose legitimacy is built on the promise that it has not been curated, that it is the product of aggregate user activity itself. When Amazon recommends a book based on matching your purchases to those of its other customers, or Demand Media commissions news based on aggregate search queries (Anderson 2011), their accuracy and relevance depend on the promise of an algorithmic calculation paired with the massive, even exhaustive, corpus of the traces we all leave.
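To make the mechanism concrete, here is a minimal sketch, with invented data and function names rather than any platform’s actual code, of the two calculations the passage describes: a “most popular” list built from nothing but aggregate activity, and a recommendation built from co-purchase overlap.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: user -> set of item IDs.
purchases = {
    "u1": {"book_a", "book_b", "book_c"},
    "u2": {"book_a", "book_c"},
    "u3": {"book_b", "book_c", "book_d"},
}

# "Most popular": a list whose legitimacy rests on being nothing more
# than a count of aggregate user activity.
popularity = Counter(item for items in purchases.values() for item in items)
print(popularity.most_common(3))

# Co-purchase counts: how often two items appear in the same history.
co_purchase = Counter()
for items in purchases.values():
    for a, b in combinations(sorted(items), 2):
        co_purchase[(a, b)] += 1

def recommend(seed_item, k=3):
    """Recommend the items most often bought alongside seed_item."""
    scores = Counter()
    for (a, b), n in co_purchase.items():
        if a == seed_item:
            scores[b] += n
        elif b == seed_item:
            scores[a] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend("book_a"))
```

Even this toy version embeds choices (what counts as a purchase, how ties are broken, over what time window activity is aggregated), which is precisely the tension above: the “uncurated” list is still a designed artifact.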

We might, then, pursue the question of the algorithm’s politics further. The Trends algorithm does have criteria built in: criteria that help produce the particular Trends results we see, criteria that are more complex and opaque than some users take them to be, criteria that could have produced the absence of the term #occupywallstreet that critics noted. But further, the criteria that animate the Trends algorithm also presume a shape and character to the public they intend to measure, and in doing so, help to construct publics in that image.
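One commonly cited example of such built-in criteria is that trending tends to reward novelty, a sudden spike above a term’s own baseline, rather than sheer volume. The sketch below assumes exactly that scoring rule for illustration; it is not a description of Twitter’s actual algorithm, and the terms and counts are invented.

```python
def trend_score(recent_count: int, baseline_count: int) -> float:
    """Score a term by how far its current volume rises above its own baseline."""
    return recent_count / max(baseline_count, 1)

terms = {
    # term: (mentions this hour, average mentions per hour over the past week)
    "#hypotheticalmeme":  (9_000,    300),   # sudden spike
    "#occupywallstreet":  (40_000, 35_000),  # huge but steady volume
}

for term, (recent, baseline) in terms.items():
    print(term, round(trend_score(recent, baseline), 2))

# #hypotheticalmeme 30.0  -- far above its baseline, so it surfaces as a trend
# #occupywallstreet 1.14  -- barely above its own (large) baseline, so it does not
```

Under a rule like this, a term can be enormously popular and still never “trend”, which is one way the absence critics noticed could arise without anyone deliberately suppressing anything.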

 

Ref: Can an Algorithm be Wrong? – Limn

Machines of Laughter and Forgetting

On this account, technology can save us a lot of cognitive effort, for “thinking” needs to happen only once, at the design stage. We’ll surround ourselves with gadgets and artifacts that will do exactly what they are meant to do — and they’ll do it in a frictionless, invisible way. “The ideal system so buries the technology that the user is not even aware of its presence,” announced the design guru Donald Norman in his landmark 1998 book, “The Invisible Computer.” But is that what we really want?

The hidden truth about many attempts to “bury” technology is that they embody an amoral and unsustainable vision. Pick any electrical appliance in your kitchen. The odds are that you have no idea how much electricity it consumes, let alone how it compares to other appliances and households. This ignorance is neither natural nor inevitable; it stems from a conscious decision by the designer of that kitchen appliance to free up your “cognitive resources” so that you can unleash your inner Oscar Wilde on “contemplating” other things. Multiply such ignorance by a few billion, and global warming no longer looks like a mystery.

Alfred North Whitehead famously claimed that civilization advances by extending the number of important operations we can perform without thinking about them. Whitehead, it seems, was either wrong or extremely selective: on many important issues, civilization only destroys itself by extending the number of important operations that we can perform without thinking about them. On many issues, we want more thinking, not less.

Take privacy. Opening browser tabs is easy, as is using our Facebook account to navigate from site to site. In fact, we often do so unthinkingly. Given that our online tools and platforms are built in a way to make our browsing experience as frictionless as possible, is it any surprise that so much of our personal information is disclosed without our ever realizing it?

This, too, is not inevitable: designed differently, our digital infrastructure could provide many more opportunities for reflection. In a recent paper, a group of Cornell researchers proposed that our browsers could bombard us with strange but provocative messages to make us alert to the very information infrastructure that some designers have done their best to conceal. Imagine being told that “you visited 592 Web sites this week. That’s .5 times the number of Web pages on the whole Internet in 1994!”
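As a rough illustration of the kind of intervention that proposal describes, here is a sketch of the message-generation step. The baseline constant is invented for the example (chosen so the sample input reproduces the quoted message); it is not a figure from the Cornell paper.

```python
# Illustrative only: the 1994 baseline is an assumed constant, not a real statistic.
PAGES_ON_WEB_1994 = 1_200

def provocative_summary(sites_visited_this_week: int) -> str:
    """Turn a raw browsing count into a deliberately jarring comparison."""
    ratio = sites_visited_this_week / PAGES_ON_WEB_1994
    return (
        f"You visited {sites_visited_this_week} Web sites this week. "
        f"That's {ratio:.1f} times the number of Web pages "
        f"on the whole Internet in 1994!"
    )

print(provocative_summary(592))
# -> You visited 592 Web sites this week. That's 0.5 times the number of Web pages on the whole Internet in 1994!
```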

The goal here is not to hit us with a piece of statistics — sheer numbers rarely lead to complex narratives — but to tell a story that can get us thinking about things we’d rather not be thinking about. So let us not give in to technophobia just yet: we should not go back to doing everything by hand just because it can lead to more thinking.

Rather, we must distribute the thinking process equally. Instead of having the designer think through all the moral and political implications of technology use before it reaches users — an impossible task — we must find a way to get users to do some of that thinking themselves.

Alas, most designers, following Wilde, think of technologies as nothing more than mechanical slaves that must maximize efficiency. But some are realizing that technologies don’t have to be just trivial problem-solvers: they can also be subversive troublemakers, making us question our habits and received ideas.

 

Ref: Machines of Laughter and Forgetting – New York Times
Ref: “Everybody Knows What You’re Doing”: A Critical Design Approach to Personal Informatics – Cornell University

 

Algorithmic Rape Jokes in Amazon

 

A t-shirt company called Solid Gold Bomb was caught selling shirts with the slogan “KEEP CALM and RAPE A LOT” on them. They also sold shirts like “KEEP CALM and CHOKE HER” and “KEEP CALM and PUNCH HER”. The Internet—especially the UK Internet—exploded.

How did this happen?

“Algorithms!”

[…]

Pete Ashton argues that—because the jokes were generated by a misbehaving script—“as mistakes go it’s a fairly excusable one, assuming they now act on it”. He suggests that the reason people got so upset was a lack of digital literacy. I suggest that the reason people got upset was that a company’s shoddy QA practices allowed a rape joke to go live.

Anyone who’s worked with software should know that the actual typing of code is a relatively small part of the overall programming work. Designing the program before you start coding and debugging it after you’ve created it make up the bulk of the job.

Generative programs are force multipliers. Small initial decisions can have massive consequences. The greater your reach, the greater your responsibility to manage your output. When Facebook makes an error that affects 0.1% of users, it means 1 million people got fucked up.
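The shirts came from exactly this kind of force multiplier: a script pairing a fixed prefix with word lists and publishing the combinations. Below is a minimal sketch of that pattern, with invented word lists rather than Solid Gold Bomb’s actual code, showing how one careless list entry fans out into published products and how even a crude review step changes the outcome.

```python
from itertools import product

# Invented word lists for illustration; the point is that a single bad entry
# in a small list multiplies into many auto-published slogans.
VERBS = ["CARRY ON", "LAUGH", "DANCE", "HIT"]      # one bad entry slips in
OBJECTS = ["", "A LOT", "MORE", "EVERY DAY"]

def generate_slogans():
    """Naively combine every verb with every object, with no review step."""
    for verb, obj in product(VERBS, OBJECTS):
        yield f"KEEP CALM and {verb} {obj}".strip()

# A crude review step: anything containing a flagged verb is held back
# for a human to look at instead of being auto-published.
FLAGGED = {"HIT"}

def reviewed_slogans():
    for slogan in generate_slogans():
        if any(bad in slogan for bad in FLAGGED):
            continue
        yield slogan

print(len(list(generate_slogans())))   # 16 combinations from 8 list entries
print(len(list(reviewed_slogans())))   # 12 after the crude filter
```

A filter like this is not a fix; it is the minimum gesture toward the QA the post is talking about. The deeper point stands: whoever builds the generator owns everything it generates.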

‘We didn’t cause a rape joke to happen, we allowed a rape joke to happen,’ is not a compelling excuse. It betrays a lack of digital literacy.

Interesting comments from people:

People, enough of the ‘A big algorithm did it and ran away’ explanations (eg. http://iam.peteashton.com/keep-calm-rape-tshirt-amazon/ …) – algorithms have politics too – @gsvoss

 

I’m REALLY tired of the “it’s the computer program” excuse for inexcusable behaviour. Behind every computer algorithm, a human being is sitting there programming. Use your “real” brains, you idiots, and join the real world. There are no excuses for this. None. Period. – jen

 

Not good enough, I’m afraid. The same company are still selling a t-shirt that says ‘Keep calm and hit her’.
No computer generated that. Why, for example, doesn’t it say ‘hit him’?
Because someone ran an eye over it to ensure it was sufficiently ‘funny’ I would say.
If they were genuinely horrified by what their algorithm produced that t-shirt would be gone too. Seems to me they’re just a bunch of sad gits. – Ita Ryan

 

 

Ref: Algorithmic Rape Jokes in the Library of Babel – QuietBabylon (via algopop)