Category Archives: T – ethics

Ethics of Sex Robots

We come to confusing areas when we start thinking about sentience and the ability to feel and experience emotions.

The robot’s form is what remains disconcerting, at least to me. Unlike a small, bloodless squid stuffed into a plastic holder, this sex object actually resembles a whole human being, and that fake human moves on its own. Worse still are the ideas raised by popular science fiction regarding sentience – but for now, such concerns about artificial intelligence are far off (or perhaps impossible).

The idea that we can program something to “always consent” or “never refuse” is further discomfiting to me. But we must ask: how is that different from turning on an iPad? How is it different from the letters appearing on screen as I press these keys? Do we say the iPad or its software is programmed to consent to my button-pushing, swiping, and clicking? No: we simply assume a causal connection of “push button – get result”.

That’s the nature of tools. We don’t wonder about a hammer’s feelings as it drives a nail, so why should we worry about a robot’s? The fact that the robot has a human form doesn’t make it any less of a tool; it simply has no capacity for feelings.

 

Ref: Robots and sex: creepy or cool? – The Guardian

The New Rules of Robot/Human Society

 

Key sentences:

– It’s important to think about who we are as humans and how we develop as a society.
– In ancient times we believed that being moral meant transcending all your emotional responses and arriving at the perfect analysis. But in fact morality draws on much more: our ability to read the emotions of others, eye contact, our understanding of habits and rituals and the meaning of gestures. It’s not clear how we get that kind of understanding or appreciation into these systems.
– They may be able to win at Jeopardy, and the danger is that this makes us liable to attribute levels or kinds of intelligence to them that they do not have. It may lead to situations in which we become increasingly reliant on them to manage tools that they won’t really know how to manage when an idiosyncratic, truly dangerous situation arises.
– It becomes much easier to distance yourself from responsibility, and in the case of autonomous systems that is a really big question: if these things accidentally kill people, or do something that would be considered a war crime if a human did it, is that now merely a technological failure, or still a war crime?

Conscience of a Machine

Of course, there is a sense in which autonomous machines of this sort are not really ethical agents. To speak of their needing a conscience strikes me as a metaphorical use of language. The operation of their “conscience” or “ethical system” will not really resemble what has counted as moral reasoning or even moral intuition among human beings. They will do as they are programmed to do. The question is, What will they be programmed to do in such circumstances? What ethical system will animate the programming decisions? Will driverless cars be Kantians, obeying one rule invariably; or will they be Benthamites, calculating the greatest good for the greatest number?

 […]

Such a machine seems to enter into the world of morally consequential action that until now has been occupied exclusively by human beings, but they do so without a capacity to be burdened by the weight of the tragic, to be troubled by guilt, or to be held to account in any sort of meaningful and satisfying way. They will, in other words, lose no sleep over their decisions, whatever those may be.

We have an unfortunate tendency to adapt, under the spell of metaphor, our understanding of human experience to the characteristics of our machines. Take memory for example. Having first decided, by analogy, to call a computer’s capacity to store information “memory,” we then reversed the direction of the metaphor and came to understand human memory by analogy to computer “memory,” i.e., as mere storage. So now we casually talk of offloading the work of memory or of Google being a better substitute for human memory without any thought for how human memory is related to perception, understanding, creativity, identity, and more.

I can too easily imagine a similar scenario wherein we get into the habit of calling the algorithms by which machines are programmed to make ethically significant decisions the machine’s “conscience,” and then turn around, reverse the direction of the metaphor, and come to understand human conscience by analogy to what the machine does. This would result in an impoverishment of the moral life.

Will we then begin to think of the tragic sense, guilt, pity, and the necessity of wrestling with moral decisions as bugs in our “ethical systems”? Will we envy the literally ruth-less efficiency of “moral machines”? Will we prefer the Huxleyan comfort of a diminished moral sense, or will we claim the right to be unhappy, to be troubled by fully realized human conscience?

This is, of course, not merely a matter of making the “right” decisions. Part of what makes programming “ethical systems” troublesome is precisely our inability to arrive at a consensus about what is the right decision in such cases. But even if we could arrive at some sort of consensus, the risk I’m envisioning would remain. The moral weightiness of human existence does not reside solely in the moment of decision; it extends beyond the moment to a life burdened by the consequences of that action. It is precisely this “living with” our decisions that a machine conscience cannot know.

Role of Killer Robots

According to Heyns’s 2013 report, South Korea operates “surveillance and security guard robots” in the demilitarized zone that buffers it from North Korea. Although there is an automatic mode available on the Samsung machines, soldiers control them remotely.

The U.S. and Germany possess robots that automatically target and destroy incoming mortar fire. They can also likely locate the source of the mortar fire, according to Noel Sharkey, a University of Sheffield roboticist who is active in the “Stop Killer Robots” campaign.

And of course there are drones. While many get their orders directly from a human operator, unmanned aircraft operated by Israel, the U.K. and the U.S. are capable of tracking and firing on aircraft and missiles. On some of its Navy cruisers, the U.S. also operates Phalanx, a stationary system that can track and engage anti-ship missiles and aircraft.

The Army is testing a gun-mounted ground vehicle, MAARS, that can fire on targets autonomously. One tiny drone, the Raven, is primarily a surveillance vehicle, but among its capabilities is “target acquisition.”

No one knows for sure what other technologies may be in development.

“Transparency when it comes to any kind of weapons system is generally very low, so it’s hard to know what governments really possess,” Michael Spies, a political affairs officer in the U.N.’s Office for Disarmament Affairs, told Singularity Hub.

At least publicly, the world’s military powers seem now to agree that robots should not be permitted to kill autonomously. That is among the criteria laid out in a November 2012 U.S. military directive that guides the development of autonomous weapons. The European Parliament recently established a non-binding ban for member states on using or developing robots that can kill without human participation.

Yet, even robots not specifically designed to make kill decisions could do so if they malfunctioned, or if their user experience made it easier to accept than reject automated targeting.

“The technology’s not fit for purpose as it stands, but as a computer scientist there are other things that bother me. I mean, how reliable is a computer system?” Sharkey, of Stop Killer Robots, said.

Sharkey noted that warrior robots would do battle with other warrior robots equipped with algorithms designed by an enemy army.

“If you have two competing algorithms and you don’t know the contents of the other person’s algorithm, you don’t know the outcome. Anything could happen,” he said.

For instance, when two sellers recently unknowingly competed for business on Amazon, the interactions of their two algorithms resulted in prices in the millions of dollars. Competing robot armies could destroy cities as their algorithms exponentially escalated, Sharkey said.
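To make the feedback loop Sharkey describes concrete, here is a minimal sketch, not drawn from the article, of two naive repricing rules reacting to each other. The specific multipliers and the sixty-day horizon are hypothetical, chosen only to show how such interactions can escalate without bound, as in the Amazon book-pricing incident.

```python
# Hypothetical sketch: two sellers each reprice relative to the other,
# and the combined feedback loop escalates the listing price.

def seller_a(competitor_price):
    # Rule A: always undercut the competitor slightly.
    return competitor_price * 0.998

def seller_b(competitor_price):
    # Rule B: always price a bit above the competitor ("premium" listing).
    return competitor_price * 1.27

price_a, price_b = 20.00, 20.00
for day in range(60):
    price_a = seller_a(price_b)   # A reacts to B's current price
    price_b = seller_b(price_a)   # B reacts to A's new price
    if day % 10 == 0:
        print(f"day {day:2d}: A = ${price_a:,.2f}  B = ${price_b:,.2f}")

# Each round multiplies the price by roughly 0.998 * 1.27, so within a
# couple of months the listing climbs into the millions of dollars.
```

Neither rule is malicious or broken on its own; the runaway outcome only appears once the two algorithms interact, which is exactly why Sharkey worries about armies of machines whose opposing algorithms nobody can inspect.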

An even likelier outcome would be that human enemies would target the weaknesses of the robots’ algorithms to produce undesirable outcomes. For instance, say a machine that’s designed to destroy incoming mortar fire, such as the U.S.’s C-RAM or Germany’s MANTIS, is also tasked with destroying the launcher. A terrorist group could place a launcher in a crowded urban area, where its neutralization would cause civilian casualties.

 

Ref: Controversy Brews Over Role of ‘Killer Robots’ in Theater of War – Singularity Hub

Google Hires Ethics Board

In an apparent move to feed its smart-hardware ambitions, Google has bought an artificial intelligence startup, DeepMind, for somewhere in the ballpark of $500 million. Considering all of the data Google sifts through, and the fact that it might be getting into robotics, it’s not completely absurd that they’d want some software to give a robotic helping hand. (Facebook apparently wanted the company, too, and they’ve already made moves to wrangle their own sprawling web of information.) But the other part of this story is a little stranger: the deal reportedly came under the condition that Google create an “ethics board” for the project.

Google has set up an ethics board to oversee its work in artificial intelligence. The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans. One of its founders warned that artificial intelligence is the ‘number 1 risk for this century,’ and believes it could play a part in human extinction.

The ethics board, revealed by web site The Information, is to ensure the projects are not abused.

‘Google has agreed to establish an ethics board to ensure the artificial intelligence technology isn’t abused, according to two people familiar with the deal,’ said The Information, which revealed the news. The DeepMind-Google ethics board is set to create a series of rules and restrictions over the use of the technology.

 

Ref: Google Buys AI Startup, Hires Ethics Board To Oversee It – PopSci
Ref: Google sets up artificial intelligence ethics board to curb the rise of the robots – Daily Mail

 

Christian College Buys NAO

 

Last week, a Christian college in Matthews, North Carolina unveiled something unprecedented: a humanoid robot whose sole mission is to explore the ethical and theological ramifications of robotics.

“When the time comes for including or incorporating humanoid robots into society, the prospect of a knee-jerk kind of reaction from the religious community is fairly likely, unless there’s some dialogue that starts happening, and we start examining the issue more closely,” says Kevin Staley, an associate professor of theology at SES. Staley pushed for the purchase of the bot, and plans to use it for courses at the college, as well as in presentations around the country. The specific reaction Staley is worried about is a more extreme version of the standard, secular creep factor associated with many robots.

That’s oversimplifying Staley’s plans for his NAO, though not by much. Despite his desire to steer both religious and secular communities away from an assumption of evil among humanoid bots, his current stance is one of extreme caution. “I think it would be a mistake to just, carte-blanche, say it’s like any other tech, and adopt it and deal with the consequences as they happen,” says Staley. The theological danger, he believes, is in substituting robots for people in social and emotional interactions—a more spiritual variation on concerns about offloading eldercare to robots, or developing machines that can act as friends or even lovers. “Ultimately, the end and purpose of human beings is to be in a restored, full and intended, right relationship with God,” says Staley. Engaging too closely with bots might be worse than simply wasting time and energy on an unfeeling machine. He believes it could weaken humanity’s connection with one another, and, by association, God.

 

Ref: Apocalypse NAO: Are Robots Threatening Your Immortal Soul? – PopSci
Ref: Seminary buys robot to study the ethics of technology – RNS

 

Ethical Autonomous Vehicles

 

Many car manufacturers project that by 2025 most cars will operate on driverless systems. While it is reasonable to think that our roads will become safer as autonomous vehicles replace traditional cars, the unpredictability of real-life situations involving the complexities of moral and ethical reasoning complicates this assumption.

How can such systems be designed to accommodate the complexity of ethical and moral reasoning? Just like choosing the color of a car, ethics could become a commodified feature of autonomous vehicles, one that you buy, change, and repurchase depending on personal taste.

Three distinct algorithms have been created – each adhering to a specific ethical principle/behaviour set-up – and embedded into driverless virtual cars that are operating in a simulated environment, where they will be confronted with ethical dilemmas.
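The project does not publish its algorithms here, so the following is only a minimal sketch of the underlying idea: the ethical “setting” becomes a pluggable scoring function that the simulated car consults when every available maneuver causes some harm. All profile names, fields, and weightings below are hypothetical, and the utilitarian/duty-based contrast loosely mirrors the Benthamite/Kantian distinction raised earlier.

```python
# Hypothetical sketch: ethics as a swappable "profile" in a simulated
# driverless car. Each profile ranks the same set of bad outcomes differently.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Outcome:
    maneuver: str
    occupants_harmed: int     # expected harm to people inside the car
    bystanders_harmed: int    # expected harm to people outside the car
    breaks_traffic_law: bool

def utilitarian(o: Outcome) -> float:
    # Benthamite-style profile: minimize total expected harm,
    # regardless of who is harmed or which rules are broken.
    return o.occupants_harmed + o.bystanders_harmed

def duty_based(o: Outcome) -> float:
    # Rule-first profile: heavily penalize breaking the law,
    # then minimize harm among what remains.
    return (1000 if o.breaks_traffic_law else 0) + o.occupants_harmed + o.bystanders_harmed

def self_protective(o: Outcome) -> float:
    # A "commodified" profile a buyer might choose: weight the
    # occupants' safety far above everyone else's.
    return 10 * o.occupants_harmed + o.bystanders_harmed

def choose(outcomes: List[Outcome], ethics_profile: Callable[[Outcome], float]) -> Outcome:
    # The car picks the maneuver with the lowest score under the active profile.
    return min(outcomes, key=ethics_profile)

dilemma = [
    Outcome("swerve onto sidewalk", occupants_harmed=0, bystanders_harmed=2, breaks_traffic_law=True),
    Outcome("brake in lane",        occupants_harmed=1, bystanders_harmed=0, breaks_traffic_law=False),
]

for profile in (utilitarian, duty_based, self_protective):
    print(profile.__name__, "->", choose(dilemma, profile).maneuver)
```

The point of the sketch is that nothing in the car’s perception or control has to change for its “ethics” to change; swapping one small function, exactly like choosing a trim option, produces a different decision in the same dilemma.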

 

Ref: Ethical Autonomous Vehicles

Algorithms <-> Taylorism

By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.

More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”

Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”

 

Ref: Is Google Making Us Stupid? – The Atlantic

Automation Can Take a Toll on Human Performance

Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes. What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.

And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes, leading to what Jan Noyes, an ergonomics expert at Britain’s University of Bristol, terms “a de-skilling of the crew.” No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and a leading authority on automation. When an autopilot system fails, too many pilots, thrust abruptly into what has become a rare role, make mistakes.

The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.

[…]

A hundred years ago, the British mathematician and philosopher Alfred North Whitehead wrote, “Civilization advances by extending the number of important operations which we can perform without thinking about them.” It’s hard to imagine a more confident expression of faith in automation. Implicit in Whitehead’s words is a belief in a hierarchy of human activities: Every time we off-load a job to a tool or a machine, we free ourselves to climb to a higher pursuit, one requiring greater dexterity, deeper intelligence, or a broader perspective. We may lose something with each upward step, but what we gain is, in the long run, far greater.

History provides plenty of evidence to support Whitehead. We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead. But Whitehead’s observation should not be mistaken for a universal truth. He was writing when automation tended to be limited to distinct, well-defined, and repetitive tasks—weaving fabric with a steam loom, adding numbers with a mechanical calculator. Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans. That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus. We trade subtle, specialized talents for more routine, less distinctive ones.

Most of us want to believe that automation frees us to spend our time on higher pursuits but doesn’t otherwise alter the way we behave or think. That view is a fallacy—an expression of what scholars of automation call the “substitution myth.” A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part. As Parasuraman and a colleague explained in a 2010 journal article, “Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers of automation.”

Psychologists have found that when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift. We become disengaged from our work, and our awareness of what’s going on around us fades. Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears. When a computer provides incorrect or insufficient data, we remain oblivious to the error.

Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer. Many radiologists today use analytical software to highlight suspicious areas on mammograms. Usually, the highlights aid in the discovery of disease. But they can also have the opposite effect. Biased by the software’s suggestions, radiologists may give cursory attention to the areas of an image that haven’t been highlighted, sometimes overlooking an early-stage tumor. Most of us have experienced complacency when at a computer. In using e-mail or word-processing software, we become less proficient proofreaders when we know that a spell-checker is at work.

[…]

Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation? The technology theorist Kevin Kelly, commenting on the link between automation and pilot error, argued that the obvious solution is to develop an entirely autonomous autopilot: “Human pilots should not be flying planes in the long run.” The Silicon Valley venture capitalist Vinod Khosla recently suggested that health care will be much improved when medical software—which he has dubbed “Doctor Algorithm”—evolves from assisting primary-care physicians in making diagnoses to replacing the doctors entirely. The cure for imperfect automation is total automation.

 

Ref: All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines – The Atlantic

Ethical Implications of Engineers’ Work

The algorithms that extract highly specific information from an otherwise impenetrable amount of data were conceived and built by flesh and blood: engineers with highly sophisticated technical knowledge. Did they know the use to which their algorithms would be put? If not, should they have been mindful of the potential for misuse? Either way, should they be held partly responsible, or were they just “doing their job”?

[…]

Our ethics have become mostly technical: how to design properly, how to not cut corners, how to serve our clients well. We work hard to prevent failure of the systems we build, but only in relation to what these systems are meant to do, rather than the way they might actually be utilised, or whether they should have been built at all. We are not amoral, far from it; it’s just that we have steered ourselves into a place where our morality has a smaller scope.

Engineers have, in many ways, built the modern world and helped improve the lives of many. Of this, we are rightfully proud. What’s more, only a very small minority of engineers is in the business of making weapons or privacy-invading algorithms. However, we are part and parcel of industrial modernity with all its might, advantages and flaws, and we therefore contribute to human suffering as well as flourishing.

 

Ref: As engineers, we must consider the ethical implications of our work – The Guardian