The New Rules of Robot/Human Society

 

Key sentences:

– It’s important to think about who we are as humans and how we develop as a society.
– In ancient times we used to believe that being moral meant transcending all your emotional responses and coming up with the perfect analysis. But in fact much more is at play: our ability to read the emotions of others, our eye consciousness, our understanding of habits and rituals and the meaning of gestures. It’s not clear how we get that kind of understanding or appreciation into these systems.
– They may be able to win the game of Jeopardy, and the danger is that this will make us liable to attribute levels or kinds of intelligence to them that they do not have. It may lead to situations in which we become increasingly reliant on them to manage tools that they won’t really know how to manage when that idiosyncratic, truly dangerous situation arises.
– It becomes much easier to distance yourself from responsibility, and I think in the case of autonomous systems that is a really big question. If these things accidentally kill people, or commit something that would be considered a war crime had a human done it, but it is now being done by a machine, is that just a technological failure or a war crime?

Conscience of a Machine

Of course, there is a sense in which autonomous machines of this sort are not really ethical agents. To speak of their needing a conscience strikes me as a metaphorical use of language. The operation of their “conscience” or “ethical system” will not really resemble what has counted as moral reasoning or even moral intuition among human beings. They will do as they are programmed to do. The question is, What will they be programmed to do in such circumstances? What ethical system will animate the programming decisions? Will driverless cars be Kantians, obeying one rule invariably; or will they be Benthamites, calculating the greatest good for the greatest number?
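
To make that contrast concrete, here is a minimal sketch; the scenario, the harm counts, and the single "rule" are invented purely for illustration and are not taken from any real driving system.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action a driverless car could take and its projected harms."""
    action: str
    occupants_harmed: int
    pedestrians_harmed: int
    breaks_rule: bool  # e.g. violates an invariable rule such as "never swerve off the road"

def kantian_choice(outcomes):
    """Rule-based: pick an action that violates no rule, regardless of the totals."""
    permitted = [o for o in outcomes if not o.breaks_rule]
    return permitted[0] if permitted else None  # if every option breaks the rule, no permitted answer

def benthamite_choice(outcomes):
    """Utilitarian: minimize total harm, whatever rule that requires breaking."""
    return min(outcomes, key=lambda o: o.occupants_harmed + o.pedestrians_harmed)

options = [
    Outcome("stay in lane", occupants_harmed=0, pedestrians_harmed=3, breaks_rule=False),
    Outcome("swerve into barrier", occupants_harmed=1, pedestrians_harmed=0, breaks_rule=True),
]

print(kantian_choice(options).action)     # stay in lane
print(benthamite_choice(options).action)  # swerve into barrier
```

The same hypothetical situation yields different "right" answers depending on which ethical system animates the programming, which is exactly the question the paragraph above raises.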

 […]

Such machines seem to enter into the world of morally consequential action that until now has been occupied exclusively by human beings, but they do so without a capacity to be burdened by the weight of the tragic, to be troubled by guilt, or to be held to account in any sort of meaningful and satisfying way. They will, in other words, lose no sleep over their decisions, whatever those may be.

We have an unfortunate tendency to adapt, under the spell of metaphor, our understanding of human experience to the characteristics of our machines. Take memory, for example. Having first decided, by analogy, to call a computer’s capacity to store information “memory,” we then reversed the direction of the metaphor and came to understand human memory by analogy to computer “memory,” i.e., as mere storage. So now we casually talk of offloading the work of memory, or of Google being a better substitute for human memory, without any thought for how human memory is related to perception, understanding, creativity, identity, and more.

I can too easily imagine a similar scenario wherein we get into the habit of calling the algorithms by which machines are programmed to make ethically significant decisions the machine’s “conscience,” and then turn around, reverse the direction of the metaphor, and come to understand human conscience by analogy to what the machine does. This would result in an impoverishment of the moral life.

Will we then begin to think of the tragic sense, guilt, pity, and the necessity of wrestling with moral decisions as bugs in our “ethical systems”? Will we envy the literally ruth-less efficiency of “moral machines”? Will we prefer the Huxleyan comfort of a diminished moral sense, or will we claim the right to be unhappy, to be troubled by fully realized human conscience?

This is, of course, not merely a matter of making the “right” decisions. Part of what makes programming “ethical systems” troublesome is precisely our inability to arrive at a consensus about what the right decision is in such cases. But even if we could arrive at some sort of consensus, the risk I’m envisioning would remain. The moral weightiness of human existence does not reside solely in the moment of decision; it extends beyond that moment to a life burdened by the consequences of the action. It is precisely this “living with” our decisions that a machine conscience cannot know.

De Arte Combinatoria

The Dissertatio de arte combinatoria is an early work by Gottfried Leibniz, published in 1666 in Leipzig. It is an extended version of his doctoral dissertation, written before the author had seriously undertaken the study of mathematics. The booklet was reissued without Leibniz’s consent in 1690, which prompted him to publish a brief explanatory notice in the Acta Eruditorum. During the following years he repeatedly expressed regret about its being circulated, as he considered it immature. Nevertheless, it was a very original work, and it gave the author his first glimpse of fame among the scholars of his time.

The main idea behind the text is that of an alphabet of human thought, which is attributed to Descartes. All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters. All truths may be expressed as appropriate combinations of concepts, which can in turn be decomposed into simple ideas, rendering the analysis much easier. This alphabet would therefore provide a logic of invention, as opposed to the logic of demonstration known until then. Since all sentences are composed of a subject and a predicate, one might then seek all the predicates appropriate to a given subject, or all the subjects appropriate to a given predicate.
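
A toy sketch of this combinatorial picture, using an invented set of "simple concepts" standing in for Leibniz's primitives, shows how quickly a small alphabet generates candidate complex concepts:

```python
from itertools import combinations

# An invented "alphabet" of simple concepts (placeholders, not Leibniz's actual primitives).
primitives = ["extended", "thinking", "living", "moving"]

# Every complex concept is some combination of primitives,
# just as every word is a combination of letters.
complex_concepts = [
    frozenset(c)
    for size in range(1, len(primitives) + 1)
    for c in combinations(primitives, size)
]

print(len(complex_concepts))  # 2**4 - 1 = 15 candidate "concepts" from only 4 primitives

# A logic of invention enumerates new combinations,
# rather than merely demonstrating truths about concepts already named.
print(frozenset(["extended", "moving"]) in complex_concepts)  # True
```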

The first examples of the use of the ars combinatoria are taken from law, the musical register of an organ, and the Aristotelian theory of the generation of the elements from the four primary qualities. But the philosophical applications are of greater importance. He cites Hobbes’s idea that all reasoning is just a computation.

Watson to Battle Brain Cancer

IBM will team up with the New York Genome Center to see whether Watson can use glioblastoma patients’ genetic information to prescribe custom-tailored treatments. This “personalized” approach to medicine has been hailed as the next big step in healthcare, but making sense of genetic data – winnowing useful information from impertinent nucleic chaff – can be an overwhelming task. That’s where Watson comes in.

[New York Genome Center CEO Robert Darnell] said that the project would start with 20 to 25 patients… Samples from those patients (including both healthy and cancerous tissue) would be subjected to extensive DNA sequencing, including both the genome and the RNA transcribed from it. “What comes out is an absolute gusher of information,” he said.

It should theoretically be possible to analyze that data and use it to customize a treatment that targets the specific mutations present in tumor cells. But right now, doing so requires a squad of highly trained geneticists, genomics experts, and clinicians. It’s a situation that Darnell said simply can’t scale to handle the patients with glioblastoma, much less other cancers.
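
The basic triage being described could be sketched roughly as follows; the gene names, variants, and "therapies" below are placeholders invented for illustration, not clinical facts or anything IBM has published about Watson's internals.

```python
# Compare variants called in tumor vs. healthy tissue, keep the tumor-specific ones,
# and look them up in a (hypothetical) table of targetable mutations.

tumor_variants = {"GENE_A:p.V600E", "GENE_B:p.L858R", "GENE_D:p.Q61K"}
normal_variants = {"GENE_B:p.L858R"}  # also present in healthy tissue, so likely germline

actionable = {
    "GENE_A:p.V600E": "inhibitor_X",  # placeholder drug names, not real recommendations
    "GENE_C:p.R132H": "inhibitor_Y",
}

somatic = tumor_variants - normal_variants  # mutations unique to the tumor

for variant in sorted(somatic):
    therapy = actionable.get(variant)
    if therapy:
        print(f"{variant}: candidate targeted therapy -> {therapy}")
    else:
        print(f"{variant}: no known targetable match")
```

Doing this for real means calling variants across a whole genome and transcriptome and weighing conflicting evidence, which is why the article says it currently takes a squad of geneticists, genomics experts, and clinicians.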

Instead, that gusher of information is going to be pointed at Watson. John Kelly of IBM Research stepped up to describe Watson as a “cognitive system,” one that “mimics the capabilities of the human mind—some, but not all [capabilities].” The capabilities it does have include ingesting large volumes of information, identifying the information that’s relevant, and then learning from the results of its use. Kelly was extremely optimistic that Watson could bring new insights to cancer care. “We will have an impact on cancer and these other horrific diseases,” he told the audience. “It’s not a matter of if, it’s a matter of when—and the when is going to be very soon.”

Ref: IBM Will Use Watson To Battle Brain Cancer – io9

Role of Killer Robots

According to Heyns’s 2013 report, South Korea operates “surveillance and security guard robots” in the demilitarized zone that buffers it from North Korea. Although there is an automatic mode available on the Samsung machines, soldiers control them remotely.

The U.S. and Germany possess robots that automatically target and destroy incoming mortar fire. They can also likely locate the source of the mortar fire, according to Noel Sharkey, a University of Sheffield roboticist who is active in the “Stop Killer Robots” campaign.

And of course there are drones. While many get their orders directly from a human operator, unmanned aircraft operated by Israel, the U.K. and the U.S. are capable of tracking and firing on aircraft and missiles. On some of its Navy cruisers, the U.S. also operates Phalanx, a stationary system that can track and engage anti-ship missiles and aircraft.

The Army is testing a gun-mounted ground vehicle, MAARS, that can fire on targets autonomously. One tiny drone, the Raven, is primarily a surveillance vehicle, but among its capabilities is “target acquisition.”

No one knows for sure what other technologies may be in development.

“Transparency when it comes to any kind of weapons system is generally very low, so it’s hard to know what governments really possess,” Michael Spies, a political affairs officer in the U.N.’s Office for Disarmament Affairs, told Singularity Hub.

At least publicly, the world’s military powers seem now to agree that robots should not be permitted to kill autonomously. That is among the criteria laid out in a November 2012 U.S. military directive that guides the development of autonomous weapons. The European Parliament recently established a non-binding ban for member states on using or developing robots that can kill without human participation.

Yet even robots not specifically designed to make kill decisions could do so if they malfunctioned, or if their user experience made it easier to accept automated targeting than to reject it.

“The technology’s not fit for purpose as it stands, but as a computer scientist there are other things that bother me. I mean, how reliable is a computer system?” Sharkey, of Stop Killer Robots, said.

Sharkey noted that warrior robots would do battle with other warrior robots equipped with algorithms designed by an enemy army.

“If you have two competing algorithms and you don’t know the contents of the other person’s algorithm, you don’t know the outcome. Anything could happen,” he said.

For instance, when two sellers recently unknowingly competed for business on Amazon, the interactions of their two algorithms resulted in prices in the millions of dollars. Competing robot armies could destroy cities as their algorithms exponentially escalated, Sharkey said.
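
A toy simulation makes the point. The multipliers below are roughly those reported in that Amazon incident (one seller pricing just under the competitor, the other at a fixed markup over it); the starting prices and daily repricing cadence are assumptions for illustration.

```python
price_a, price_b = 40.0, 45.0  # arbitrary starting prices

for day in range(1, 51):
    price_a = 0.9983 * price_b     # seller A: price just under the competitor
    price_b = 1.270589 * price_a   # seller B: fixed markup over the competitor
    if day % 10 == 0:
        print(f"day {day}: A = ${price_a:,.2f}  B = ${price_b:,.2f}")

# Each full cycle multiplies prices by about 1.27; by day 50 both listings
# are in the millions of dollars. Neither rule is unreasonable in isolation,
# but their interaction is runaway feedback that neither party anticipated.
```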

An even likelier outcome would be that human enemies would target the weaknesses of the robots’ algorithms to produce undesirable outcomes. For instance, say a machine designed to destroy incoming mortar fire, such as the U.S.’s C-RAM or Germany’s MANTIS, is also tasked with destroying the launcher. A terrorist group could place a launcher in a crowded urban area, where its neutralization would cause civilian casualties.

 

Ref: CONTROVERSY BREWS OVER ROLE OF ‘KILLER ROBOTS’ IN THEATER OF WAR – SingularityHub