Category Archives: W – future

Intelligent Robots Can Behave More Ethically in the Battlefield Than Humans

 

“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald C. Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the Army. “That’s the case I make.”

[…]

In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, without a tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.

His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.

[…]

“It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield,” Dr. Arkin wrote in his report, “but I am convinced that they can perform more ethically than human soldiers are capable of.”

[…]

Daniel C. Dennett, a philosopher and cognitive scientist at Tufts University, agrees. “If we talk about training a robot to make distinctions that track moral relevance, that’s not beyond the pale at all,” he said. But, he added, letting machines make ethical judgments is “a moral issue that people should think about.”

 

Ref: A Soldier, Taking Orders From Its Ethical Judgment Center – The New York Times
Ref: MissionLab

Driverless Cars Are Further Away Than You Think

 

Several computers inside the car’s trunk perform split-second measurements and calculations, processing data pouring in from the sensors. Software assigns a value to each lane of the road based on the car’s speed and the behavior of nearby vehicles. Using a probabilistic technique that helps cancel out inaccuracies in sensor readings, this software decides whether to switch to another lane, to attempt to pass the car ahead, or to get out of the way of a vehicle approaching from behind. Commands are relayed to a separate computer that controls acceleration, braking, and steering. Yet another computer system monitors the behavior of everything involved with autonomous driving for signs of malfunction.
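The article doesn’t disclose BMW’s actual algorithms, but the two ingredients it names (a probabilistic filter that damps sensor noise, and a per-lane value assignment) can be sketched in miniature. Everything below is a hypothetical illustration: the function names, weights, and numbers are invented, and the filter shown is a one-dimensional Kalman update, one common choice for canceling out inaccuracies in sensor readings.

```python
# Minimal sketch, not BMW's code: fuse noisy gap measurements with a 1-D
# Kalman update, then score each lane from the filtered estimates.

def kalman_update(estimate, variance, measurement, sensor_variance):
    """Blend a noisy measurement into the running estimate,
    weighted by relative uncertainty."""
    gain = variance / (variance + sensor_variance)
    return (estimate + gain * (measurement - estimate),
            (1.0 - gain) * variance)

def score_lane(own_speed, lead_gap_m, lead_speed, rear_gap_m, rear_speed):
    """Toy lane value: reward time and space ahead, penalize a faster
    vehicle closing in from behind. Higher is better."""
    time_to_lead = lead_gap_m / max(own_speed - lead_speed, 0.1)
    time_from_rear = rear_gap_m / max(rear_speed - own_speed, 0.1)
    return min(time_to_lead, 10.0) + min(time_from_rear, 10.0)

# Filter one radar reading of the gap to the car ahead (meters).
gap_est, gap_var = kalman_update(25.0, 4.0, measurement=26.2, sensor_variance=2.0)

# Current lane has a slow lead car; the left lane is clear.
keep = score_lane(30.0, gap_est, 25.0, rear_gap_m=60.0, rear_speed=30.0)
left = score_lane(30.0, 120.0, 32.0, rear_gap_m=40.0, rear_speed=33.0)
print("change left" if left > keep + 0.5 else "keep lane")
```

In a real system, a decision like this would then be relayed to the separate computer the article mentions, which turns it into acceleration, braking, and steering commands.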

[…]

For one thing, many of the sensors and computers found in BMW’s car, and in other prototypes, are too expensive to be deployed widely. And achieving even more complete automation will probably mean using more advanced, more expensive sensors and computers. The spinning laser instrument, or LIDAR, seen on the roof of Google’s cars, for instance, provides the best 3-D image of the surrounding world, accurate down to two centimeters, but sells for around $80,000. Such instruments will also need to be miniaturized and redesigned, adding more cost, since few car designers would slap the existing ones on top of a sleek new model.

Cost will be just one factor, though. While several U.S. states have passed laws permitting autonomous cars to be tested on their roads, the National Highway Traffic Safety Administration has yet to devise regulations for testing and certifying the safety and reliability of autonomous features. Two major international treaties, the Vienna Convention on Road Traffic and the Geneva Convention on Road Traffic, may need to be changed for the cars to be used in Europe and the United States, as both documents state that a driver must be in full control of a vehicle at all times.

Most daunting, however, are the remaining computer science and artificial-intelligence challenges. Automated driving will at first be limited to relatively simple situations, mainly highway driving, because the technology still can’t respond to uncertainties posed by oncoming traffic, rotaries, and pedestrians. And drivers will almost certainly be expected to assume some sort of supervisory role, requiring them to be ready to retake control as soon as the system gets outside its comfort zone.

[…]

The relationship between human and robot driver could be surprisingly fraught. The problem, as I discovered during my BMW test drive, is that it’s all too easy to lose focus, and difficult to get it back. The difficulty of reëngaging distracted drivers is an issue that Bryan Reimer, a research scientist in MIT’s Age Lab, has well documented (see “Proceed with Caution toward the Self-Driving Car,” May/June 2013). Perhaps the “most inhibiting factors” in the development of driverless cars, he suggests, “will be factors related to the human experience.”

In an effort to address this issue, carmakers are thinking about ways to prevent drivers from becoming too distracted, and ways to bring them back to the driving task as smoothly as possible. This may mean monitoring drivers’ attention and alerting them if they’re becoming too disengaged. “The first generations [of autonomous cars] are going to require a driver to intervene at certain points,” Clifford Nass, codirector of Stanford University’s Center for Automotive Research, told me. “It turns out that may be the most dangerous moment for autonomous vehicles. We may have this terrible irony that when the car is driving autonomously it is much safer, but because of the inability of humans to get back in the loop it may ultimately be less safe.”

An important challenge with a system that drives all by itself, but only some of the time, is that it must be able to predict when it may be about to fail, to give the driver enough time to take over. This ability is limited by the range of a car’s sensors and by the inherent difficulty of predicting the outcome of a complex situation. “Maybe the driver is completely distracted,” Werner Huber said. “He takes five, six, seven seconds to come back to the driving task—that means the car has to know [in advance] when its limitation is reached. The challenge is very big.”
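Huber’s five-to-seven-second figure implies a concrete look-ahead requirement, which a back-of-envelope calculation makes vivid. The speed and sensor-range figures below are illustrative assumptions, not numbers from the article; only the takeover time comes from Huber.

```python
# Rough arithmetic only: how far ahead must the car anticipate its own
# limits to hand over safely?

takeover_time_s = 7.0        # distracted driver, Huber's upper bound
speed_kmh = 130.0            # assumed highway cruising speed
speed_ms = speed_kmh / 3.6   # ~36 m/s

warning_distance_m = speed_ms * takeover_time_s
print(f"Must detect trouble ~{warning_distance_m:.0f} m ahead")  # ~253 m

# Many automotive sensors see on the order of 100-250 m, so at highway
# speed the required warning horizon sits at or beyond the sensor's edge.
```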

Before traveling to Germany, I visited John Leonard, an MIT professor who works on robot navigation, to find out more about the limits of vehicle automation. Leonard led one of the teams involved in the DARPA Urban Challenge, an event in 2007 that saw autonomous vehicles race across mocked-up city streets, complete with stop-sign intersections and moving traffic. The challenge inspired new research and new interest in autonomous driving, but Leonard is restrained in his enthusiasm for the commercial trajectory that autonomous driving has taken since then. “Some of these fundamental questions, about representing the world and being able to predict what might happen—we might still be decades behind humans with our machine technology,” he told me. “There are major, unsolved, difficult issues here. We have to be careful that we don’t overhype how well it works.”

Leonard suggested that much of the technology that has helped autonomous cars deal with complex urban environments in research projects—some of which is used in Google’s cars today—may never be cheap or compact enough to be employed in commercially available vehicles. This includes not just the LIDAR but also an inertial navigation system, which provides precise positioning information by monitoring the vehicle’s own movement and combining the resulting data with differential GPS and a highly accurate digital map. What’s more, poor weather can significantly degrade the reliability of sensors, Leonard said, and it may not always be feasible to rely heavily on a digital map, as so many prototype systems do. “If the system relies on a very accurate prior map, then it has to be robust to the situation of that map being wrong, and the work of keeping those maps up to date shouldn’t be underestimated,” he said.
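To make the inertial-plus-GPS idea concrete, here is a deliberately simplified sketch. Production systems use full Kalman or particle filters plus map matching; the toy complementary filter below, and all of its numbers, are assumptions for illustration only.

```python
# Toy fusion sketch (not any vendor's algorithm): dead-reckon position by
# integrating velocity (the inertial step), then nudge the estimate toward
# each differential-GPS fix as it arrives.

def fuse_position(pos, velocity, dt, gps_fix=None, gps_trust=0.3):
    """One update step: inertial propagation plus optional GPS correction."""
    x = pos[0] + velocity[0] * dt
    y = pos[1] + velocity[1] * dt
    if gps_fix is not None:              # GPS arrives less often than IMU data
        x += gps_trust * (gps_fix[0] - x)
        y += gps_trust * (gps_fix[1] - y)
    return (x, y)

pos = (0.0, 0.0)
for step in range(1, 11):
    # A slightly-off GPS fix every 5th step; dead reckoning in between.
    gps = (step * 1.02, 0.05) if step % 5 == 0 else None
    pos = fuse_position(pos, velocity=(1.0, 0.0), dt=1.0, gps_fix=gps)
print(f"fused position after 10 s: ({pos[0]:.2f}, {pos[1]:.2f})")
```

Dead reckoning alone drifts without bound; the periodic GPS correction bounds the error, and a prior map, when it can be trusted, tightens it further.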

 

Ref: Driverless Cars Are Further Away Than You Think – MIT Technology Review

Google Shopping Express

 

But the game goes deeper. As personal digital assistant apps such as Google Now become widespread, so does the idea of algorithms that can not only meet but anticipate our needs. Extend the concept from the purely digital into the realm of retail, and you have what some industry prognosticators are calling “ambient commerce.” In a sensor-rich future where not just phones but all kinds of objects are internet-connected, same-day delivery becomes just one component of a bigger instant gratification engine.

On the same day Google announced that its Shopping Express was available to all Bay Area residents, eBay Enterprise Marketing Solutions head of strategy John Sheldon was telling a roomful of clients that there will soon come a time when customers won’t be ordering stuff from eBay anymore. Instead, they’ll let their phones do it.

Sheldon believes the “internet of things” is creating a data-saturated environment that will soon envelop commerce. In a chat with WIRED, he describes a few hypothetical examples that sound like they’re already within reach. Imagine setting up a rule in Nike+, he says, to have the app order you a new pair of shoes after you run 300 miles. Or picture a bicycle helmet with a sensor that “knows” when a crash has happened, which prompts an app to order a new one.

Now consider an even more advanced scenario. A shirt has a sensor that detects moisture. And you find yourself stuck out in the rain without an umbrella. Not too many minutes after the downpour starts, a car pulls up alongside you. A courier steps out and hands you an umbrella — or possibly a rain jacket, depending on what rules you set up ahead of time for such a situation, perhaps using IFTTT.
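Sheldon’s examples all share one shape: a sensor event matches a pre-authorized rule, which triggers an order. A minimal, entirely hypothetical sketch of such a rule engine might look like this; none of these trigger names or APIs exist as written.

```python
# Hypothetical "ambient commerce" rule engine. Event types, thresholds,
# and the order-placing behavior are all invented for illustration.

RULES = [
    (lambda e: e["type"] == "run_odometer" and e["miles"] >= 300,
     "order replacement running shoes"),
    (lambda e: e["type"] == "helmet_impact",
     "order replacement helmet"),
    (lambda e: e["type"] == "shirt_moisture" and e["level"] > 0.8,
     "dispatch umbrella or rain jacket, per the user's preset preference"),
]

def on_sensor_event(event):
    """Match an incoming sensor event against every standing rule."""
    for matches, action in RULES:
        if matches(event):
            print(f"trigger: {action}")  # a real system would place the order

on_sensor_event({"type": "run_odometer", "miles": 301})
on_sensor_event({"type": "shirt_moisture", "level": 0.93})
```

Under rules like these, the purchase is approved once, when the rule is written, rather than at the moment of sale.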

“Ambient commerce is about consumers turning over their trust to the machine,” Sheldon says.

 

Ref: One Day, Google Will Deliver the Stuff You Want Before You Ask – Wired

The Ethics of Autonomous Cars – 2

If a small tree branch pokes out onto a highway and there’s no oncoming traffic, we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop, as it dutifully observes traffic laws that prohibit crossing a double-yellow line. This unexpected move would avoid bumping the object in front, but could cause a crash with the human drivers behind it.

Should we trust robotic cars to share our road, just because they are programmed to obey the law and avoid crashes?

[…]

Programmers will still need to instruct an automated car on how to act for the entire range of foreseeable scenarios, as well as lay down guiding principles for unforeseen scenarios. So programmers will need to confront this decision, even if we human drivers never have to in the real world. And it matters to the issue of responsibility and ethics whether an act was premeditated (as in the case of programming a robot car) or reflexive, without any deliberation (as may be the case with human drivers in sudden crashes).

Anyway, there are many examples of car accidents every day that involve difficult choices, and robot cars will encounter at least those. For instance, if an animal darts in front of our moving car, we need to decide: whether it would be prudent to brake; if so, how hard to brake; whether to continue straight or swerve to the left or right; and so on. These decisions are influenced by environmental conditions (e.g., a slippery road), obstacles on and off the road (e.g., other cars to the left and trees to the right), the size of an obstacle (e.g., hitting a cow diminishes your survivability, compared to hitting a raccoon), second-order effects (e.g., a crash with the car behind us if we brake too hard), lives at risk inside and outside the car (e.g., a baby passenger might mean the robot car should give greater weight to protecting its occupants), and so on.
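One common way to encode a weighing of factors like these is a cost function over candidate maneuvers. The sketch below is a toy illustration of that idea only; every weight and risk number is invented rather than taken from any real planner.

```python
# Toy maneuver-cost sketch, not a real planner: score each option by the
# factors the passage lists. All weights and risk numbers are invented.

def maneuver_cost(occupant_risk, outside_risk, obstacle_severity,
                  rear_collision_risk, road_grip):
    """Lower is better. road_grip in (0, 1] discounts options that are
    harder to execute reliably on a slippery road."""
    return (10.0 * occupant_risk          # lives at risk inside the car
            + 10.0 * outside_risk         # lives at risk outside the car
            + obstacle_severity           # cow vs. raccoon, per the example
            + 2.0 * rear_collision_risk   # second-order effect of hard braking
            ) / road_grip

options = {
    "brake_hard":  maneuver_cost(0.2, 0.0, 0.0, 0.6, road_grip=0.5),
    "swerve_left": maneuver_cost(0.4, 0.3, 0.0, 0.1, road_grip=0.5),  # cars to the left
    "continue":    maneuver_cost(0.1, 0.0, 3.0, 0.0, road_grip=0.5),  # hit the animal
}
print(min(options, key=options.get))      # -> "brake_hard" with these numbers
```

The ethical weight of the passage lies in choosing those coefficients: whoever sets the occupant-versus-outsider weights is making, in advance, the premeditated decision described above.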

[…]

In “robot ethics,” most of the attention so far has been focused on military drones. But cars are maybe the most iconic technology in America—forever changing cultural, economic, and political landscapes. They’ve made new forms of work possible and accelerated the pace of business, but they also waste our time in traffic. They rush countless patients to hospitals and deliver basic supplies to rural areas, but also continue to kill more than 30,000 people a year in the U.S. alone. They bring families closer together, but also farther away at the same time. They’re the reason we have suburbs, shopping malls, and fast-food restaurants, but also new environmental and social problems.

 

Ref: The Ethics of Autonomous Cars – The Atlantic

 

The Ethics of Autonomous Cars

 

[…]

That’s how this puzzle relates to the non-identity problem posed by Oxford philosopher Derek Parfit in 1984. Suppose we face a policy choice of either depleting some natural resource or conserving it. By depleting it, we might raise the quality of life for people who currently exist, but we would decrease the quality of life for future generations; they would no longer have access to the same resource.

Say that the best we could do is make robot cars reduce traffic fatalities by 1,000 lives. That’s still pretty good. But if they did so by saving all 32,000 would-be victims while causing 31,000 entirely new victims, we wouldn’t be so quick to accept this trade — even if there’s a net savings of lives.

[…]

With this new set of victims, however, are we violating their right not to be killed? Not necessarily. If we view the right not to be killed as the right not to be an accident victim, well, no one has that right to begin with. We’re surrounded by both good luck and bad luck: accidents happen. (Even deontological ethics, duty-based or Kantian, could see this shift in the victim class as morally permissible, given a non-violation of rights or duties, in addition to the consequentialist reasons based on numbers.)

[…]

Ethical dilemmas with robot cars aren’t just theoretical, and many new applied problems could arise: emergencies, abuse, theft, equipment failure, manual overrides, and many more that represent the spectrum of scenarios drivers currently face every day.

One of the most popular examples is the school-bus variant of the classic trolley problem in philosophy: On a narrow road, your robotic car detects an imminent head-on crash with a non-robotic vehicle — a school bus full of kids, or perhaps a carload of teenagers bent on playing “chicken” with you, knowing that your car is programmed to avoid crashes. Your car, naturally, swerves to avoid the crash, sending itself into a ditch or a tree and killing you in the process.

At least with the bus, this is probably the right thing to do: to sacrifice yourself to save 30 or so schoolchildren. The automated car was stuck in a no-win situation and chose the lesser evil; it couldn’t plot a better solution than a human could.

But consider this: Do we now need a peek under the algorithmic hood before we purchase or ride in a robot car? Should the car’s crash-avoidance feature, and possible exploitations of it, be something explicitly disclosed to owners and their passengers — or even signaled to nearby pedestrians? Shouldn’t informed consent be required to operate or ride in something that may purposely cause our own deaths?

It’s one thing when you, the driver, make a choice to sacrifice yourself. But it’s quite another for a machine to make that decision for you involuntarily.

 

Ref: The Ethics of Saving Lives With Autonomous Cars Are Far Murkier Than You Think – Wired
Ref: Ethics + Emerging Sciences Group

A New Machine Ecology Is Evolving

The problem, however, is that this new digital environment features agents that are not only making decisions faster than we can comprehend but are also making them in a way that defies traditional theories of finance. In other words, it has taken on the form of a machine ecology — one that includes virtual predators and prey.

Consequently, computer scientists are taking an ecological perspective by looking at the new environment in terms of a competitive population of adaptive trading agents.

“Even though each trading algorithm/robot is out to gain a profit at the expense of any other, and hence act as a predator, any algorithm which is trading has a market impact and hence can become noticeable to other algorithms,” said Neil Johnson, a professor of physics at the College of Arts and Sciences at the University of Miami (UM) and lead author of the new study. “So although they are all predators, some can then become the prey of other algorithms depending on the conditions. Just like animal predators can also fall prey to each other.”

When there’s a normal combination of prey and predators, he says, everything is in balance. But once predators are introduced that are too fast, they create extreme events.

“What we see with the new ultrafast computer algorithms is predatory trading,” he says. “In this case, the predator acts before the prey even knows it’s there.”

[…]

“It simply is faster than human predators (i.e. human traders) and the humans are inactive on that fast timescale,” says Johnson. “So the only active traders at subsecond timescales are all robots. So they compete against each other, and their collective actions define the movements in the market.”

In other words, they control the market movements. “Humans become inert and ineffective,” he says. “What we found, which is so surprising, is that the transition to the new ultrafast robotic ecology is so abrupt and strong.”
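The dynamic Johnson describes, where the fastest agents act on quotes that slower agents have not yet updated, can be caricatured in a few lines. This is a toy illustration only, not the model from the Nature paper; all reaction times and numbers are invented.

```python
# Toy caricature of sub-second "predation" (not Johnson's actual model):
# agents refresh their quotes at different reaction times, and a fast
# agent profits from whichever quote is most stale each millisecond.

import random
random.seed(1)

class Agent:
    def __init__(self, name, reaction_ms):
        self.name, self.reaction_ms = name, reaction_ms
        self.quote, self.last_update = 100.0, 0

price = 100.0
human, slow_algo = Agent("human", 1000), Agent("slow_algo", 50)
prey_pool = [human, slow_algo]

fast_edge = 0.0
for t in range(1, 2001):                        # two seconds, in milliseconds
    price += random.gauss(0.0, 0.02)            # "true" price drifts
    for a in prey_pool:
        if t - a.last_update >= a.reaction_ms:  # slow agents react late
            a.quote, a.last_update = price, t
    stalest = max(prey_pool, key=lambda a: abs(price - a.quote))
    fast_edge += abs(price - stalest.quote)     # fast agent trades the gap

print(f"fast agent's cumulative edge over 2 s: {fast_edge:.1f}")
print(f"human quote staleness at the end: {abs(price - human.quote):.3f}")
```

On these sub-second timescales the “human” agent never manages to update at all between trades, which is the sense in which humans become inert.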

 

Ref: A new digital ecology is evolving, and humans are being left behind – io9
Ref: Abrupt rise of new machine ecology beyond human response time – Nature

 

Corelet – New Programming Language for Cognitive Computing

 

Researchers from IBM are working on a new software front-end for their neuromorphic processor chips. The company is hoping to draw inspiration from its recent successes in “cognitive computing,” a line of R&D that’s best exemplified by Watson, the Jeopardy-playing AI. A new programming language will be necessary because, once IBM’s cognitive computers become a reality, conventional languages will be a poor fit for them: much of today’s programming still descends from FORTRAN, a language IBM developed in the 1950s for the sequential computers of that era.

The new software runs on a conventional supercomputer, but it simulates the functioning of a massive network of neurosynaptic cores. Each core contains its own network of 256 neurons, which function according to a new model in which digital neurons mimic the independent nature of biological neurons. Corelets, the equivalent of “programs,” specify the basic functioning of neurosynaptic cores and can be linked into more complex structures. Each corelet has 256 inputs and 256 outputs, which are used to connect corelets to one another.

“Traditional architecture is very sequential in nature, from memory to processor and back,” explained Dr. Dharmendra Modha in a recent Forbes article. “Our architecture is like a bunch of LEGO blocks with different features. Each corelet has a different function, then you compose them together.”

So, for example, a corelet can detect motion, recognize the shape of an object, or sort images by color. Each corelet would run slowly, but the processing would happen in parallel.
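The Corelet language itself isn’t shown in the article, but the LEGO-block composition model it describes can be sketched abstractly. The Python below is a hypothetical illustration of that model, not IBM’s API; the class, the method names, and the toy functions are invented.

```python
# Hypothetical sketch of corelet-style composition (not IBM's Corelet API):
# black boxes with 256 inputs/outputs, wired output-to-input like LEGO.

class Corelet:
    N = 256  # fixed fan-in/fan-out, per the article

    def __init__(self, name, fn):
        self.name, self.fn = name, fn
        self.downstream = []              # (target corelet, channel mapping)

    def connect(self, other, mapping=None):
        """Wire this corelet's outputs to another corelet's inputs."""
        self.downstream.append((other, mapping or list(range(self.N))))
        return other                      # allow chained composition

    def fire(self, inputs):
        outputs = self.fn(inputs)         # this corelet's local function
        for target, mapping in self.downstream:
            target.fire([outputs[i] for i in mapping])

# Compose two corelets: a motion detector feeding a reporter.
motion = Corelet("motion", lambda s: [1 if x > 0.5 else 0 for x in s])
report = Corelet("report", lambda s: (print("active channels:", sum(s)), s)[1])

motion.connect(report)
motion.fire([0.7] * Corelet.N)            # -> "active channels: 256"
```

The design choice worth noting is encapsulation: each block hides its internal neurons and exposes only its 256 connections, which is what makes free recombination of a corelet library possible.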

IBM has created more than 150 corelets as part of a library that programmers can tap.

Eventually, IBM hopes to create a cognitive computer scaled to 100 trillion synapses.

 

Ref: New Computer Programming Language Imitates The Human Brain – io9
Ref: Cognitive Computing Programming Paradigm: A Corelet Language for Composing Networks of Neurosynaptic Cores – IBM Research [paper]

Robots and Elder Care

 

Sherry Turkle, a professor of science, technology and society at the Massachusetts Institute of Technology and author of the book “Alone Together: Why We Expect More From Technology and Less From Each Other,” did a series of studies with Paro, a therapeutic robot that looks like a baby harp seal and is meant to have a calming effect on patients with dementia or Alzheimer’s in health care facilities. The professor said she was troubled when she saw a 76-year-old woman share stories about her life with the robot.

“I felt like this isn’t amazing; this is sad. We have been reduced to spectators of a conversation that has no meaning,” she said. “Giving old people robots to talk to is a dystopian view that is being classified as utopian.” Professor Turkle said robots did not have the capacity to listen to or understand something personal, and that tricking patients into thinking they can is unethical.

[…]

“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.

[…]

As the actor Frank Langella, who plays Frank in the movie, told NPR last year: “Every one of us is going to go through aging and all sorts of processes, many people suffering from dementia,” he said. “And if you put a machine in there to help, the notion of making it about love and buddy-ness and warmth is kind of scary in a way, because that’s what you should be doing with other human beings.”

 

Ref: Disruptions: Helper Robots Are Steered, Tentatively, to Care for the Aging – The New York Times

Ethics & Virtual Brain

Sandberg quoted Jeremy Bentham, who famously asked, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” And indeed, scientists will need to be very sensitive to this point.

Sandberg also pointed out the work of Thomas Metzinger, who back in 2003 argued that it would be deeply unethical to develop conscious software — software that can suffer.

Metzinger had this to say about the prospect:

What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development — we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby — no representatives in any ethics committee.

 

Ref: Would it be evil to build a functional brain inside a computer? – io9

Driver Behavior in an Emergency Situation in the Automated Highway System

Twenty participants completed test rides in a normal and an Automated Highway System (AHS) vehicle in a driving simulator. Three AHS conditions were tested: driving in a platoon of cars at 1 s and at 0.25 s time headway, and driving as a platoon leader. Of particular interest was overreliance on the automated system, which was tested in an emergency condition where the automated system failed to function properly and the driver actively had to take over speed control to avoid an uncomfortably short headway of 0.1 m. In all conditions driver behavior and heart rate were recorded, and ratings of activation, workload, safety, risk, and acceptance of the AHS were collected after the test rides. Results show lower physiological and subjectively experienced levels of activation and mental effort in conditions of automated driving. In the emergency situation, only half of the participants took over control, which supports the idea that the AHS, like any form of automation, is susceptible to complacency.

 

Ref: What Will Happen When Your Driverless Car Crashes? – Paleofuture