Proceed with Caution toward the Self-Driving Car

Impressive and touching as this demonstration is, it is also deceptive. Google’s cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn’t purely to reassure pedestrians and other motorists. No system can yet match a human driver’s ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.

But if autonomy requires constant supervision, it can also discourage it. Back in his office, Reimer showed me a chart that illustrates the relationship between a driver’s performance and the number of things he or she is doing. Unsurprisingly, at one end of the chart, performance drops dramatically as distraction increases. At the other end, however, where there is too little to keep the driver engaged, performance drops as well. Someone who is daydreaming while the car drives itself will be unprepared to take control when necessary.
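
The chart Reimer describes is essentially an inverted U. As a rough illustration, here is a toy model of that curve; the shape follows the chart's description, but the function and its sweet spot are invented for the example, not Reimer's actual data:

```python
# A toy model (not Reimer's data) of the inverted-U relationship:
# driver performance peaks at moderate engagement and collapses at
# both extremes, underload as well as overload.

def driver_performance(workload: float) -> float:
    """Hypothetical performance score for a workload in [0, 1].

    0.0 = daydreaming while the car drives itself,
    1.0 = heavily distracted; the sweet spot sits in between.
    """
    optimum = 0.5  # assumed midpoint between underload and overload
    return max(0.0, 1.0 - ((workload - optimum) / optimum) ** 2)

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"workload={w:.2f} -> performance={driver_performance(w):.2f}")
```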

Reimer also worries that relying too much on autonomy could cause drivers’ skills to atrophy. A parallel can be found in airplanes, where increasing reliance on autopilot technology over the past few decades has been blamed for reducing pilots’ manual flying abilities. A 2011 draft report commissioned by the Federal Aviation Administration suggested that overreliance on automation may have contributed to several recent crashes involving pilot error. Reimer thinks the same could happen to drivers. “Highly automated driving will reduce the actual physical miles driven, and a driver who loses half the miles driven is not going to be the same driver afterward,” he says. “By and large we’re forgetting about an important problem: how do you connect the human brain to this technology?”

Norman argues that autonomy also needs to be more attuned to how the driver is feeling. “As machines start to take over more and more, they need to be socialized; they need to improve the way they communicate and interact,” he writes. Reimer and colleagues at MIT have shown how this might be achieved, with a system that estimates a driver’s mental workload and attentiveness by using sensors on the dashboard to measure heart rate, skin conductance, and eye movement. This setup would inform a kind of adaptive automation: the car would make more or less use of its autonomous features depending on the driver’s level of distraction or engagement.
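
To make the idea concrete, here is a minimal sketch of such an adaptive-automation loop. The three sensor channels come from the article, but the thresholds, the workload formula, and the autonomy levels are illustrative assumptions, not the MIT system's actual design:

```python
# A minimal sketch of adaptive automation driven by driver-state
# sensing. All numbers and formulas below are assumptions for
# illustration; the MIT estimator itself is not described in detail.

from dataclasses import dataclass

@dataclass
class DriverState:
    heart_rate: float        # beats per minute, from a dashboard sensor
    skin_conductance: float  # microsiemens, a rough proxy for arousal
    gaze_on_road: float      # fraction of recent time eyes were on the road

def estimated_workload(s: DriverState) -> float:
    """Collapse the three signals into a crude 0..1 workload score."""
    hr = min(max((s.heart_rate - 60) / 60, 0.0), 1.0)
    sc = min(s.skin_conductance / 10.0, 1.0)
    distraction = 1.0 - s.gaze_on_road
    return (hr + sc + distraction) / 3.0

def autonomy_level(s: DriverState) -> str:
    """Use more automation when the driver is overloaded or looking away."""
    w = estimated_workload(s)
    if w > 0.7:
        return "full assistance: car handles lane keeping and speed"
    if w > 0.4:
        return "partial assistance: lane-keeping nudges only"
    return "manual: driver is engaged, automation stays in the background"

print(autonomy_level(DriverState(110, 9.0, 0.2)))  # -> full assistance
```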

Ref: Proceed with Caution toward the Self-Driving Car – MIT Technology Review

Intelligent Robots Can Behave More Ethically in the Battlefield Than Humans

“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald C. Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the Army. “That’s the case I make.”

[…]

In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, with no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.
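
Arkin's published work describes an architecture along these lines, sometimes called an "ethical governor," that checks every proposed action against predefined constraints before it is allowed to proceed. The sketch below is a heavily simplified, hypothetical illustration of that general idea; the rule names and data structure are invented for the example, not his implementation:

```python
# Hypothetical sketch of a rule-based "governor" that vetoes actions
# violating predefined constraints. The rules and fields below are
# invented for illustration only.

RULES_OF_ENGAGEMENT = [
    lambda a: not a["target_is_noncombatant"],
    lambda a: a["positive_identification"],
    lambda a: a["expected_collateral"] <= a["military_necessity"],
]

def governor_permits(action: dict) -> bool:
    """Permit an action only if every constraint holds.

    Unlike a frightened or angry human, the check runs the same way
    every time: there is no self-preservation instinct to override it.
    """
    return all(rule(action) for rule in RULES_OF_ENGAGEMENT)

proposed = {
    "target_is_noncombatant": False,
    "positive_identification": False,  # no positive ID yet
    "expected_collateral": 0,
    "military_necessity": 1,
}
print("engage" if governor_permits(proposed) else "hold fire")  # hold fire
```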

His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.

[…]

“It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield,” Dr. Arkin wrote in his report, “but I am convinced that they can perform more ethically than human soldiers are capable of.”

[…]

Daniel C. Dennett, a philosopher and cognitive scientist at Tufts University, agrees. “If we talk about training a robot to make distinctions that track moral relevance, that’s not beyond the pale at all,” he said. But, he added, letting machines make ethical judgments is “a moral issue that people should think about.”

Ref: A Soldier, Taking Orders From Its Ethical Judgment Center – NYTimes
Ref: MissionLab

Driverless Cars Are Further Away Than You Think

Several computers inside the car’s trunk perform split-second measurements and calculations, processing data pouring in from the sensors. Software assigns a value to each lane of the road based on the car’s speed and the behavior of nearby vehicles. Using a probabilistic technique that helps cancel out inaccuracies in sensor readings, this software decides whether to switch to another lane, to attempt to pass the car ahead, or to get out of the way of a vehicle approaching from behind. Commands are relayed to a separate computer that controls acceleration, braking, and steering. Yet another computer system monitors the behavior of everything involved with autonomous driving for signs of malfunction.
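
The article names neither the scoring function nor the probabilistic technique, but the general pattern is easy to sketch. The code below uses exponential smoothing as a simple stand-in for the (likely Bayesian) filtering that cancels out sensor noise; the smoothing weight and the lane scores are invented for illustration:

```python
# A minimal sketch of the decision layer described above: noisy
# per-lane scores are smoothed over time, and the planner picks the
# best lane. All numbers here are assumptions, not BMW's values.

class LanePlanner:
    def __init__(self, num_lanes: int, smoothing: float = 0.8):
        self.scores = [0.0] * num_lanes
        self.smoothing = smoothing  # how much to trust past estimates

    def update(self, raw_scores: list[float]) -> int:
        """Blend noisy per-lane readings with previous estimates,
        then return the index of the most attractive lane."""
        for i, raw in enumerate(raw_scores):
            self.scores[i] = (self.smoothing * self.scores[i]
                              + (1 - self.smoothing) * raw)
        return max(range(len(self.scores)), key=self.scores.__getitem__)

planner = LanePlanner(num_lanes=3)
# One noisy reading per cycle: here, the rightmost lane is opening up.
for reading in ([0.2, 0.5, 0.4], [0.2, 0.4, 0.7], [0.1, 0.4, 0.8]):
    best = planner.update(reading)
print("switch to lane", best)  # the chosen command goes to the
                               # separate control computer
```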

[…]

For one thing, many of the sensors and computers found in BMW’s car, and in other prototypes, are too expensive to be deployed widely. And achieving even more complete automation will probably mean using more advanced, more expensive sensors and computers. The spinning laser instrument, or LIDAR, seen on the roof of Google’s cars, for instance, provides the best 3-D image of the surrounding world, accurate down to two centimeters, but sells for around $80,000. Such instruments will also need to be miniaturized and redesigned, adding more cost, since few car designers would slap the existing ones on top of a sleek new model.

Cost will be just one factor, though. While several U.S. states have passed laws permitting autonomous cars to be tested on their roads, the National Highway Traffic Safety Administration has yet to devise regulations for testing and certifying the safety and reliability of autonomous features. Two major international treaties, the Vienna Convention on Road Traffic and the Geneva Convention on Road Traffic, may need to be changed for the cars to be used in Europe and the United States, as both documents state that a driver must be in full control of a vehicle at all times.

Most daunting, however, are the remaining computer science and artificial-intelligence challenges. Automated driving will at first be limited to relatively simple situations, mainly highway driving, because the technology still can’t respond to uncertainties posed by oncoming traffic, rotaries, and pedestrians. And drivers will almost certainly be expected to assume some sort of supervisory role, remaining ready to retake control as soon as the system gets outside its comfort zone.

[…]

The relationship between human and robot driver could be surprisingly fraught. The problem, as I discovered during my BMW test drive, is that it’s all too easy to lose focus, and difficult to get it back. The difficulty of reëngaging distracted drivers is an issue that Bryan Reimer, a research scientist in MIT’s Age Lab, has well documented (see “Proceed with Caution toward the Self-Driving Car,” May/June 2013). Perhaps the “most inhibiting factors” in the development of driverless cars, he suggests, “will be factors related to the human experience.”

In an effort to address this issue, carmakers are thinking about ways to prevent drivers from becoming too distracted, and ways to bring them back to the driving task as smoothly as possible. This may mean monitoring drivers’ attention and alerting them if they’re becoming too disengaged. “The first generations [of autonomous cars] are going to require a driver to intervene at certain points,” Clifford Nass, codirector of Stanford University’s Center for Automotive Research, told me. “It turns out that may be the most dangerous moment for autonomous vehicles. We may have this terrible irony that when the car is driving autonomously it is much safer, but because of the inability of humans to get back in the loop it may ultimately be less safe.”

An important challenge with a system that drives all by itself, but only some of the time, is that it must be able to predict when it may be about to fail, to give the driver enough time to take over. This ability is limited by the range of a car’s sensors and by the inherent difficulty of predicting the outcome of a complex situation. “Maybe the driver is completely distracted,” Werner Huber said. “He takes five, six, seven seconds to come back to the driving task—that means the car has to know [in advance] when its limitation is reached. The challenge is very big.”
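
The arithmetic behind Huber's concern is simple: the car's usable sensing range, divided by its speed, is the time budget it has to hand control back. Here is a sketch of that check; the takeover time comes from Huber's quote, while the range estimate and the numbers in the example are assumptions:

```python
# A sketch of the handoff logic described above: warn the driver
# before the car reaches its own limits. Numbers are illustrative.

DRIVER_TAKEOVER_TIME_S = 7.0  # Huber's worst case: five to seven seconds

def seconds_until_limit(sensor_range_m: float, speed_mps: float) -> float:
    """Crude time budget: how long until the car outruns what its
    sensors can currently see and reason about."""
    return sensor_range_m / max(speed_mps, 0.1)

def should_alert_driver(sensor_range_m: float, speed_mps: float) -> bool:
    """Alert as soon as the remaining budget dips below the time a
    distracted driver needs to come back into the loop."""
    return seconds_until_limit(sensor_range_m, speed_mps) < DRIVER_TAKEOVER_TIME_S

# At 130 km/h (~36 m/s) with 200 m of usable sensor range, the car has
# about 5.5 seconds of lookahead: already inside the danger zone.
print(should_alert_driver(200.0, 36.0))  # True
```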

Before traveling to Germany, I visited John Leonard, an MIT professor who works on robot navigation, to find out more about the limits of vehicle automation. Leonard led one of the teams involved in the DARPA Urban Challenge, an event in 2007 that saw autonomous vehicles race across mocked-up city streets, complete with stop-sign intersections and moving traffic. The challenge inspired new research and new interest in autonomous driving, but Leonard is restrained in his enthusiasm for the commercial trajectory that autonomous driving has taken since then. “Some of these fundamental questions, about representing the world and being able to predict what might happen—we might still be decades behind humans with our machine technology,” he told me. “There are major, unsolved, difficult issues here. We have to be careful that we don’t overhype how well it works.”

Leonard suggested that much of the technology that has helped autonomous cars deal with complex urban environments in research projects—some of which is used in Google’s cars today—may never be cheap or compact enough to be employed in commercially available vehicles. This includes not just the LIDAR but also an inertial navigation system, which provides precise positioning information by monitoring the vehicle’s own movement and combining the resulting data with differential GPS and a highly accurate digital map. What’s more, poor weather can significantly degrade the reliability of sensors, Leonard said, and it may not always be feasible to rely heavily on a digital map, as so many prototype systems do. “If the system relies on a very accurate prior map, then it has to be robust to the situation of that map being wrong, and the work of keeping those maps up to date shouldn’t be underestimated,” he said.
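
The fusion Leonard describes can be sketched in one dimension: dead-reckon from the inertial sensors, then nudge the estimate toward each noisy but absolute GPS fix as it arrives. The gains and measurements below are assumptions for illustration; production systems track a much richer state with full Kalman filtering:

```python
# A one-dimensional complementary-filter sketch of INS + GPS fusion.
# All gains and measurements are made up for the example.

def fuse_position(ins_position: float, gps_position: float,
                  gps_trust: float = 0.02) -> float:
    """Dead-reckoned INS drifts smoothly; GPS is noisy but absolute.
    Blend them, leaning mostly on the INS between GPS corrections."""
    return (1 - gps_trust) * ins_position + gps_trust * gps_position

position = 0.0
velocity = 30.0  # m/s, assumed constant for the sketch
dt = 0.1         # 10 Hz inertial updates
for step in range(50):
    position += velocity * dt          # INS-style dead reckoning
    if step % 10 == 0:                 # a GPS fix arrives at 1 Hz
        gps_fix = position + 1.5       # noisy absolute measurement
        position = fuse_position(position, gps_fix)
print(f"fused position after 5 s: {position:.1f} m")
```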

Ref: Driverless Cars Are Further Away Than You Think – MIT Technology Review

How algorithms rule the world

Parpas stresses that algorithms are not a new phenomenon: “They’ve been used for decades – back to Alan Turing and the codebreakers, and beyond – but the current interest in them is due to the vast amounts of data now being generated and the need to process and understand it. They are now integrated into our lives. On the one hand, they are good because they free up our time and do mundane processes on our behalf. The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy. It’s also about how models are being used to predict the future. There is currently an awkward marriage between data and algorithms. As technology evolves, there will be mistakes, but it is important to remember they are just a tool. We shouldn’t blame our tools.”

[…]

“By far the most complicated algorithms are to be found in science, where they are used to design new drugs or model the climate,” says Parpas. “But they are done within a controlled environment with clean data. It is easy to see if there is a bug in the algorithm. The difficulties come when they are used in the social sciences and financial trading, where there is less understanding of what the model and output should be, and where they are operating in a more dynamic environment. Scientists will take years to validate their algorithm, whereas a trader has just days to do so in a volatile environment.”

Ref: How algorithms rule the world – The Guardian

Computer H14

By the early ’60s, Byrne explains, companies had grown to depend on enormous IBM mainframe computers, and they were forced to install a new mainframe at each and every one of their branch offices. AT&T aimed to replace all those duplicate machines with a system that would allow a single mainframe to communicate with several remote locations via high-speed data connections. Ma Bell already had a near monopoly on voice communications, and this was its next conquest.

The rub was that many people feared a robopocalypse — a dystopian world where machines made man obsolete. Ma Bell also needed to reassure people that its machine-to-machine communication wouldn’t take over the planet. And what better way to ease their fears than Computer H14?

[…]

Luckily, H14 diagnoses the problem — a lapse in data communications and a missing circuit — and he provides a set of “flawless” recommendations that result in increased productivity, improved performance, and gobs of extra time for Charlie Magnetico — played by Juhl — to think all sorts of big thoughts. In short, AT&T’s machine-to-machine communications save the day.

But in the end, this film conveys much the same message as the one that came before it: Machines can make life easier, but not without the help of humans. H14’s recommendations are flawless only until one of those missiles nearly lands on his head.

Ref: Tech Time Warp of the Week: Jim Henson’s Muppet Computer, 1963 – Wired