Category Archives: W – now

Obviously Drivers Are Already Abusing Tesla’s Autopilot

Arriving in New York in record time, without being arrested or killed, is a personal victory for the drivers. More than that, though, it highlights how quickly and enthusiastically autonomous technology is likely to be adopted, and how tricky it may be to keep in check once drivers get their first taste of freedom behind the wheel.

[…]

Autopilot caused a few scares, Roy says, largely because the car was moving so quickly. “There were probably three or four moments where we were on autonomous mode at 90 miles an hour, and hands off the wheel,” and the road curved, Roy says. Where a trained driver would aim for the apex—the geometric center of the turn—to maintain speed and control, the car follows the lane lines. “If I hadn’t had my hands there, ready to take over, the car would have gone off the road and killed us.” He’s not annoyed by this, though. “That’s my fault for setting a speed faster than the system’s capable of compensating.”

If someone causes an accident by relying too heavily on Tesla’s system, Tesla may not get off the hook by saying, “Hey, we told ’em to be careful.”

 

Ref: Obviously Drivers Are Already Abusing Tesla’s Autopilot – Wired

Google’s Driverless Cars Run Into Problem: Cars With Drivers

Last month, as one of Google’s self-driving cars approached a crosswalk, it did what it was supposed to do when it slowed to allow a pedestrian to cross, prompting its “safety driver” to apply the brakes. The pedestrian was fine, but not so much Google’s car, which was hit from behind by a human-driven sedan.

Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.
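
The deadlock is easy to caricature in a few lines. Below is a minimal sketch, assuming a hypothetical letter-of-the-law yield rule (this is not Google's actual code): the robot proceeds only once every other vehicle has come to a complete stop, so drivers who keep inching forward starve it forever.

```python
# Hypothetical caricature of a strictly rule-following yield policy.
def may_proceed(other_vehicles):
    """Go only once every other vehicle has fully stopped and yielded."""
    return all(v["speed_mps"] == 0.0 for v in other_vehicles)

# Human drivers keep creeping forward, so their speed never hits exactly zero.
humans = [{"speed_mps": 0.3}, {"speed_mps": 0.1}]
print(may_proceed(humans))  # False on every tick -> the robot sits paralyzed
```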

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book. “The real problem is that the car is too safe,” said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles.

“They have to learn to be aggressive in the right amount, and the right amount depends on the culture.”

[…]

Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, said that one thing he had learned from the project was that human drivers needed to be “less idiotic.”

 

Ref: Google’s Driverless Cars Run Into Problem: Cars With Drivers – The New York Times

Children Beating Up Robot Inspires New Escape Maneuver System

Now, a new study by a team of Japanese researchers shows that, in certain situations, children may not be as empathetic towards robots as we’d previously thought, with gangs of unsupervised tykes repeatedly punching, kicking, and shaking a robot in a Japanese mall.

[…]

Next, they designed an abuse-evading algorithm to help the robot avoid situations where tiny humans might gang up on it. Literally tiny humans: the robot is programmed to run away from people who are below a certain height and escape in the direction of taller people. When it encounters a human, the system calculates the probability of abuse based on interaction time, pedestrian density, and the presence of people above or below 1.4 meters (4 feet 6 inches) in height. If the robot is statistically in danger, it changes its course towards a more crowded area or a taller person. This ensures that an adult is there to intervene when one of the little brats decides to pound the robot’s head with a bottle (which only happened a couple times).
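
Here is a rough sketch of that escape logic, with invented weights and a toy stand-in for the paper's statistical model; only the 1.4 m height cutoff comes from the article.

```python
# Toy stand-in for the abuse-probability model; names and weights are invented.
CHILD_HEIGHT_M = 1.4  # the article's cutoff: roughly 4 ft 6 in

def abuse_probability(interaction_time_s, pedestrian_density, heights_m):
    """Risk grows with long interactions, empty surroundings, and unsupervised
    groups of child-height pedestrians."""
    children = sum(h < CHILD_HEIGHT_M for h in heights_m)
    adults = len(heights_m) - children
    risk = 0.1 * min(interaction_time_s / 60.0, 1.0)
    risk += 0.4 if children >= 2 and adults == 0 else 0.0
    risk += 0.3 * max(0.0, 1.0 - pedestrian_density)  # emptier area, higher risk
    return min(risk, 1.0)

def plan_escape(risk, taller_person_pos, crowded_area_pos):
    """If the robot is statistically in danger, head for a taller person or a
    more crowded area where an adult can intervene."""
    if risk < 0.5:
        return None  # keep interacting
    return taller_person_pos if taller_person_pos is not None else crowded_area_pos
```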

Ref: Children Beating Up Robot Inspires New Escape Maneuver System – IEEE Spectrum

Hackers Can Disable a Sniper Rifle—Or Change Its Target

At the Black Hat hacker conference in two weeks, security researchers Runa Sandvik and Michael Auger plan to present the results of a year of work hacking a pair of $13,000 TrackingPoint self-aiming rifles. The married hacker couple have developed a set of techniques that could allow an attacker to compromise the rifle via its Wi-Fi connection and exploit vulnerabilities in its software. Their tricks can change variables in the scope’s calculations that make the rifle inexplicably miss its target, permanently disable the scope’s computer, or even prevent the gun from firing.

Ref: Hackers Can Disable a Sniper Rifle—Or Change Its Target – Wired

Hackers Remotely Kill a Jeep on the Highway

The Jeep’s strange behavior wasn’t entirely unexpected. I’d come to St. Louis to be Miller and Valasek’s digital crash-test dummy, a willing subject on whom they could test the car-hacking research they’d been doing over the past year. The result of their work was a hacking technique—what the security industry calls a zero-day exploit—that can target Jeep Cherokees and give the attacker wireless control, via the Internet, to any of thousands of vehicles. Their code is an automaker’s nightmare: software that lets hackers send commands through the Jeep’s entertainment system to its dashboard functions, steering, brakes, and transmission, all from a laptop that may be across the country.

[…]

Immediately my accelerator stopped working. As I frantically pressed the pedal and watched the RPMs climb, the Jeep lost half its speed, then slowed to a crawl. This occurred just as I reached a long overpass, with no shoulder to offer an escape. The experiment had ceased to be fun.

At that point, the interstate began to slope upward, so the Jeep lost more momentum and barely crept forward. Cars lined up behind my bumper before passing me, honking. I could see an 18-wheeler approaching in my rearview mirror. I hoped its driver saw me, too, and could tell I was paralyzed on the highway.

[…]

All of this is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone. Uconnect, an Internet-connected computer feature in hundreds of thousands of Fiat Chrysler cars, SUVs, and trucks, controls the vehicle’s entertainment and navigation, enables phone calls, and even offers a Wi-Fi hot spot. And thanks to one vulnerable element, which Miller and Valasek won’t identify until their Black Hat talk, Uconnect’s cellular connection also lets anyone who knows the car’s IP address gain access from anywhere in the country. “From an attacker’s perspective, it’s a super nice vulnerability,” Miller says.

Ref: Hackers Remotely Kill a Jeep on the Highway—With Me in It – Wired

 

Should a Driverless Car Decide Who Lives or Dies?

The industry is promising a glittering future of autonomous vehicles moving in harmony like schools of fish. That can’t happen, however, until carmakers answer the kinds of thorny philosophical questions explored in science fiction since Isaac Asimov wrote his robot series last century. For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?

Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of gray. Ford, General Motors, Audi, Renault and Toyota are all beating a path to Stanford University’s Center for Automotive Research, which is programming cars to make ethical decisions and see what happens.

“This issue is definitely in the crosshairs,” says Chris Gerdes, who runs the lab and recently met with the chief executives of Ford and GM to discuss the topic. “They’re very aware of the issues and the challenges because their programmers are actively trying to make these decisions today.”

[…]

That’s why we shouldn’t leave those decisions up to robots, says Wendell Wallach, author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.”

“The way forward is to create an absolute principle that machines do not make life and death decisions,” says Wallach, a scholar at the Interdisciplinary Center for Bioethics at Yale University. “There has to be a human in the loop. You end up with a pretty lawless society if people think they won’t be held responsible for the actions they take.”

Ref: Should a Driverless Car Decide Who Lives or Dies? – Bloomberg

This Artificial Intelligence Pioneer Has a Few Concerns

During my thesis research in the ’80s, I started thinking about rational decision-making and the problem that it’s actually impossible. If you were rational you would think: Here’s my current state, here are the actions I could do right now, and after that I can do those actions and then those actions and then those actions; which path is guaranteed to lead to my goal? The definition of rational behavior requires you to optimize over the entire future of the universe. It’s just completely infeasible computationally.

It didn’t make much sense that we should define what we’re trying to do in AI as something that’s impossible, so I tried to figure out: How do we really make decisions?
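
To put rough, purely illustrative numbers on that infeasibility: even a modest action set and a short horizon blow up past anything a computer could enumerate.

```python
# Illustrative arithmetic only: exhaustive lookahead grows exponentially.
actions_per_step = 10   # a small action set
horizon = 50            # only 50 decisions ahead, never mind a whole lifetime
print(actions_per_step ** horizon)  # 10**50 candidate futures to compare
```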

So, how do we do it?

One trick is to think about a short horizon and then guess what the rest of the future is going to look like. So chess programs, for example—if they were rational they would only play moves that guarantee checkmate, but they don’t do that. Instead they look ahead a dozen moves into the future and make a guess about how useful those states are, and then they choose a move that they hope leads to one of the good states.
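
A minimal sketch of that "short horizon plus a guess" trick in chess-engine style follows; evaluate, legal_moves, and apply_move are assumed helper functions, not any real engine's API.

```python
# Depth-limited lookahead with a heuristic guess at the frontier (sketch only).
def search(state, depth, evaluate, legal_moves, apply_move, maximizing=True):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None  # the "guess" about the rest of the future
    best_score, best_move = None, None
    for move in moves:
        score, _ = search(apply_move(state, move), depth - 1,
                          evaluate, legal_moves, apply_move, not maximizing)
        if (best_score is None
                or (maximizing and score > best_score)
                or (not maximizing and score < best_score)):
            best_score, best_move = score, move
    return best_score, best_move
```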

Another thing that’s really essential is to think about the decision problem at multiple levels of abstraction, so “hierarchical decision making.” A person does roughly 20 trillion physical actions in their lifetime. Coming to this conference to give a talk works out to 1.3 billion or something. If you were rational you’d be trying to look ahead 1.3 billion steps—completely, absurdly impossible. So the way humans manage this is by having this very rich store of abstract, high-level actions. You don’t think, “First I can either move my left foot or my right foot, and then after that I can either…” You think, “I’ll go on Expedia and book a flight. When I land, I’ll take a taxi.” And that’s it. I don’t think about it anymore until I actually get off the plane at the airport and look for the sign that says “taxi”—then I get down into more detail. This is how we live our lives, basically. The future is spread out, with a lot of detail very close to us in time, but these big chunks where we’ve made commitments to very abstract actions, like, “get a Ph.D.,” “have children.”
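
A toy illustration of that hierarchy, with invented action names: the plan is held at the abstract level, and only the action that is about to be executed gets expanded into primitive steps.

```python
# Hypothetical hierarchical plan: coarse far away, detailed only when needed.
ABSTRACT_PLAN = ["book_flight_on_expedia", "fly_to_conference", "take_taxi_to_venue"]

REFINEMENTS = {
    "take_taxi_to_venue": ["exit_plane", "find_taxi_sign", "queue", "ride_to_venue"],
    # other abstract actions are refined only once they come up
}

def expand_next_action(plan):
    """Refine just the first abstract action; the rest of the future stays coarse."""
    head, *rest = plan
    return REFINEMENTS.get(head, [head]) + rest

print(expand_next_action(["take_taxi_to_venue"]))
```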

What about differences in human values?

That’s an intrinsic problem. You could say machines should err on the side of doing nothing in areas where there’s a conflict of values. That might be difficult. I think we will have to build in these value functions. If you want to have a domestic robot in your house, it has to share a pretty good cross-section of human values; otherwise it’s going to do pretty stupid things, like put the cat in the oven for dinner because there’s no food in the fridge and the kids are hungry. Real life is full of these tradeoffs. If the machine makes these tradeoffs in ways that reveal that it just doesn’t get it—that it’s just missing some chunk of what’s obvious to humans—then you’re not going to want that thing in your house.
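
One hedged way to read "building in these value functions" is as hard constraints layered over task utility; the action names and scores below are invented purely to mirror the hungry-kids example.

```python
# Invented example: task utility maximized only over value-respecting actions.
FORBIDDEN = {"cook_the_cat", "serve_spoiled_food"}  # human-supplied hard constraints

def choose_action(candidates, utility):
    """Pick the best allowed action; if nothing is allowed, do nothing and ask."""
    allowed = [a for a in candidates if a not in FORBIDDEN]
    return max(allowed, key=utility) if allowed else "ask_human"

utility = {"cook_the_cat": 9, "order_pizza": 7, "ask_human": 1}.get
print(choose_action(["cook_the_cat", "order_pizza", "ask_human"], utility))
# -> 'order_pizza': the unconstrained maximizer would have picked the cat
```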

I don’t see any real way around the fact that there’s going to be, in some sense, a values industry. And I also think there’s a huge economic incentive to get it right. It only takes one or two things like a domestic robot putting the cat in the oven for dinner for people to lose confidence and not buy them.

You’ve argued that we need to be able to mathematically verify the behavior of AI under all possible circumstances. How would that work?

One of the difficulties people point to is that a system can arbitrarily produce a new version of itself that has different goals. That’s one of the scenarios that science fiction writers always talk about; somehow, the machine spontaneously gets this goal of defeating the human race. So the question is: Could you prove that your systems can’t ever, no matter how smart they are, overwrite their original goals as set by the humans?

It would be relatively easy to prove that the DQN system, as it’s written, could never change its goal of optimizing that score. Now, there is a hack that people talk about called “wire-heading” where you could actually go into the console of the Atari game and physically change the thing that produces the score on the screen. At the moment that’s not feasible for DQN, because its scope of action is entirely within the game itself; it doesn’t have a robot arm. But that’s a serious problem if the machine has a scope of action in the real world. So, could you prove that your system is designed in such a way that it could never change the mechanism by which the score is presented to it, even though it’s within its scope of action? That’s a more difficult proof.
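
Below is a caricature of that "scope of action" argument, with invented action names: whether wire-heading is even expressible depends on what the agent is allowed to do, and that is the property one would try to prove.

```python
# Invented illustration: the reward channel is safe only if no available
# action can write to it.
GAME_ACTIONS = {"left", "right", "fire", "noop"}        # in-game moves only
WORLD_ACTIONS = GAME_ACTIONS | {"edit_score_memory"}    # adds a physical tamper action
SCORE_WRITERS = {"edit_score_memory"}

def reward(state):
    """The agent reads the score; it never gets to define it."""
    return state["score"]

def wireheading_possible(action_space):
    """Caricatured proof obligation: does any action in scope touch the score channel?"""
    return bool(action_space & SCORE_WRITERS)

assert not wireheading_possible(GAME_ACTIONS)   # DQN-style agent: the easy proof goes through
assert wireheading_possible(WORLD_ACTIONS)      # embodied agent: it no longer does
```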

Are there any advances in this direction that you think hold promise?

There’s an area emerging called “cyber-physical systems” about systems that couple computers to the real world. With a cyber-physical system, you’ve got a bunch of bits representing an air traffic control program, and then you’ve got some real airplanes, and what you care about is that no airplanes collide. You’re trying to prove a theorem about the combination of the bits and the physical world. What you would do is write a very conservative mathematical description of the physical world—airplanes can accelerate within such-and-such envelope—and your theorems would still be true in the real world as long as the real world is somewhere inside the envelope of behaviors.
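
A toy one-dimensional version of that idea, with made-up parameters: describe the physical world only by an envelope (bounded closing speed and relative acceleration) and certify separation against the worst case inside it.

```python
# Conservative check: if this returns True, the aircraft stay at least d_min_m
# apart for every behavior inside the assumed envelope (1-D toy model).
def separation_certified(d0_m, v_close_mps, a_rel_max_mps2, horizon_s, d_min_m):
    worst_case_approach = v_close_mps * horizon_s + 0.5 * a_rel_max_mps2 * horizon_s ** 2
    return d0_m - worst_case_approach >= d_min_m

# Two aircraft 20 km apart, closing at 250 m/s, relative acceleration under 6 m/s^2:
print(separation_certified(20_000, 250, 6.0, 30, 5_000))  # True for a 30 s horizon
```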

Ref: This Artificial Intelligence Pioneer Has a Few Concerns – Wired

Feds Say That Banned Researcher Commandeered a Plane

Chris Roberts, a security researcher with One World Labs, told the FBI agent during an interview in February that he had hacked the in-flight entertainment system, or IFE, on an airplane and overwrote code on the plane’s Thrust Management Computer while aboard the flight. He was able to issue a climb command and make the plane briefly change course, the document states.

“He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights,” FBI Special Agent Mark Hurley wrote in his warrant application (.pdf). “He also stated that he used Vortex software after comprising/exploiting or ‘hacking’ the airplane’s networks. He used the software to monitor traffic from the cockpit system.”

[…]

He obtained physical access to the networks through the Seat Electronic Box, or SEB. These are installed two to a row, on each side of the aisle under passenger seats, on certain planes. After removing the cover to the SEB by “wiggling and Squeezing the box,” Roberts told agents he attached a Cat6 ethernet cable, with a modified connector, to the box and to his laptop and then used default IDs and passwords to gain access to the inflight entertainment system. Once on that network, he was able to gain access to other systems on the planes.

Ref: Feds Say That Banned Researcher Commandeered a Plane – Wired