Arriving in New York in record time, without being arrested or killed, is a personal victory for the drivers. More than that, though, it highlights how quickly and enthusiastically autonomous technology is likely to be adopted, and how tricky it may be to keep in check once drivers get their first taste of freedom behind the wheel.
Autopilot caused a few scares, Roy says, largely because the car was moving so quickly. “There were probably three or four moments where we were on autonomous mode at 90 miles an hour, and hands off the wheel,” and the road curved, Roy says. Where a trained driver would aim for the apex—the innermost point of the turn—to maintain speed and control, the car follows the lane lines. “If I hadn’t had my hands there, ready to take over, the car would have gone off the road and killed us.” He’s not annoyed by this, though. “That’s my fault for setting a speed faster than the system’s capable of compensating.”
If someone causes an accident by relying too heavily on Tesla’s system, Tesla may not get off the hook by saying, “Hey, we told ’em to be careful.”
Ref: Obviously Drivers Are Already Abusing Tesla’s Autopilot – Wired
The Jeep’s strange behavior wasn’t entirely unexpected. I’d come to St. Louis to be Miller and Valasek’s digital crash-test dummy, a willing subject on whom they could test the car-hacking research they’d been doing over the past year. The result of their work was a hacking technique—what the security industry calls a zero-day exploit—that can target Jeep Cherokees and give the attacker wireless control, via the Internet, over any of thousands of vehicles. Their code is an automaker’s nightmare: software that lets hackers send commands through the Jeep’s entertainment system to its dashboard functions, steering, brakes, and transmission, all from a laptop that may be across the country.
Immediately my accelerator stopped working. As I frantically pressed the pedal and watched the RPMs climb, the Jeep lost half its speed, then slowed to a crawl. This occurred just as I reached a long overpass, with no shoulder to offer an escape. The experiment had ceased to be fun.
At that point, the interstate began to slope upward, so the Jeep lost more momentum and barely crept forward. Cars lined up behind my bumper before passing me, honking. I could see an 18-wheeler approaching in my rearview mirror. I hoped its driver saw me, too, and could tell I was paralyzed on the highway.
All of this is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone. Uconnect, an Internet-connected computer feature in hundreds of thousands of Fiat Chrysler cars, SUVs, and trucks, controls the vehicle’s entertainment and navigation, enables phone calls, and even offers a Wi-Fi hot spot. And thanks to one vulnerable element, which Miller and Valasek won’t identify until their Black Hat talk, Uconnect’s cellular connection also lets anyone who knows the car’s IP address gain access from anywhere in the country. “From an attacker’s perspective, it’s a super nice vulnerability,” Miller says.
Ref: Hackers Remotely Kill a Jeep on the Highway—With Me in It – Wired
The image features a hybrid panoply of squirrels, slugs, dogs and tiny horse legs as well as fractal sequences of houses, cars, and streets—and a lot of eyes. Convolutional neural networks of this kind are commonly trained on recognition tasks such as facial recognition: once an image’s features have been algorithmically encoded as a vector, the network can match it against similar images in a database.
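The matching step gestured at above — comparing one image’s encoded features against a database of stored images — reduces to nearest-neighbor search over feature vectors. Below is a toy stand-in: the 128-dimensional vectors are random numbers where a real system would use activations from a trained CNN, and every name (`img_42`, `nearest_image`, and so on) is invented for illustration.

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest_image(query, database):
    """Return the name of the stored feature vector most similar to the query."""
    return max(database, key=lambda name: cosine_similarity(query, database[name]))

random.seed(0)  # deterministic toy data
db = {f"img_{i}": [random.gauss(0, 1) for _ in range(128)] for i in range(100)}

# A query that is a slightly perturbed copy of one database entry:
query = [x + random.gauss(0, 0.05) for x in db["img_42"]]
print(nearest_image(query, db))  # img_42
```

In practice the vectors come from a late layer of the network and the search is done with an approximate-nearest-neighbor index rather than a linear scan, but the principle is the same.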
Since its release, the image has been met with skepticism on Reddit. Users have weighed in with polarized comments: some are convinced that the image is simply an elaborate hoax by a (human) visual artist. Others argue that the multiplicity of eyes and the robotically logical but visually discordant patterns are typical of an algorithm making sense of a command, and they support their arguments with CNN image-classification papers containing similar visual examples.
Ref: Was This Psychedelic Image Made by Man or Machine? – Creators Project
Chris Roberts, a security researcher with One World Labs, told the FBI agent during an interview in February that he had hacked the in-flight entertainment system, or IFE, on an airplane and overwrote code on the plane’s Thrust Management Computer while aboard the flight. He was able to issue a climb command and make the plane briefly change course, the document states.
“He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights,” FBI Special Agent Mark Hurley wrote in his warrant application (.pdf). “He also stated that he used Vortex software after comprising/exploiting or ‘hacking’ the airplane’s networks. He used the software to monitor traffic from the cockpit system.”
He obtained physical access to the networks through the Seat Electronic Box, or SEB. These are installed two to a row, on each side of the aisle under passenger seats, on certain planes. After removing the cover to the SEB by “wiggling and Squeezing the box,” Roberts told agents he attached a Cat6 ethernet cable, with a modified connector, to the box and to his laptop and then used default IDs and passwords to gain access to the in-flight entertainment system. Once on that network, he was able to gain access to other systems on the planes.
Ref: Feds Say That Banned Researcher Commandeered a Plane – Wired
Militants in Iraq have used $26 off-the-shelf software to intercept live video feeds from U.S. Predator drones, potentially providing them with information they need to evade or monitor U.S. military operations.
Senior defense and intelligence officials said Iranian-backed insurgents intercepted the video feeds by taking advantage of an unprotected communications link in some of the remotely flown planes’ systems. Shiite fighters in Iraq used software programs such as SkyGrabber — available for as little as $25.95 on the Internet — to regularly capture drone video feeds, according to a person familiar with reports on the matter.
The drone intercepts mark the emergence of a shadow cyber war within the U.S.-led conflicts overseas. They also point to a potentially serious vulnerability in Washington’s growing network of unmanned drones, which have become the American weapon of choice in both Afghanistan and Pakistan.
Last December, U.S. military personnel in Iraq discovered copies of Predator drone feeds on a laptop belonging to a Shiite militant, according to a person familiar with reports on the matter. “There was evidence this was not a one-time deal,” this person said. The U.S. accuses Iran of providing weapons, money and training to Shiite fighters in Iraq, a charge that Tehran has long denied.
The militants use programs such as SkyGrabber, from Russian company SkySoftware. Andrew Solonikov, one of the software’s developers, said he was unaware that his software could be used to intercept drone feeds. “It was developed to intercept music, photos, video, programs and other content that other users download from the Internet — no military data or other commercial data, only free legal content,” he said by email from Russia.
Ref: Insurgents Hack U.S. Drones – WallStreetJournal
Security researchers Charlie Miller and Chris Valasek have announced that at the Black Hat and Defcon security conferences this August, they plan to wirelessly hack the digital network of a car or truck. That network, known as the CAN bus, is the connected system of computers that influences everything from the vehicle’s horn and seat belts to its steering and brakes. And their upcoming public demonstrations may be the most definitive proof yet of cars’ vulnerability to remote attacks, the result of more than two years of work since Miller and Valasek first received a DARPA grant to investigate cars’ security in 2013.
“We will show the reality of car hacking by demonstrating exactly how a remote attack works against an unaltered, factory vehicle,” the hackers write in an abstract of their talk that appeared on the Black Hat website last week. “Starting with remote exploitation, we will show how to pivot through different pieces of the vehicle’s hardware in order to be able to send messages on the CAN bus to critical electronic control units. We will conclude by showing several CAN messages that affect physical systems of the vehicle.”
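The CAN messages the abstract refers to are short broadcast frames: a standard frame carries an 11-bit arbitration ID plus at most eight data bytes, with no source address and no authentication — which is why a compromised radio unit can address the brakes or steering at all. A minimal sketch of that frame format follows; the ID and payload are invented for illustration, not real ECU commands.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CanFrame:
    """Classical CAN data frame: an 11-bit arbitration ID plus up to
    8 data bytes. Lower IDs win bus arbitration, so they effectively
    carry higher priority. Note what the format lacks: no sender
    address and no authentication field."""
    arbitration_id: int   # 0x000-0x7FF for standard frames
    data: bytes = b""

    def __post_init__(self):
        if not 0 <= self.arbitration_id <= 0x7FF:
            raise ValueError("standard CAN arbitration IDs are 11 bits")
        if len(self.data) > 8:
            raise ValueError("classical CAN payloads are at most 8 bytes")

# A hypothetical frame; real arbitration IDs and payloads are model-specific.
frame = CanFrame(arbitration_id=0x244, data=bytes([0x00, 0x5A]))
print(f"ID=0x{frame.arbitration_id:03X} data={frame.data.hex()}")  # ID=0x244 data=005a
```

Because every node on the bus sees every frame and trusts the ID alone, “pivoting” to the CAN bus means an attacker’s messages are indistinguishable from legitimate ECU traffic.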
Some critics, including Toyota and Ford, argued at the time that a wired-in attack wasn’t exactly a full-blown hack. But Miller and Valasek have been working since then to prove that the same tricks can be pulled off wirelessly. In a talk at Black Hat last year, they published an analysis of 24 automobiles, rating which presented the most potential vulnerabilities to a hacker based on wireless attack points, network architecture and computerized control of key physical features. In that analysis, the Jeep Cherokee, Infiniti Q50 and Cadillac Escalade were rated as the most hackable vehicles they tested. The overall digital security of a car “depends on the architecture,” Valasek, director of vehicle security research at security firm IOActive, told WIRED last year. “If you hack the radio, can you send messages to the brakes or the steering? And if you can, what can you do with them?”
Ref: Researchers Plan to Demonstrate a Wireless Car Hack This Summer – Wired
The AI community has begun to take the downside risk of AI very seriously. I attended a Future of AI workshop in January 2015 in Puerto Rico sponsored by the Future of Life Institute. The ethical consequences of AI were front and center. The community is focusing research on four key thrusts to get better outcomes from future AIs:
Verification – Research into methods of guaranteeing that the systems we build actually meet the specifications we set.
Validation – Research into ensuring that the specifications, even if met, do not result in unwanted behaviors and consequences.
Security – Research on building systems that are increasingly difficult to tamper with – internally or externally.
Control – Research to ensure that we can interrupt AI systems (even with other AIs) if and when something goes wrong, and get them back on track.
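At its smallest scale, the “Control” thrust above is an ordinary engineering pattern: build the interrupt switch into the system from the start, rather than bolting it on later. A toy sketch (every name here is hypothetical) of a work loop that an external supervisor can stop at any step:

```python
import threading

class InterruptibleAgent:
    """Toy work loop with an external stop switch, illustrating the
    'Control' thrust: an operator (or another supervising system) can
    interrupt the agent at any point instead of waiting for it to
    finish on its own."""

    def __init__(self):
        self._stop = threading.Event()
        self.steps_completed = 0

    def run(self, max_steps=10_000_000):
        for _ in range(max_steps):
            if self._stop.is_set():     # honor the interrupt on every step
                return "interrupted"
            self.steps_completed += 1   # stand-in for real work
        return "finished"

    def interrupt(self):
        self._stop.set()

agent = InterruptibleAgent()
threading.Timer(0.05, agent.interrupt).start()  # an external "supervisor"
result = agent.run()
print(result)
```

The hard research questions are of course not about threading primitives but about systems capable enough to route around their own off-switch; the sketch only shows the design stance of making interruption a first-class feature.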
These aren’t just philosophical or ethical considerations; they are system design issues. I think we’ll see a greater focus on these kinds of issues not just in AI but in software generally, as we develop systems with more power and complexity.
Will AIs ever be completely risk free? I don’t think so. Humans are not risk free! There is a predator/prey aspect to this in terms of malicious groups who choose to develop these technologies in harmful ways. However, the vast majority of people, including researchers and developers in AI, are not malicious. Most of the world’s intellect and energy will be spent on building society up, not tearing it down. In spite of this, we need to do a better job anticipating the potential consequences of our technologies, and being proactive about creating the outcomes that improve human health and the environment. That is a particular challenge with AI technology that can improve itself. Meeting this challenge will make it much more likely that we can succeed in reaching for the stars.
Ref: Interview: Neil Jacobstein Discusses Future of Jobs, Universal Basic Income and the Ethical Dangers of AI – SingularityHub