The Jeep’s strange behavior wasn’t entirely unexpected. I’d come to St. Louis to be Miller and Valasek’s digital crash-test dummy, a willing subject on whom they could test the car-hacking research they’d been doing over the past year. The result of their work was a hacking technique—what the security industry calls a zero-day exploit—that can target Jeep Cherokees and give the attacker wireless control, via the Internet, over any of thousands of vehicles. Their code is an automaker’s nightmare: software that lets hackers send commands through the Jeep’s entertainment system to its dashboard functions, steering, brakes, and transmission, all from a laptop that may be across the country.
Immediately my accelerator stopped working. As I frantically pressed the pedal and watched the RPMs climb, the Jeep lost half its speed, then slowed to a crawl. This occurred just as I reached a long overpass, with no shoulder to offer an escape. The experiment had ceased to be fun.
At that point, the interstate began to slope upward, so the Jeep lost more momentum and barely crept forward. Cars lined up behind my bumper before passing me, honking. I could see an 18-wheeler approaching in my rearview mirror. I hoped its driver saw me, too, and could tell I was paralyzed on the highway.
All of this is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone. Uconnect, an Internet-connected computer feature in hundreds of thousands of Fiat Chrysler cars, SUVs, and trucks, controls the vehicle’s entertainment and navigation, enables phone calls, and even offers a Wi-Fi hot spot. And thanks to one vulnerable element, which Miller and Valasek won’t identify until their Black Hat talk, Uconnect’s cellular connection also lets anyone who knows the car’s IP address gain access from anywhere in the country. “From an attacker’s perspective, it’s a super nice vulnerability,” Miller says.
Ref: Hackers Remotely Kill a Jeep on the Highway—With Me in It – Wired
“Mcity,” which officially opened Monday, is a 32-acre faux metropolis designed specifically to test automated and connected vehicle tech. It’s got several miles of two-, three-, and four-lane roads, complete with intersections, traffic signals, and signs. Benches and streetlights line the sidewalks separating building facades from the streets. It’s like an elaborate Hollywood set.
This is about more than safety, too. Mcity allows engineers to test a wide range of conditions that aren’t easily created in the wild. They can test vehicles on different surfaces (like brick, dirt, and grass) and see how their systems handle roundabouts and underpasses. They can erect construction barriers, spray graffiti on road signs, and work with faded lane lines, to see how autonomous tech reacts to real-world conditions.
Such a site is a great tool, but the technology must also prove itself on public roads. A simulated environment has a fundamental limitation: you can only test the situations you think up. Experience (and dash cams) has taught us that our roads can be crazy in ways we never think to expect. Sinkholes can appear in the road, tsunamis can rage across the land, roadside buildings can collapse and send debris flying. Humans can be even harder to anticipate, and even everyday actions, the things we do almost subconsciously, can confound an autonomous system.
Ref: Inside the Fake Town Built Just for Self-Driving Cars – Wired
The industry is promising a glittering future of autonomous vehicles moving in harmony like schools of fish. That can’t happen, however, until carmakers answer the kinds of thorny philosophical questions explored in science fiction since Isaac Asimov wrote his robot series last century. For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?
Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of gray. Ford, General Motors, Audi, Renault, and Toyota are all beating a path to Stanford University’s Center for Automotive Research, which is programming cars to make ethical decisions to see what happens.
“This issue is definitely in the crosshairs,” says Chris Gerdes, who runs the lab and recently met with the chief executives of Ford and GM to discuss the topic. “They’re very aware of the issues and the challenges because their programmers are actively trying to make these decisions today.”
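One way programmers can encode such tradeoffs is to score each candidate maneuver with weighted costs for the outcomes it entails and pick the cheapest one. The sketch below is a hypothetical illustration of that idea, not Stanford’s or any automaker’s actual system; the maneuvers, outcomes, and weights are all invented for this example.

```python
# Hypothetical illustration: scoring candidate maneuvers by weighted cost.
# The outcome names and weights below are invented; real systems embed
# such priorities inside trajectory planners, not simple lookup tables.

COSTS = {
    "hit_obstacle": 100.0,
    "leave_lane": 10.0,
    "hard_brake": 1.0,
}

def maneuver_cost(outcomes):
    # Sum the weighted costs of every outcome a maneuver would entail.
    return sum(COSTS[o] for o in outcomes)

def choose_maneuver(candidates):
    # Pick the candidate maneuver with the lowest total cost.
    return min(candidates, key=lambda name: maneuver_cost(candidates[name]))

candidates = {
    "stay_in_lane": ["hit_obstacle"],          # cost 100.0
    "swerve_left": ["leave_lane", "hard_brake"],  # cost 11.0
    "brake_only": ["hard_brake", "hit_obstacle"],  # cost 101.0
}
print(choose_maneuver(candidates))  # prints "swerve_left"
```

The hard question raised in the article is precisely who chooses those weights, and whether reducing life-and-death outcomes to a cost table is acceptable at all.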
That’s why we shouldn’t leave those decisions up to robots, says Wendell Wallach, author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.”
“The way forward is to create an absolute principle that machines do not make life and death decisions,” says Wallach, a scholar at the Interdisciplinary Center for Bioethics at Yale University. “There has to be a human in the loop. You end up with a pretty lawless society if people think they won’t be held responsible for the actions they take.”
Ref: Should a Driverless Car Decide Who Lives or Dies? – Bloomberg
The image features a hybrid panoply of squirrels, slugs, dogs, and tiny horse legs, as well as fractal sequences of houses, cars, and streets—and a lot of eyes. Convolutional neural networks are commonly trained for recognition tasks such as facial recognition: once trained, a CNN maps an input image to a feature vector, and similar images in a database can be retrieved by comparing those vectors.
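The matching step described above amounts to nearest-neighbor search over feature vectors. Here is a minimal sketch of that idea using cosine similarity; the vectors are random stand-ins, since in practice the embeddings would come from a layer of a trained CNN.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors (1.0 = same direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query, database):
    # Index of the database vector closest to the query.
    scores = [cosine_similarity(query, v) for v in database]
    return int(np.argmax(scores))

# Stand-in "embeddings": real ones would be produced by a CNN,
# e.g. activations from its penultimate layer, not random numbers.
rng = np.random.default_rng(0)
database = [rng.standard_normal(128) for _ in range(5)]

# A query nearly identical to database entry 3 (entry 3 plus tiny noise).
query = database[3] + 0.01 * rng.standard_normal(128)
print(most_similar(query, database))  # prints 3
```

Independent random high-dimensional vectors have near-zero cosine similarity, so the lightly perturbed copy of entry 3 wins by a wide margin; this is the same property that lets a recognition network match a new photo against a database of known images.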
Since being released, the image has been met with skepticism on Reddit. Users are weighing in with polarized comments; some are convinced that the image is simply an elaborate hoax by a (human) visual artist. Others argue that the multiplicity of eyes and the patterns—robotically logical but visually discordant structures—are typical of an algorithm making sense of a command, and they support their arguments by citing CNN image-classification papers that contain similar visual examples.
Ref: Was This Psychedelic Image Made by Man or Machine? – Creators Project