Inside the Fake Town Built Just for Self-Driving Cars

“Mcity,” which officially opened Monday, is a 32-acre faux metropolis designed specifically to test automated and connected vehicle tech. It’s got several miles of two-, three-, and four-lane roads, complete with intersections, traffic signals, and signs. Benches and streetlights line the sidewalks separating building facades from the streets. It’s like an elaborate Hollywood set.

[…]

This is about more than safety, too. Mcity allows engineers to test a wide range of conditions that aren’t easily created in the wild. They can test vehicles on different surfaces (like brick, dirt, and grass) and see how their systems handle roundabouts and underpasses. They can erect construction barriers, spray graffiti on road signs, and work with faded lane lines, to see how autonomous tech reacts to real-world conditions.

[…]

Such a site is a great tool, but the technology must also prove itself on public roads. A simulated environment has a fundamental limitation: You can only test situations you think up. Experience—and dash cams—have taught us our roads can be crazy in ways we never think to expect. Sinkholes can appear in the road, tsunamis can rage across the land, roadside buildings can collapse and send debris flying. Humans can be even harder to anticipate. Even everyday actions, the things we do almost subconsciously, can be hard to predict.

Ref: Inside the Fake Town Built Just for Self-Driving Cars – Wired

Should a Driverless Car Decide Who Lives or Dies?

The industry is promising a glittering future of autonomous vehicles moving in harmony like schools of fish. That can’t happen, however, until carmakers answer the kinds of thorny philosophical questions explored in science fiction since Isaac Asimov wrote his robot series last century. For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?

Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of gray. Ford, General Motors, Audi, Renault and Toyota are all beating a path to Stanford University’s Center for Automotive Research, which is programming cars to make ethical decisions and see what happens.

“This issue is definitely in the crosshairs,” says Chris Gerdes, who runs the lab and recently met with the chief executives of Ford and GM to discuss the topic. “They’re very aware of the issues and the challenges because their programmers are actively trying to make these decisions today.”

[…]

That’s why we shouldn’t leave those decisions up to robots, says Wendell Wallach, author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.”

“The way forward is to create an absolute principle that machines do not make life and death decisions,” says Wallach, a scholar at the Interdisciplinary Center for Bioethics at Yale University. “There has to be a human in the loop. You end up with a pretty lawless society if people think they won’t be held responsible for the actions they take.”

Ref: Should a Driverless Car Decide Who Lives or Dies? – Bloomberg

Was This Psychedelic Image Made by Man or Machine?

The image features a hybrid panoply of squirrels, slugs, dogs and tiny horse legs as well as fractal sequences of houses, cars, and streets—and a lot of eyes. Currently, convolutional neural networks are trained primarily for facial recognition; once trained to a sufficient degree, a CNN can match an input image against similar images in a database.
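
As a rough illustration of the "match up similar images" idea (and only that general idea; the article gives no details of the system behind this particular image), here is a minimal Python sketch that uses a pretrained CNN as a feature extractor and ranks database images by cosine similarity. The file names are placeholders.

```python
# A hedged sketch: use a pretrained CNN's penultimate-layer features to find
# the most similar images in a small "database". File names are placeholders;
# this is the generic similarity-search idea, not the article's system.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()          # drop the classifier, keep the features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(path):
    """Return a unit-length feature vector for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return F.normalize(model(img), dim=1)

query = embed("query.jpg")                                   # placeholder paths
database = torch.cat([embed(p) for p in ["a.jpg", "b.jpg", "c.jpg"]])
scores = (database @ query.T).squeeze(1)                     # cosine similarity
print(scores.argsort(descending=True))                       # most similar first
```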

Since its release, the image has been met with skepticism on Reddit. Users are weighing in with polarized comments: some are convinced that the image is simply an elaborate hoax by a visual (human) artist. Others argue that the multiplied eyes and the robotically logical but visually discordant patterns are typical of an algorithm making sense of a command, and they supplement their arguments with CNN image-classification papers containing similar visual examples.

Ref: Was This Psychedelic Image Made by Man or Machine? – Creators Project

Natural Police

To defeat corruption, we need to understand why it arises in the first place. For that, we need game theory. A ‘game’ is a stylised scenario in which each player receives a pay‑off determined by the strategies chosen by all players. There’s also a variant of game theory that deals with so-called evolutionary games. In that kind of scenario, we imagine a population of self-reproducing strategies that get to multiply depending on the pay‑offs they achieve. A strategy is said to be ‘evolutionarily stable’ if, once it is widely adopted, no rival can spread by natural selection.

The archetypal co‑operation game is the Prisoner’s Dilemma. Imagine that two prisoners, each held in isolation, are given a chance to rat on the other. If only one takes the bait, he gets a reduced prison sentence while the other gets a longer one. But if both take it, neither gets a reduction. In other words, mutual co‑operation (saying nothing) provides a higher reward than mutual defection (ratting on your partner), but the best reward comes from defecting while your partner tries to co‑operate with you, while the lowest pay‑off comes from trying to co‑operate with your partner while he stabs you in the back.

The most obvious evolutionarily stable strategy in this game is simple: always defect. If your partner co‑operates, you exploit his naïveté, and if he defects, you will still do better than if you had co‑operated. So there is no possible strategy that can defeat the principle ‘always act like an untrusting jerk’.
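
To make the pay-off structure and the "always defect" argument concrete, here is a minimal Python sketch; the specific numbers are the conventional textbook values, not figures from the article.

```python
# payoff[(my_move, their_move)] = my pay-off in a one-shot Prisoner's Dilemma.
C, D = "cooperate", "defect"
payoff = {
    (C, C): 3,   # mutual co-operation: the reward
    (D, D): 1,   # mutual defection: the punishment
    (D, C): 5,   # defect against a co-operator: the temptation (best outcome)
    (C, D): 0,   # co-operate against a defector: the sucker's pay-off (worst)
}

for their_move in (C, D):
    best = max((C, D), key=lambda my_move: payoff[(my_move, their_move)])
    print(f"against {their_move}: best reply is {best}")
# Defection is the best reply to both choices, so in the one-shot game no rival
# strategy can invade a population of defectors.
```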

At this point, you could be forgiven for thinking that game theory is both appalling and ridiculous. Co‑operation clearly pays off. Indeed, if you make normal people (ie people who are not economics students) play the Prisoner’s Dilemma, they almost never defect. And not just people. Rats will go out of their way to free a trapped cage-mate; rhesus monkeys will starve for days rather than shock a companion. Even bacteria are capable of supreme acts of altruism.

This trend toward biological niceness has been something of an embarrassment for biology. In fact, the task of finding ways around the more dismal conclusions of game theory has become a sub-disciplinary cottage industry. In the Prisoner’s Dilemma, for example, it turns out that when players are allowed to form relationships, co‑operators can beat defectors simply by avoiding them. That’s fine in small societies, but it leaves us with the problem of co‑operation in large groups, where interactions among strangers are inevitable.

Game theory (as well as common sense) tells us that policing can help. Just grant some individuals the power and inclination to punish defectors and the attractions of cheating immediately look less compelling. This is a good first pass at a solution: not for nothing do we find police-like entities among ants, bees, wasps, and within our own bodies. But that just leads us back to the problem of corruption. What happens if the police themselves become criminals, using their unusual powers for private profit? Who watches the watchers?
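
Continuing the toy sketch above, adding an expected fine for defection shows why policing changes the calculation; the fine and detection probability here are invented for illustration.

```python
# Toy extension: defectors are caught and fined with some probability.
# Fine and detection probability are made-up illustrative values.
C, D = "cooperate", "defect"
base = {(C, C): 3, (D, D): 1, (D, C): 5, (C, D): 0}

def payoff(my_move, their_move, fine=6, p_caught=0.8):
    expected_fine = fine * p_caught if my_move == D else 0
    return base[(my_move, their_move)] - expected_fine

for their_move in (C, D):
    best = max((C, D), key=lambda m: payoff(m, their_move))
    print(f"against {their_move}: best reply is {best}")
# With a large enough expected fine, co-operation becomes the best reply --
# which is exactly why the next question is who punishes a corrupt punisher.
```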

In 2010, two researchers at the University of Tennessee built a game-theoretical model to examine just this problem. The results, published by Francisco Úbeda and Edgar Duéñez-Guzmán in a paper called ‘Power and Corruption’, were, frankly, depressing. Nothing, they concluded, would stop corruption from dominating an evolving police system. Once it arose, it would remain stable under almost any circumstances. The only silver lining was that the bad police could still suppress defection in the rest of society. The result was a mixed population of gullible sheep and hypocritical overlords. Net wellbeing does end up somewhat higher than it would be if everyone acted entirely selfishly, but all in all you end up with a society rather like that of the tree wasps.

Ref: Natural police – Aeon

This Artificial Intelligence Pioneer Has a Few Concerns

During my thesis research in the ’80s, I started thinking about rational decision-making and the problem that it’s actually impossible. If you were rational you would think: Here’s my current state, here are the actions I could do right now, and after that I can do those actions and then those actions and then those actions; which path is guaranteed to lead to my goal? The definition of rational behavior requires you to optimize over the entire future of the universe. It’s just completely infeasible computationally.

It didn’t make much sense that we should define what we’re trying to do in AI as something that’s impossible, so I tried to figure out: How do we really make decisions?

So, how do we do it?

One trick is to think about a short horizon and then guess what the rest of the future is going to look like. So chess programs, for example—if they were rational they would only play moves that guarantee checkmate, but they don’t do that. Instead they look ahead a dozen moves into the future and make a guess about how useful those states are, and then they choose a move that they hope leads to one of the good states.
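
The procedure described here, exact search to a short horizon followed by a heuristic guess, can be sketched in a few lines. The toy below uses a simple token-taking game instead of chess; the structure of the search, not the game, is the point.

```python
# A minimal sketch of depth-limited lookahead with a heuristic evaluation.
# The game is a toy (take 1-3 tokens, last to take wins), standing in for chess.
import math

def moves(tokens):
    """Legal moves: remove 1, 2, or 3 tokens from the pile."""
    return [n for n in (1, 2, 3) if n <= tokens]

def evaluate(tokens):
    """Heuristic guess for the side to move (piles divisible by 4 are bad),
    playing the role a chess engine's evaluation function plays."""
    return -1.0 if tokens % 4 == 0 else 1.0

def search(tokens, depth):
    """Negamax with a fixed horizon: exact play for `depth` plies,
    then a heuristic guess about how good the resulting state is."""
    if tokens == 0:
        return -1.0, None                 # opponent took the last token and won
    if depth == 0:
        return evaluate(tokens), None     # horizon reached: guess, don't search
    best_value, best_move = -math.inf, None
    for m in moves(tokens):
        value = -search(tokens - m, depth - 1)[0]   # their gain is our loss
        if value > best_value:
            best_value, best_move = value, m
    return best_value, best_move

print(search(17, depth=4))   # shallow lookahead plus a guess, as chess programs do
```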

Another thing that’s really essential is to think about the decision problem at multiple levels of abstraction, so “hierarchical decision making.” A person does roughly 20 trillion physical actions in their lifetime. Coming to this conference to give a talk works out to 1.3 billion or something. If you were rational you’d be trying to look ahead 1.3 billion steps—completely, absurdly impossible. So the way humans manage this is by having this very rich store of abstract, high-level actions. You don’t think, “First I can either move my left foot or my right foot, and then after that I can either…” You think, “I’ll go on Expedia and book a flight. When I land, I’ll take a taxi.” And that’s it. I don’t think about it anymore until I actually get off the plane at the airport and look for the sign that says “taxi”—then I get down into more detail. This is how we live our lives, basically. The future is spread out, with a lot of detail very close to us in time, but these big chunks where we’ve made commitments to very abstract actions, like, “get a Ph.D.,” “have children.”
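
That "rich store of abstract, high-level actions" can be illustrated with a toy planner that refines an abstract action into finer steps only when it is about to act on it; the action names and the refinement table below are made up.

```python
# A toy illustration of hierarchical decision making: abstract actions are
# refined into sub-steps only when they come up for execution, so the agent
# never reasons over billions of primitive actions at once. The actions and
# the refinement table are invented for illustration.
REFINEMENTS = {
    "attend conference": ["book flight on Expedia", "fly to venue", "give talk"],
    "fly to venue":      ["get to airport", "board plane", "land", "take taxi"],
    "get to airport":    ["pack bag", "call ride", "ride to airport"],
    # actions with no entry are primitive and are executed directly
}

def execute(action, depth=0):
    """Lazily expand an abstract action just before acting on it."""
    steps = REFINEMENTS.get(action)
    indent = "  " * depth
    if steps is None:
        print(f"{indent}do: {action}")        # primitive action: just do it
        return
    print(f"{indent}plan: {action}")
    for step in steps:
        execute(step, depth + 1)              # detail appears only when needed

execute("attend conference")
```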

What about differences in human values?

That’s an intrinsic problem. You could say machines should err on the side of doing nothing in areas where there’s a conflict of values. That might be difficult. I think we will have to build in these value functions. If you want to have a domestic robot in your house, it has to share a pretty good cross-section of human values; otherwise it’s going to do pretty stupid things, like put the cat in the oven for dinner because there’s no food in the fridge and the kids are hungry. Real life is full of these tradeoffs. If the machine makes these tradeoffs in ways that reveal that it just doesn’t get it—that it’s just missing some chunk of what’s obvious to humans—then you’re not going to want that thing in your house.

I don’t see any real way around the fact that there’s going to be, in some sense, a values industry. And I also think there’s a huge economic incentive to get it right. It only takes one or two things like a domestic robot putting the cat in the oven for dinner for people to lose confidence and not buy them.

You’ve argued that we need to be able to mathematically verify the behavior of AI under all possible circumstances. How would that work?

One of the difficulties people point to is that a system can arbitrarily produce a new version of itself that has different goals. That’s one of the scenarios that science fiction writers always talk about; somehow, the machine spontaneously gets this goal of defeating the human race. So the question is: Could you prove that your systems can’t ever, no matter how smart they are, overwrite their original goals as set by the humans?

It would be relatively easy to prove that the DQN system, as it’s written, could never change its goal of optimizing that score. Now, there is a hack that people talk about called “wire-heading” where you could actually go into the console of the Atari game and physically change the thing that produces the score on the screen. At the moment that’s not feasible for DQN, because its scope of action is entirely within the game itself; it doesn’t have a robot arm. But that’s a serious problem if the machine has a scope of action in the real world. So, could you prove that your system is designed in such a way that it could never change the mechanism by which the score is presented to it, even though it’s within its scope of action? That’s a more difficult proof.

Are there any advances in this direction that you think hold promise?

There’s an area emerging called “cyber-physical systems” about systems that couple computers to the real world. With a cyber-physical system, you’ve got a bunch of bits representing an air traffic control program, and then you’ve got some real airplanes, and what you care about is that no airplanes collide. You’re trying to prove a theorem about the combination of the bits and the physical world. What you would do is write a very conservative mathematical description of the physical world—airplanes can accelerate within such-and-such envelope—and your theorems would still be true in the real world as long as the real world is somewhere inside the envelope of behaviors.
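
A hedged sketch of that style of reasoning: if each aircraft's acceleration is assumed to stay inside a stated envelope, a simple worst-case bound can certify that two aircraft cannot lose separation over a short horizon. The numbers and the bound below are illustrative, not an actual air-traffic verification tool.

```python
# Conservative "envelope" check: assuming each aircraft's acceleration stays
# within a_max, lower-bound the distance between two aircraft over a horizon.
# If the bound exceeds the required separation, no loss of separation is
# possible within that horizon -- provided the real world stays inside the
# assumed envelope. All numbers here are illustrative.
import math

def min_separation_lower_bound(rel_pos, rel_vel, a_max_each, horizon):
    """Worst-case lower bound on |relative position| over [0, horizon].

    rel_pos, rel_vel: relative position (m) and velocity (m/s) as (x, y) pairs.
    a_max_each: per-aircraft acceleration envelope (m/s^2), so the relative
    acceleration is bounded by 2 * a_max_each.
    Triangle inequality: |p(t)| >= |p(0)| - |v(0)|*t - 0.5*(2*a_max)*t^2,
    and the right-hand side is smallest at t = horizon.
    """
    p = math.hypot(*rel_pos)
    v = math.hypot(*rel_vel)
    return p - v * horizon - a_max_each * horizon ** 2

# Example: 20 km apart, closing at 250 m/s, each limited to 5 m/s^2, 20 s horizon.
bound = min_separation_lower_bound((20_000.0, 0.0), (-250.0, 0.0), 5.0, 20.0)
required = 9_260.0   # roughly 5 nautical miles, a common en-route minimum
print(f"worst case >= {bound:.0f} m; separation guaranteed: {bound > required}")
```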

Ref: This Artificial Intelligence Pioneer Has a Few Concerns – Wired

Google’s Plan to Eliminate Human Driving in 5 Years

There are three significant downsides to Google’s approach. First, the goal of delivering a car that only drives itself raises the difficulty bar. There’s no human backup, so the car had better be able to handle every situation it encounters. That’s what Google calls “the .001 percent of things that we need to be prepared for even if we’ve never seen them before in our real world driving.” And if dash cam videos teach us anything, it’s that our roads are crazy places. People jump onto highways. Cows fall out of trucks. Tsunamis strike and buildings explode.

The automakers have to deal with those same edge cases, and the human may not be of much help in a split-second situation. But the timeline is different: Automakers acknowledge this problem, and they're moving slowly and carefully. Google plans to have everything figured out in just a few years, which makes the challenge that much harder to overcome.

[…]

The deadly crash of Asiana Airlines Flight 214 at San Francisco International Airport in July 2013 highlights a lesson from the aviation industry. The airport’s glide slope indicator, which helps line up the plane for landing, wasn’t functioning, so the pilots were told to use visual approaches. The crew was experienced and skilled, but rarely flew the Boeing 777 manually, Bloomberg reported. The plane came in far too low and slow, hitting the seawall that separates the airport from the bay. The pilots “mismanaged the airplane’s descent,” the National Transportation Safety Board found.

Asiana, in turn, blamed badly designed software. “There were inconsistencies in the aircraft’s automation logic that led to the unexpected disabling of airspeed protection without adequate warning to the flight crew,” it said in a filing to the NTSB. “The low airspeed alerting system did not provide adequate time for recovery; and air traffic control instructions and procedures led to an excessive pilot workload during the final approach.”

Ref: Google’s Plan to Eliminate Human Driving in 5 Years – Wired

Variable World: Bay Model Tour & Salon

What's also tricky here is the scale: finding a way to accurately grasp the forces going on within the model, and outside it of course. It's easy to prototype and model something at 1:1, but this one is 1:1000, at least in the horizontal plane. So when you think about granularity within computer models that simulate climate, especially in the Bay Area, there are micro-climates everywhere that are often collapsed or ignored simplistically. How do you create a realistic model that represents, and can project, what's actually happening? It's challenging to integrate that into the computer model. The digital model only has a certain resolution; you can throw situations at it and sometimes get the expected results that verify your projections, and maybe the knock-on effects seem right, but really how it works in nature is often surprisingly divergent from the model. The state-of-the-art climate model is really just the one that best conforms to a limited set of validation data. And based on these models we make determinations of risk for insurance purposes; they influence policy making and ultimately come back to us as decision-making tools. I am fascinated by the scale of decision making we base on models and the latent potential for contingency.

It also seems that variability is an important part in how authority gets built up in the model. I’m remembering the moment when we were standing there and the guide said, “This is a perfect world, it doesn’t change.” She emphasized that a few times.

Ref: Variable World: Bay Model Tour & Salon – AVANT

Feds Say That Banned Researcher Commandeered a Plane

Chris Roberts, a security researcher with One World Labs, told the FBI agent during an interview in February that he had hacked the in-flight entertainment system, or IFE, on an airplane and overwritten code on the plane’s Thrust Management Computer while aboard the flight. He was able to issue a climb command and make the plane briefly change course, the document states.

“He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights,” FBI Special Agent Mark Hurley wrote in his warrant application (.pdf). “He also stated that he used Vortex software after comprising/exploiting or ‘hacking’ the airplane’s networks. He used the software to monitor traffic from the cockpit system.”

[…]

He obtained physical access to the networks through the Seat Electronic Box, or SEB. These are installed two to a row, on each side of the aisle under passenger seats, on certain planes. After removing the cover to the SEB by “wiggling and Squeezing the box,” Roberts told agents he attached a Cat6 ethernet cable, with a modified connector, to the box and to his laptop and then used default IDs and passwords to gain access to the in-flight entertainment system. Once on that network, he was able to gain access to other systems on the planes.

Ref: Feds Say That Banned Researcher Commandeered a Plane – Wired