Natural Police

To defeat corruption, we need to understand why it arises in the first place. For that, we need game theory. A ‘game’ is a stylised scenario in which each player receives a pay‑off determined by the strategies chosen by all players. There’s also a variant of game theory that deals with so-called evolutionary games. In that kind of scenario, we imagine a population of self-reproducing strategies that get to multiply depending on the pay‑offs they achieve. A strategy is said to be ‘evolutionarily stable’ if, once it is widely adopted, no rival can spread by natural selection.

The archetypal co‑operation game is the Prisoner’s Dilemma. Imagine that two prisoners, each held in isolation, are given a chance to rat on the other. If only one takes the bait, he gets a reduced prison sentence while the other gets a longer one. But if both take it, neither gets a reduction. In other words, mutual co‑operation (saying nothing) provides a higher reward than mutual defection (ratting on your partner), but the best reward comes from defecting while your partner tries to co‑operate with you, while the lowest pay‑off comes from trying to co‑operate with your partner while he stabs you in the back.
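The pay-off structure is easier to see with numbers attached. A minimal sketch, using conventional illustrative values (the article fixes only their ordering, not the numbers themselves):

```python
# Prisoner's Dilemma pay-offs for the row player, with the standard ordering
# T (temptation) > R (reward) > P (punishment) > S (sucker's pay-off).
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # R: both stay silent
    ("cooperate", "defect"):    0,  # S: you stay silent, your partner rats
    ("defect",    "cooperate"): 5,  # T: you rat, your partner stays silent
    ("defect",    "defect"):    1,  # P: both rat
}

for mine in ("cooperate", "defect"):
    for theirs in ("cooperate", "defect"):
        print(f"I {mine}, partner {theirs}s: I score {PAYOFF[(mine, theirs)]}")
```

Whichever move your partner picks, defecting scores higher (5 > 3 against a co-operator, 1 > 0 against a defector), which is exactly the trap the next paragraph spells out.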

The most obvious evolutionarily stable strategy in this game is simple: always defect. If your partner co‑operates, you exploit his naïveté, and if he defects, you will still do better than if you had co‑operated. So there is no possible strategy that can defeat the principle ‘always act like an untrusting jerk’.
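To see that stability in action, here is a hedged replicator-dynamics sketch of a well-mixed population; the update rule is the textbook replicator equation, not anything from the article:

```python
# Replicator dynamics: strategies multiply in proportion to how much their
# pay-off beats the population average. x is the co-operator share.
R, S, T, P = 3, 0, 5, 1  # same illustrative pay-offs as above

def step(x, dt=0.01):
    f_coop = R * x + S * (1 - x)      # expected pay-off of a co-operator
    f_defect = T * x + P * (1 - x)    # expected pay-off of a defector
    f_mean = x * f_coop + (1 - x) * f_defect
    return x + dt * x * (f_coop - f_mean)  # dx/dt = x * (f_coop - f_mean)

x = 0.99  # start with 99 per cent co-operators
for _ in range(5000):
    x = step(x)
print(f"co-operator share after 5,000 steps: {x:.6f}")  # effectively zero
```

Even from a 99 per cent co-operative start the defectors take over, and once they have, no co-operative mutant can claw its way back.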

At this point, you could be forgiven for thinking that game theory is both appalling and ridiculous. Co‑operation clearly pays off. Indeed, if you make normal people (ie people who are not economics students) play the Prisoner’s Dilemma, they almost never defect. And not just people. Rats will go out of their way to free a trapped cage-mate; rhesus monkeys will starve for days rather than shock a companion. Even bacteria are capable of supreme acts of altruism.

This trend toward biological niceness has been something of an embarrassment for biology. In fact, the task of finding ways around the more dismal conclusions of game theory has become a sub-disciplinary cottage industry. In the Prisoner’s Dilemma, for example, it turns out that when players are allowed to form relationships, co‑operators can beat defectors simply by avoiding them. That’s fine in small societies, but it leaves us with the problem of co‑operation in large groups, where interactions among strangers are inevitable.
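A toy rendering of that escape route, with every detail (the random pairing scheme, the "refuse a rematch with anyone who burned you" rule) invented for illustration:

```python
import random

# Toy partner choice: players remember who defected on them and refuse
# rematches, so defectors gradually run out of victims.
R, S, T, P = 3, 0, 5, 1
PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def tournament(n_coop=50, n_defect=50, rounds=20_000, seed=1):
    rng = random.Random(seed)
    strategy = ["C"] * n_coop + ["D"] * n_defect
    score = [0] * len(strategy)
    avoids = [set() for _ in strategy]  # players each of us refuses to replay
    for _ in range(rounds):
        i, j = rng.sample(range(len(strategy)), 2)
        if j in avoids[i] or i in avoids[j]:
            continue  # a burned player walks away: no game, no pay-off
        si, sj = PAYOFF[(strategy[i], strategy[j])]
        score[i] += si
        score[j] += sj
        if strategy[j] == "D": avoids[i].add(j)
        if strategy[i] == "D": avoids[j].add(i)
    per = lambda kind: sum(s for s, k in zip(score, strategy) if k == kind)
    return per("C") / n_coop, per("D") / n_defect

coop_avg, defect_avg = tournament()
print(f"average score: co-operators {coop_avg:.0f}, defectors {defect_avg:.0f}")
```

With memory and repeated play, co-operators keep earning from one another while each defector gets one exploitation per victim and then nothing, so the co-operators come out ahead.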

Game theory (as well as common sense) tells us that policing can help. Just grant some individuals the power and inclination to punish defectors and the attractions of cheating immediately look less compelling. This is a good first pass at a solution: not for nothing do we find police-like entities among ants, bees, wasps, and within our own bodies. But that just leads us back to the problem of corruption. What happens if the police themselves become criminals, using their unusual powers for private profit? Who watches the watchers?

In 2010, two researchers at the University of Tennessee built a game-theoretical model to examine just this problem. The results, published by Francisco Úbeda and Edgar Duéñez-Guzmán in a paper called ‘Power and Corruption’, were, frankly, depressing. Nothing, they concluded, would stop corruption from dominating an evolving police system. Once it arose, it would remain stable under almost any circumstances. The only silver lining was that the bad police could still suppress defection in the rest of society. The result was a mixed population of gullible sheep and hypocritical overlords. Net wellbeing ends up somewhat higher than it would be if everyone acted entirely selfishly, but all in all you get a society rather like that of the tree wasps.

Ref: Natural police – Aeon

This Artificial Intelligence Pioneer Has a Few Concerns

During my thesis research in the ’80s, I started thinking about rational decision-making and the problem that it’s actually impossible. If you were rational you would think: Here’s my current state, here are the actions I could do right now, and after that I can do those actions and then those actions and then those actions; which path is guaranteed to lead to my goal? The definition of rational behavior requires you to optimize over the entire future of the universe. It’s just completely infeasible computationally.

It didn’t make much sense that we should define what we’re trying to do in AI as something that’s impossible, so I tried to figure out: How do we really make decisions?

So, how do we do it?

One trick is to think about a short horizon and then guess what the rest of the future is going to look like. So chess programs, for example—if they were rational they would only play moves that guarantee checkmate, but they don’t do that. Instead they look ahead a dozen moves into the future and make a guess about how useful those states are, and then they choose a move that they hope leads to one of the good states.
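What he is describing is depth-limited search with a heuristic guess at the frontier. A minimal sketch on a toy take-away game (the game and its evaluation function are my stand-ins; a chess engine does the same thing at enormous scale):

```python
# Depth-limited negamax on a toy game: players alternately take 1-3 stones
# and whoever takes the last stone wins. Instead of searching to the end of
# the game, we stop at a fixed depth and *guess* the value of the frontier.

def evaluate(stones):
    """Heuristic guess for an unfinished position, from the mover's view."""
    return -1 if stones % 4 == 0 else 1  # multiples of 4 are losing here

def negamax(stones, depth):
    """Best achievable value for the player about to move."""
    if stones == 0:
        return -1                 # the other player took the last stone
    if depth == 0:
        return evaluate(stones)   # horizon reached: guess rather than search
    return max(-negamax(stones - take, depth - 1)
               for take in (1, 2, 3) if take <= stones)

def choose_move(stones, depth=6):
    """Pick the move whose lookahead value is best within the horizon."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -negamax(stones - take, depth - 1))

print(choose_move(10))  # -> 2, leaving the opponent a losing multiple of 4
```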

Another thing that’s really essential is to think about the decision problem at multiple levels of abstraction, so “hierarchical decision making.” A person does roughly 20 trillion physical actions in their lifetime. Coming to this conference to give a talk works out to 1.3 billion or something. If you were rational you’d be trying to look ahead 1.3 billion steps—completely, absurdly impossible. So the way humans manage this is by having this very rich store of abstract, high-level actions. You don’t think, “First I can either move my left foot or my right foot, and then after that I can either…” You think, “I’ll go on Expedia and book a flight. When I land, I’ll take a taxi.” And that’s it. I don’t think about it anymore until I actually get off the plane at the airport and look for the sign that says “taxi”—then I get down into more detail. This is how we live our lives, basically. The future is spread out, with a lot of detail very close to us in time, but these big chunks where we’ve made commitments to very abstract actions, like, “get a Ph.D.,” “have children.”
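A crude way to put that in code: hold the plan as abstract chunks and expand a chunk into finer steps only when you actually reach it. The action names and refinement table below are invented for illustration:

```python
# Lazy hierarchical plan refinement: the distant future stays abstract;
# detail is filled in just before it is needed.
REFINEMENTS = {
    "attend_conference": ["book_flight_on_expedia", "fly", "take_taxi", "give_talk"],
    "take_taxi": ["find_taxi_sign", "board_taxi", "ride_to_venue"],
    # actions with no entry here are treated as primitive and executed directly
}

def execute(plan):
    while plan:
        action = plan.pop(0)
        if action in REFINEMENTS:
            plan = REFINEMENTS[action] + plan  # refine just-in-time
        else:
            print("doing:", action)            # stand-in for a motor command

execute(["attend_conference"])
```

Expanding "take_taxi" only when its turn comes is the code analogue of not thinking about the taxi until you get off the plane.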

What about differences in human values?

That’s an intrinsic problem. You could say machines should err on the side of doing nothing in areas where there’s a conflict of values. That might be difficult. I think we will have to build in these value functions. If you want to have a domestic robot in your house, it has to share a pretty good cross-section of human values; otherwise it’s going to do pretty stupid things, like put the cat in the oven for dinner because there’s no food in the fridge and the kids are hungry. Real life is full of these tradeoffs. If the machine makes these tradeoffs in ways that reveal that it just doesn’t get it—that it’s just missing some chunk of what’s obvious to humans—then you’re not going to want that thing in your house.

I don’t see any real way around the fact that there’s going to be, in some sense, a values industry. And I also think there’s a huge economic incentive to get it right. It only takes one or two things like a domestic robot putting the cat in the oven for dinner for people to lose confidence and not buy them.

You’ve argued that we need to be able to mathematically verify the behavior of AI under all possible circumstances. How would that work?

One of the difficulties people point to is that a system can arbitrarily produce a new version of itself that has different goals. That’s one of the scenarios that science fiction writers always talk about; somehow, the machine spontaneously gets this goal of defeating the human race. So the question is: Could you prove that your systems can’t ever, no matter how smart they are, overwrite their original goals as set by the humans?

It would be relatively easy to prove that the DQN system, as it’s written, could never change its goal of optimizing that score. Now, there is a hack that people talk about called “wire-heading” where you could actually go into the console of the Atari game and physically change the thing that produces the score on the screen. At the moment that’s not feasible for DQN, because its scope of action is entirely within the game itself; it doesn’t have a robot arm. But that’s a serious problem if the machine has a scope of action in the real world. So, could you prove that your system is designed in such a way that it could never change the mechanism by which the score is presented to it, even though it’s within its scope of action? That’s a more difficult proof.

Are there any advances in this direction that you think hold promise?

There’s an area emerging called “cyber-physical systems” about systems that couple computers to the real world. With a cyber-physical system, you’ve got a bunch of bits representing an air traffic control program, and then you’ve got some real airplanes, and what you care about is that no airplanes collide. You’re trying to prove a theorem about the combination of the bits and the physical world. What you would do is write a very conservative mathematical description of the physical world—airplanes can accelerate within such-and-such envelope—and your theorems would still be true in the real world as long as the real world is somewhere inside the envelope of behaviors.
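A toy version of that style of argument, with every number invented: describe the worst case the envelope allows, and if the safety property survives the worst case, it holds for every real trajectory inside the envelope:

```python
# Conservative envelope check (toy, one-dimensional): two aircraft on the
# same airway. We bound how fast they can possibly close on each other and
# verify the separation property against that worst case.

def min_separation_bound(gap0, close_rate_max, accel_max, horizon, dt=0.1):
    """Lower bound on separation over the horizon, assuming the pair
    closes as aggressively as the envelope permits."""
    lowest = gap0
    for k in range(int(horizon / dt) + 1):
        t = k * dt
        worst_gap = gap0 - close_rate_max * t - 0.5 * accel_max * t * t
        lowest = min(lowest, worst_gap)
    return lowest

# Illustrative envelope: 20 km apart, closing at up to 150 m/s, accelerating
# toward each other at up to 2 m/s^2, checked over a 60-second horizon.
bound = min_separation_bound(gap0=20_000, close_rate_max=150, accel_max=2, horizon=60)
print(f"worst-case separation over the horizon: {bound:,.0f} m")  # ~7,400 m
```

If the real aircraft stay inside the stated envelope, the conclusion holds no matter what they actually do; if the bound ever went negative, the check would fail and the design would need revising.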

Ref: This Artificial Intelligence Pioneer Has a Few Concerns – Wired

Google’s Plan to Eliminate Human Driving in 5 Years

There are three significant downsides to Google’s approach. First, the goal of delivering a car that only drives itself raises the difficulty bar. There’s no human backup, so the car had better be able to handle every situation it encounters. That’s what Google calls “the .001 percent of things that we need to be prepared for even if we’ve never seen them before in our real world driving.” And if dash cam videos teach us anything, it’s that our roads are crazy places. People jump onto highways. Cows fall out of trucks. Tsunamis strike and buildings explode.

The automakers have to deal with those same edge cases, and the human may not be of much help in a split-second situation. But the timeline is different: Automakers acknowledge this problem, but they’re moving slowly and carefully. Google plans to have everything figured out in just a few years, which makes the challenge that much harder to overcome.

[…]

The deadly crash of Asiana Airlines Flight 214 at San Francisco International Airport in July 2013 highlights a lesson from the aviation industry. The airport’s glideslope indicator, which helps line up the plane for landing, wasn’t functioning, so the pilots were told to use visual approaches. The crew was experienced and skilled, but rarely flew the Boeing 777 manually, Bloomberg reported. The plane came in far too low and slow, hitting the seawall that separates the airport from the bay. The pilots “mismanaged the airplane’s descent,” the National Transportation Safety Board found.

Asiana, in turn, blamed badly designed software. “There were inconsistencies in the aircraft’s automation logic that led to the unexpected disabling of airspeed protection without adequate warning to the flight crew,” it said in a filing to the NTSB. “The low airspeed alerting system did not provide adequate time for recovery; and air traffic control instructions and procedures led to an excessive pilot workload during the final approach.”

Ref: Google’s Plan to Eliminate Human Driving in 5 Years – Wired

Variable World: Bay Model Tour & Salon

What’s also tricky here is the scale: finding a way to accurately grasp the forces at work within the model, and outside it of course. It’s easy to prototype and model something at 1:1, but this one is 1:1000, at least in the horizontal plane. So when you think about granularity within computer models that simulate climate, especially in the Bay Area, there are micro-climates everywhere that are often simplistically collapsed or ignored. How do you create a realistic model that represents, and can project, what’s actually happening? It’s challenging to integrate that into the computer model. The digital model only has a certain resolution; you can throw situations at it and sometimes get the expected results that verify your projections, and maybe the knock-on effects seem right, but how it works in nature often diverges surprisingly from the model. The state-of-the-art climate model is really just the one that best conforms to a limited set of validation data. And based on these models we make determinations of risk for insurance purposes; they influence policy making and ultimately come back to us as decision-making tools. I am fascinated by the scale of decision making we base on models, and by the latent potential for contingency.

It also seems that variability is an important part in how authority gets built up in the model. I’m remembering the moment when we were standing there and the guide said, “This is a perfect world, it doesn’t change.” She emphasized that a few times.

Ref: Variable World: Bay Model Tour & Salon – AVANT

Feds Say That Banned Researcher Commandeered a Plane

Chris Roberts, a security researcher with One World Labs, told the FBI agent during an interview in February that he had hacked the in-flight entertainment system, or IFE, on an airplane and overwrote code on the plane’s Thrust Management Computer while aboard the flight. He was able to issue a climb command and make the plane briefly change course, the document states.

“He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights,” FBI Special Agent Mark Hurley wrote in his warrant application (.pdf). “He also stated that he used Vortex software after comprising/exploiting or ‘hacking’ the airplane’s networks. He used the software to monitor traffic from the cockpit system.”

[…]

He obtained physical access to the networks through the Seat Electronic Box, or SEB. These are installed two to a row, on each side of the aisle under passenger seats, on certain planes. After removing the cover to the SEB by “wiggling and Squeezing the box,” Roberts told agents he attached a Cat6 ethernet cable, with a modified connector, to the box and to his laptop and then used default IDs and passwords to gain access to the in-flight entertainment system. Once on that network, he was able to gain access to other systems on the planes.

Ref: Feds Say That Banned Researcher Commandeered a Plane – Wired

The CyberSyn Revolution

The state plays an important role in shaping the relationship between labor and technology, and can push for the design of systems that benefit ordinary people. It can also have the opposite effect. Indeed, the history of computing in the US context has been tightly linked to government command, control, and automation efforts.

But it does not have to be this way. Consider how the Allende government approached the technology-labor question in the design of Project Cybersyn. Allende made raising employment central both to his economic plan and his overall strategy to help Chileans. His government pushed for new forms of worker participation on the shop floor and the integration of worker knowledge in economic decision-making.

This political environment allowed Beer, the British cybernetician assisting Chile, to view computer technology as a way to empower workers. In 1972, he published a report for the Chilean government that proposed giving Chilean workers, not managers or government technocrats, control of Project Cybersyn. More radically, Beer envisioned a way for Chile’s workers to participate in Cybersyn’s design.

He recommended that the government allow workers — not engineers — to build the models of the state-controlled factories because they were best qualified to understand operations on the shop floor. Workers would thus help design the system that they would then run and use. Allowing workers to use both their heads and their hands would limit how alienated they felt from their labor.

[…]

But Beer showed an ability to envision how computerization in a factory setting might work toward an end other than speed-ups and deskilling — the results of capitalist development that labor scholars such as Harry Braverman witnessed in the United States, where the government did not have the same commitment to actively limiting unemployment or encouraging worker participation.

[…]

We need to be thinking in terms of systems rather than technological quick fixes. Discussions about smart cities, for example, regularly focus on better network infrastructures and the use of information and communication technologies such as integrated sensors, mobile phone apps, and online services. Often, the underlying assumption is that such interventions will automatically improve the quality of urban life by making it easier for residents to access government services and provide city government with data to improve city maintenance.

But this technological determinism doesn’t offer a holistic understanding of how such technologies might negatively impact critical aspects of city life. For example, the sociologist Robert Hollands argues that tech-centered smart-city initiatives might create an influx of technologically literate workers and exacerbate the displacement of other workers. They also might divert city resources to the building of computer infrastructures and away from other important areas of city life.

[…]

We must resist the kind of apolitical “innovation determinism” that sees the creation of the next app, online service, or networked device as the best way to move society forward. Instead, we should push ourselves to think creatively of ways to change the structure of our organizations, political processes, and societies for the better and about how new technologies might contribute to such efforts.

 

Ref: The Cybersyn Revolution – Jacobin

The World’s First Self-Driving Semi-Truck Hits the Road

The truck in question is the Freightliner Inspiration, a teched-up version of the Daimler 18-wheeler sold around the world. And according to Daimler, which owns Mercedes-Benz, it will make long-haul road transportation safer, cheaper, and better for the planet.

[…]

Humans Don’t Want These Jobs

Another point in favor of giving robots control is the serious and worsening shortage of humans willing to take the wheel. The lack of qualified drivers has created a “capacity crisis,” according to an October 2014 report by the American Transportation Research Institute. The American Trucking Associations predicts the industry could be short 240,000 drivers by 2022. (There are roughly three million full-time drivers in the US.)

[…]

Killing the Human Driver

The way to handle that growth isn’t to convince more people to become long-haul truckers. It’s to reduce, and eventually eliminate, the role of the human. Let the trucks drive themselves, and you can improve safety, meet increased demand, and save time and fuel.

The safety benefits of autonomous features are obvious. The machine doesn’t get tired, stressed, angry, or distracted. And because trucks spend the vast majority of their time on the highway, the tech doesn’t have to clear the toughest hurdle: handling complex urban environments with pedestrians, cyclists, and the like. If you can prove the vehicles are safer, you could make them bigger, and thus more efficient at transporting all the crap we buy on Amazon.

[…]

The end game is eliminating the need for human drivers, at least for highway driving. (An autonomous truck could exit the interstate near the end of its journey, park in a designated lot, and wait for a human to come drive it on surface streets to its destination.)

// Interesting comments

The driver shortage is partly due to pay and benefits. If you want a driver to be away from his/her family for weeks at a time you have to pay them enough to make it worth the loss of family time. It’s also partly due to unrealistic expectations for delivery times by dispatchers, which add a lot of stress to a job that already has enough of it. So yeah, I can see where companies would love a driverless semi, because it would eliminate having to consider the human/personal side of things. I haul fuel locally, so there’s not much chance of this technology replacing me, but I hate to see more jobs lost.

There are 3.5 million truck drivers in the US alone, not to mention countless other transportation-related jobs. Those are mostly average- to decent-paying jobs. Think for a second about the far-reaching consequences of eliminating these jobs and the secondary jobs that depend on them. Further, we are looking at the elimination of most any human-performed job in the next 25 years. Do you truly feel that is good progress? Is it humane and progressive to live in a world where less than 0.1% of people enslave the rest?

Ferguson or Baltimore is not a fluke… it’s not just about racial tension. It is the fabric of our society starting to tear. Where people feel powerless and disenfranchised, the only option left to be heard is often violence. What’s happening there is just the beginning of what comes next.

One thing that has always bothered me is they always say “A truck can’t stop as fast as a car can”, and yet we accept that excuse for a ratio of tires, weight, and lives lost due to inadequate braking. Everything has improved, but we have stopped making progress in trying to stop a loaded truck faster.

Imagine telling the public the truth. It’s too expensive to add tires to cut braking distance, or haul lighter loads (or use trains).
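For what it’s worth, the arithmetic behind that complaint is just d = v² / 2a; the deceleration figures below are rough illustrative values, not measurements:

```python
# Stopping distance from the constant-deceleration formula d = v^2 / (2a).
def stopping_distance_m(speed_kmh, decel_ms2):
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * v / (2 * decel_ms2)

# Rough illustrative decelerations: ~8 m/s^2 for a car on dry pavement,
# ~4.5 m/s^2 for a loaded semi (brake fade, tire load, trailer dynamics).
for label, decel in (("car", 8.0), ("loaded semi", 4.5)):
    print(f"{label}: {stopping_distance_m(100, decel):.0f} m from 100 km/h")
# car: ~48 m, loaded semi: ~86 m -- nearly double the distance
```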

 

Ref: The World’s First Self-Driving Semi-Truck Hits the Road – Wired

NSA’s Skynet

As The Intercept reports today, the NSA does have a program called Skynet. But unlike the autonomous, self-aware computerized defense system in Terminator that goes rogue and launches a nuclear attack that destroys most of humanity, this one is a surveillance program that uses phone metadata to track the location and call activities of suspected terrorists. A journalist for Al Jazeera reportedly became one of its targets after he was placed on a terrorist watch list.

[…]

Ahmad Muaffaq Zaidan, bureau chief for Al Jazeera’s Islamabad office, got tracked by Skynet after he was identified by US intelligence as a possible Al Qaeda member and assigned a watch list number. A Syrian national, Zaidan has scored a number of exclusive interviews with senior Al Qaeda leaders, including Osama bin Laden himself.

Skynet uses phone location and call metadata from bulk phone call records to detect suspicious patterns in the physical movements of suspects and their communication habits, according to a 2012 government presentation The Intercept obtained from Edward Snowden.

The presentation indicates that Skynet looks for terrorist connections based on questions such as “who has traveled from Peshawar to Faisalabad or Lahore (and back) in the past month? Who does the traveler call when he arrives?” It also looks for suspicious behaviors such as someone who engages in “excessive SIM or handset swapping” or receives “incoming calls only.”
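A crude sketch of what selectors like those might look like in code. The record layout and thresholds are entirely invented; only the behavioural features (SIM/handset swapping, incoming-calls-only, the travel pattern) come from the reported slides:

```python
from collections import Counter

# Toy feature extraction over call-detail records. Each record is a dict:
# {"imsi": SIM id, "imei": handset id, "direction": "in"/"out", "city": str}
def suspicion_features(records, swap_threshold=3):
    sims = {r["imsi"] for r in records}
    handsets = {r["imei"] for r in records}
    directions = Counter(r["direction"] for r in records)
    cities = [r["city"] for r in records]
    return {
        "excessive_sim_or_handset_swapping":
            len(sims) > swap_threshold or len(handsets) > swap_threshold,
        "incoming_calls_only":
            directions["in"] > 0 and directions["out"] == 0,
        "peshawar_to_lahore_or_faisalabad":
            "Peshawar" in cities and ("Lahore" in cities or "Faisalabad" in cities),
    }

records = [
    {"imsi": "sim-1", "imei": "phone-1", "direction": "in", "city": "Peshawar"},
    {"imsi": "sim-2", "imei": "phone-1", "direction": "in", "city": "Lahore"},
]
print(suspicion_features(records))
```

Note that even in the toy these are features of metadata shape, not of intent.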

The goal is to identify people who move around in a pattern similar to Al Qaeda couriers who are used to pass communication and intelligence between the group’s senior leaders. The program tracked Zaidan because his movements and interactions with Al Qaeda and Taliban leaders matched a suspicious pattern—which is, it turns out, very similar to the pattern of journalists meeting with sources.

Ref: So, the NSA Has an Actual Skynet Program – Wired