The Cathedral of Computation

Here’s an exercise: The next time you hear someone talking about algorithms, replace the term with “God” and ask yourself if the meaning changes. Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion.

[…]Each generation, we reset a belief that we’ve reached the end of this chain of metaphors, even though history always proves us wrong precisely because there’s always another technology or trend offering a fresh metaphor. Indeed, an exceptionalism that favors the present is one of the ways that science has become theology.

[…]

The same could be said for data, the material algorithms operate upon. Data has become just as theologized as algorithms, especially “big data,” whose name is meant to elevate information to the level of celestial infinity. Today, conventional wisdom would suggest that mystical, ubiquitous sensors are collecting data by the terabyteful without our knowledge or intervention. Even if this is true to an extent, examples like Netflix’s altgenres show that data is created, not simply aggregated, and often by means of laborious, manual processes rather than anonymous vacuum-devices.

If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work.

Unfortunately, most computing systems don’t want to admit that they are burlesques. They want to be innovators, disruptors, world-changers, and such zeal requires sectarian blindness. The exception is games, which willingly admit that they are caricatures—and which suffer the consequences of this admission in the court of public opinion. Games know that they are faking it, which makes them less susceptible to theologization. SimCity isn’t an urban planning tool, it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like.

Ref: The Cathedral of Computation – TheAtlantic

Insurgents Hack U.S. Drones

Militants in Iraq have used $26 off-the-shelf software to intercept live video feeds from U.S. Predator drones, potentially providing them with information they need to evade or monitor U.S. military operations.

Senior defense and intelligence officials said Iranian-backed insurgents intercepted the video feeds by taking advantage of an unprotected communications link in some of the remotely flown planes’ systems. Shiite fighters in Iraq used software programs such as SkyGrabber — available for as little as $25.95 on the Internet — to regularly capture drone video feeds, according to a person familiar with reports on the matter.
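
The weakness described here is not clever cryptanalysis but the absence of encryption on the link: reading an unprotected broadcast takes a receiver, not an exploit. As a rough sketch only, assuming a hypothetical unencrypted UDP video stream on a local network (a far simpler setup than the satellite downlinks SkyGrabber targets), a few lines of Python are enough to record such a feed:

# Illustration only: record a hypothetical *unencrypted* UDP video stream.
# The point is that an unprotected link needs no exploit to read, only a
# receiver tuned to the right channel. Address and port are made up.
import socket

LISTEN_ADDR = ("0.0.0.0", 5004)  # hypothetical port carrying an MPEG-TS stream

def record_stream(outfile: str, max_packets: int = 10_000) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    with open(outfile, "wb") as f:
        for _ in range(max_packets):
            payload, _src = sock.recvfrom(65535)
            f.write(payload)  # no decryption step needed; that is the flaw

if __name__ == "__main__":
    record_stream("captured_feed.ts")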

[…]

The drone intercepts mark the emergence of a shadow cyber war within the U.S.-led conflicts overseas. They also point to a potentially serious vulnerability in Washington’s growing network of unmanned drones, which have become the American weapon of choice in both Afghanistan and Pakistan.

[…]

Last December, U.S. military personnel in Iraq discovered copies of Predator drone feeds on a laptop belonging to a Shiite militant, according to a person familiar with reports on the matter. “There was evidence this was not a one-time deal,” this person said. The U.S. accuses Iran of providing weapons, money and training to Shiite fighters in Iraq, a charge that Tehran has long denied.

The militants use programs such as SkyGrabber, from Russian company SkySoftware. Andrew Solonikov, one of the software’s developers, said he was unaware that his software could be used to intercept drone feeds. “It was developed to intercept music, photos, video, programs and other content that other users download from the Internet — no military data or other commercial data, only free legal content,” he said by email from Russia.

Ref: Insurgents Hack U.S. Drones – WallStreetJournal

Researchers Plan to Demonstrate a Wireless Car Hack This Summer

At the Black Hat and Defcon security conferences this August, security researchers Charlie Miller and Chris Valasek have announced they plan to wirelessly hack the digital network of a car or truck. That network, known as the CAN bus, is the connected system of computers that influences everything from the vehicle’s horn and seat belts to its steering and brakes. And their upcoming public demonstrations may be the most definitive proof yet of cars’ vulnerability to remote attacks, the result of more than two years of work since Miller and Valasek first received a DARPA grant to investigate cars’ security in 2013.

“We will show the reality of car hacking by demonstrating exactly how a remote attack works against an unaltered, factory vehicle,” the hackers write in an abstract of their talk that appeared on the Black Hat website last week. “Starting with remote exploitation, we will show how to pivot through different pieces of the vehicle’s hardware in order to be able to send messages on the CAN bus to critical electronic control units. We will conclude by showing several CAN messages that affect physical systems of the vehicle.”
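
The attack the abstract describes ends with frames on the CAN bus, and classic CAN has no sender authentication: any node that can transmit can address any electronic control unit. Below is a minimal sketch of what placing a frame on a bus looks like, using the python-can library against a Linux virtual interface (vcan0); the arbitration ID and payload are placeholders, not the messages Miller and Valasek used.

# Illustration only: sending one frame on a (virtual) CAN bus with python-can.
# A CAN frame carries an arbitration ID and up to 8 data bytes; the bus does
# not verify who sent it, which is why post-compromise pivoting matters.
# Setup on Linux: modprobe vcan; ip link add dev vcan0 type vcan; ip link set up vcan0
import can

def send_frame() -> None:
    bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
    msg = can.Message(
        arbitration_id=0x123,           # placeholder ID, not a real ECU target
        data=[0x01, 0x02, 0x03, 0x04],  # placeholder payload
        is_extended_id=False,
    )
    bus.send(msg)
    bus.shutdown()

if __name__ == "__main__":
    send_frame()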

[…]

Some critics, including Toyota and Ford, argued at the time that a wired-in attack wasn’t exactly a full-blown hack. But Miller and Valasek have been working since then to prove that the same tricks can be pulled off wirelessly. In a talk at Black Hat last year, they published an analysis of 24 automobiles, rating which presented the most potential vulnerabilities to a hacker based on wireless attack points, network architecture, and computerized control of key physical features. In that analysis, the Jeep Cherokee, Infiniti Q50 and Cadillac Escalade were rated as the most hackable vehicles they tested. The overall digital security of a car “depends on the architecture,” Valasek, director of vehicle security research at security firm IOActive, told WIRED last year. “If you hack the radio, can you send messages to the brakes or the steering? And if you can, what can you do with them?”

Ref: Researchers Plan to Demonstrate a Wireless Car Hack This Summer – Wired

Americans Want Self-Driving Cars for the Cheaper Insurance

Of the 1,500 US drivers the Boston Consulting Group surveyed in September, 55 percent said they “likely” or “very likely” would buy a semi-autonomous car (one capable of handling some, but not all, highway and urban traffic). What’s more, 44 percent said they would, in 10 years, buy a fully autonomous vehicle.

What’s most surprising about the survey isn’t that so many people are interested in this technology, but why they’re interested.

The leading reason people are considering semi-autonomous vehicles isn’t greater safety, improved fuel efficiency, or increased productivity—the upsides most frequently associated with the technology. Such things were a factor, but the biggest appeal is lower insurance costs. Safety was the leading reason people were interested in a fully autonomous ride, with cheaper insurance costs in second place.

[…]

That’s why “a vast number of insurance companies” are exploring discounts for those semiautonomous features, Mosquet says. For example, drivers who purchase a new Volvo with the pedestrian protection tech qualify for a lower premium. “The cost to [the insurer] of pedestrian accidents is actually significant, and they’re going to do everything they can to reduce this type of incident.” That’s already started in Europe and is spreading to the US.

Ref: Americans Want Self-Driving Cars for the Cheaper Insurance – Wired

Geneva Meeting – Killer Robots

Many of the 120 states that are part of the Convention on Conventional Weapons (CCW) are participating in the 2015 CCW meeting of experts, which is chaired by Germany’s Ambassador Michael Biontino who has enlisted “friends of the chair” from Albania, Chile, Hungary, Finland, Sierra Leone, South Korea, Sri Lanka, and Switzerland to chair thematic sessions on a range of technical, legal, and overarching issues including ethics and human rights.

[…]

The session proved to be one of the most engaging thus far at the 2015 experts meeting. The UK made a detailed intervention that included the statement that it “does not believe there would be any utility in a fully autonomous weapon system.” France said it has no plans for autonomous weapons that deploy fire; it relies entirely on humans for fire decisions.

Three states that have explicitly endorsed the call for a preemptive ban on lethal autonomous weapons systems have reiterated that goal at this meeting (Cuba, Ecuador, and Pakistan), while Sri Lanka said a prohibition must be considered. There have been numerous references to the CCW’s 1995 protocol banning blinding lasers, which preemptively banned the weapon before it was ever fielded or used.

Ref: Second multilateral meeting opens – CampaignsToStopKillerRobots

Can We Trust Robot Cars to Make Hard Choices?

However, as humans, we also do something else when faced with hard decisions: In particularly ambiguous situations, when no choice is obviously best, we choose and justify our decision with a reason. Most of the time we are not aware of this, but it comes out when we have to make particularly hard decisions.

[…]

Critically, she says, when we make our decision, we get to justify it with a reason.

Whether we prefer beige or fluorescent colors, the countryside or a certain set of job activities—these are not objectively measurable. There is no ranking system anywhere that says beige is better than pink and that living in the countryside is better than a certain job. If there were, all humans would be making the same decisions. Instead, we each invent reasons to make our decisions (and when societies do this together, we create our laws, social norms, and ethical systems).

But a machine could never do this…right? You’d be surprised. Google recently announced, for example, that it had built an AI that can learn and master video games. The program isn’t given commands but instead plays games again and again, learning from experience. Some have speculated that such a development would be useful for a robot car.

How might this work?

Instead of making a random decision, outsourcing its decision, or reverting to pre-programmed values, a robot car could scour the cloud, processing immense amounts of data and patterns based on local laws, past legal rulings, the values of the people and society around it, and the consequences it observes from various other similar decision-making processes over time. In short, robot cars, like humans, would use experience to invent their own reasons.
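
How a car would turn accumulated experience into a preference is left open here, but the mechanism being gestured at (trial, feedback, updated values) is easy to show in miniature. The following is a toy, single-state value-learning sketch with made-up actions and rewards; it illustrates learning from repeated experience, not how any real vehicle decides.

# Illustration only: a toy agent learns which of two made-up actions it
# prefers purely from repeated, noisy feedback, with no hand-written rule.
import random

ACTIONS = ["swerve", "brake"]                # hypothetical choices
TRUE_MEAN = {"swerve": -1.0, "brake": 0.5}   # made-up average outcomes
ALPHA = 0.1                                  # learning rate

def train(episodes: int = 5000) -> dict:
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        action = random.choice(ACTIONS)                       # try something
        reward = TRUE_MEAN[action] + random.gauss(0, 0.5)     # observe an outcome
        values[action] += ALPHA * (reward - values[action])   # update from experience
    return values

if __name__ == "__main__":
    values = train()
    print(values)                        # learned preferences
    print(max(values, key=values.get))   # the action experience favors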

What is fascinating about Chang’s talk is that she says when humans engage in such a reckoning process—of inventing and choosing one’s reasons during hard times—we view it as one of the highest forms of human development.

Asking others to make decisions for us, or leaving life to chance, is a form of drifting. But inventing and choosing our own reasons during hard times is referred to as building one’s character, taking a stand, taking responsibility for one’s own actions, defining who one is, and becoming the author of one’s own life.

Ref: Can We Trust Robot Cars to Make Hard Choices? – SingularityHub

Google and Elon Musk to Decide What Is Good for Humanity

THE RECENTLY PUBLISHED Future of Life Institute (FLI) letter “Research Priorities for Robust and Beneficial Artificial Intelligence,” signed by hundreds of AI researchers in addition to Elon Musk and Stephen Hawking, many representing government regulators and some sitting on committees with names like “Presidential Panel on Long Term AI future,” offers a program professing to protect mankind from the threat of “super-intelligent AIs.”

[…]

Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem gets much more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):

– Statements like: “There is a broad consensus that AI research is progressing steadily,” even “progressing dramatically” (Google Brain signatories on FLI web site), are just not true. In the last 50 years there has been very little AI progress (more stasis-like than “steady”) and not a single major AI-based breakthrough commercial product, unless you count iPhone’s infamous Siri. In short, despite the overwhelming media push, AI simply does not work.

– “AI systems must do what we want them to do” begs the question of who “we” is. There are 92 references included in this letter, all of them from CS, AI, and political scientists; there are many references to an approaching, civilization-threatening “singularity” and several references to possibilities for “mind uploading,” but not a single reference from a biologist or a neural scientist. To call such an approach to the study of intellect “interdisciplinary” is just not credible.

– “Identify research directions that can maximize societal benefits” is outright chilling. Again, who decides whether research is “socially desirable?”

– “AI super-intelligence will not act in accordance with human wishes and will threaten humanity” is just a cover justifying an attempted power grab by the AI group over competing approaches to the study of intellect.

[…]

AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a von Neumann CPU. As Google Brain AI researcher and FLI letter signatory Ilya Sutskever recently told me, “[The] brain absolutely is just a CPU and further study of brain would be a waste of my time.” This is an almost word-for-word repetition of a famous statement Noam Chomsky made decades ago “predicting” the existence of a language “generator” in the brain.

FLI letter signatories say: Do not worry, “we” will allow “good” AI and “identify research directions” in order to maximize societal benefits and eradicate diseases and poverty. I believe that it would be precisely the newly emerging neural science groups that would suffer if AI is allowed to regulate research direction in this field. Why should “evidence” like this allow AI scientists to control what biologists and neural scientists can and cannot do?

Ref: Google and Elon Musk to Decide What Is Good for Humanity – Wired

What Crazy Dash Cam Videos Teach Us About Self-Driving Cars

THE FIRST SELF-DRIVING CARS are expected to hit showrooms within five years. Their autonomous capabilities will be largely limited to highways, where there aren’t things like pedestrians and cyclists to deal with, and you won’t fully cede control. As long as the road is clear, the car’s in charge. But when all that computing power senses trouble, like construction or rough weather, it will have you take the wheel.

The problem is, that switch will not—because it cannot—happen immediately.

The primary benefit of autonomous technology is to increase safety and decrease congestion. A secondary upside to letting the car do the driving is that you can focus on crafting pithy tweets, texting, or doing anything else you’d rather be doing. And while any rules the feds concoct likely will prohibit catching Zs behind the wheel, there’s no arguing that someone won’t try it.

Audi’s testing has shown it takes an average of 3 to 7 seconds—and as long as 10—for a driver to snap to attention and take control, even when prompted by flashing lights and verbal warnings. This means engineers must ensure an autonomous Audi can handle any situation for at least that long. This is not insignificant, because a lot can happen in 10 seconds, especially when a vehicle is moving more than 100 feet per second.
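
The arithmetic behind that caution is worth spelling out; a quick back-of-the-envelope calculation using the article’s 100-feet-per-second figure (roughly 68 mph):

# Distance covered while a driver re-engages, at the article's stated speed.
speed_ft_per_s = 100           # ~68 mph
for handover_s in (3, 7, 10):  # Audi's observed range, worst case 10 seconds
    distance_ft = speed_ft_per_s * handover_s
    print(f"{handover_s:2d} s -> {distance_ft} ft (~{distance_ft * 0.3048:.0f} m)")

At the 10-second worst case that is roughly 1,000 feet, about three football fields, over which the car must handle whatever the road presents on its own.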

[…]

The point is, the world’s highways are a crazy, unpredictable place where anything can happen. And they don’t even have the pedestrians and cyclists and buses and taxis and delivery vans and countless other things that make autonomous driving in an urban setting so tricky. So how do you prepare for every situation imaginable?

Ref: What Crazy Dash Cam Videos Teach Us About Self-Driving Cars – Wired

The Ethical Dangers of AI

The AI community has begun to take the downside risk of AI very seriously. I attended a Future of AI workshop in January of 2015 in Puerto Rico sponsored by the Future of Life Institute. The ethical consequences of AI were front and center. There are four key thrusts the AI community is focusing research on to get better outcomes with future AIs:

Verification – Research into methods of guaranteeing that the systems we build actually meet the specifications we set.

Validation – Research into ensuring that the specifications, even if met, do not result in unwanted behaviors and consequences.

Security – Research on building systems that are increasingly difficult to tamper with – internally or externally.

Control – Research to ensure that we can interrupt AI systems (even with other AIs) if and when something goes wrong, and get them back on track (see the sketch below).
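
As a toy illustration of the “Control” item only (it says nothing about how real systems implement interruption), an agent loop can be written so that a supervisor holds a stop signal which the loop re-checks on every step:

# Illustration only: an interruptible agent loop. The guarantee here is simply
# that the loop consults an externally held stop flag before each step of work.
import threading
import time

stop_flag = threading.Event()  # held by the supervisor, not by the agent

def agent_loop() -> None:
    step = 0
    while not stop_flag.is_set():  # the interrupt point, checked every step
        step += 1                  # placeholder for the agent's real work
        time.sleep(0.1)
    print(f"interrupted after {step} steps")

if __name__ == "__main__":
    worker = threading.Thread(target=agent_loop)
    worker.start()
    time.sleep(1.0)   # supervisor lets the agent run briefly...
    stop_flag.set()   # ...then interrupts it
    worker.join()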

These aren’t just philosophical or ethical considerations; they are system design issues. I think we’ll see a greater focus on these kinds of issues not just in AI, but in software generally as we develop systems with more power and complexity.

Will AIs ever be completely risk free? I don’t think so. Humans are not risk free! There is a predator/prey aspect to this in terms of malicious groups who choose to develop these technologies in harmful ways. However, the vast majority of people, including researchers and developers in AI, are not malicious. Most of the world’s intellect and energy will be spent on building society up, not tearing it down. In spite of this, we need to do a better job anticipating the potential consequences of our technologies, and being proactive about creating the outcomes that improve human health and the environment. That is a particular challenge with AI technology that can improve itself. Meeting this challenge will make it much more likely that we can succeed in reaching for the stars.

Ref: Interview: Neil Jacobstein Discusses Future of Jobs, Universal Basic Income and the Ethical Dangers of AI – SingularityHub