
The Cybersyn Revolution

The state plays an important role in shaping the relationship between labor and technology, and can push for the design of systems that benefit ordinary people. It can also have the opposite effect. Indeed, the history of computing in the US context has been tightly linked to government command, control, and automation efforts.

But it does not have to be this way. Consider how the Allende government approached the technology-labor question in the design of Project Cybersyn. Allende made raising employment central both to his economic plan and his overall strategy to help Chileans. His government pushed for new forms of worker participation on the shop floor and the integration of worker knowledge in economic decision-making.

This political environment allowed Stafford Beer, the British cybernetician assisting Chile, to view computer technology as a way to empower workers. In 1972, he published a report for the Chilean government that proposed giving Chilean workers, not managers or government technocrats, control of Project Cybersyn. More radically, Beer envisioned a way for Chile’s workers to participate in Cybersyn’s design.

He recommended that the government allow workers — not engineers — to build the models of the state-controlled factories because they were best qualified to understand operations on the shop floor. Workers would thus help design the system that they would then run and use. Allowing workers to use both their heads and their hands would limit how alienated they felt from their labor.

[…]

But Beer showed an ability to envision how computerization in a factory setting might work toward an end other than speed-ups and deskilling — the results of capitalist development that labor scholars such as Harry Braverman witnessed in the United States, where the government did not have the same commitment to actively limiting unemployment or encouraging worker participation.

[…]

We need to be thinking in terms of systems rather than technological quick fixes. Discussions about smart cities, for example, regularly focus on better network infrastructures and the use of information and communication technologies such as integrated sensors, mobile phone apps, and online services. Often, the underlying assumption is that such interventions will automatically improve the quality of urban life by making it easier for residents to access government services and provide city government with data to improve city maintenance.

But this technological determinism doesn’t offer a holistic understanding of how such technologies might negatively impact critical aspects of city life. For example, the sociologist Robert Hollands argues that tech-centered smart-city initiatives might create an influx of technologically literate workers and exacerbate the displacement of other workers. They also might divert city resources to the building of computer infrastructures and away from other important areas of city life.

[…]

We must resist the kind of apolitical “innovation determinism” that sees the creation of the next app, online service, or networked device as the best way to move society forward. Instead, we should push ourselves to think creatively about ways to change the structure of our organizations, political processes, and societies for the better, and about how new technologies might contribute to such efforts.


Ref: The Cybersyn Revolution – Jacobin

NSA’s Skynet

As The Intercept reports today, the NSA does have a program called Skynet. But unlike the autonomous, self-aware computerized defense system in Terminator that goes rogue and launches a nuclear attack that destroys most of humanity, this one is a surveillance program that uses phone metadata to track the location and call activities of suspected terrorists. A journalist for Al Jazeera reportedly became one of its targets after he was placed on a terrorist watch list.

[…]

Ahmad Muaffaq Zaidan, bureau chief for Al Jazeera’s Islamabad office, got tracked by Skynet after he was identified by US intelligence as a possible Al Qaeda member and assigned a watch list number. A Syrian national, Zaidan has scored a number of exclusive interviews with senior Al Qaeda leaders, including Osama bin Laden himself.

Skynet uses phone location and call metadata from bulk phone call records to detect suspicious patterns in the physical movements of suspects and their communication habits, according to a 2012 government presentation The Intercept obtained from Edward Snowden.

The presentation indicates that Skynet looks for terrorist connections based on questions such as “who has traveled from Peshawar to Faisalabad or Lahore (and back) in the past month? Who does the traveler call when he arrives?” It also looks for suspicious behaviors such as someone who engages in “excessive SIM or handset swapping” or receives “incoming calls only.”

The goal is to identify people who move around in a pattern similar to Al Qaeda couriers who are used to pass communication and intelligence between the group’s senior leaders. The program tracked Zaidan because his movements and interactions with Al Qaeda and Taliban leaders matched a suspicious pattern—which is, it turns out, very similar to the pattern of journalists meeting with sources.
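To make the mechanics concrete, here is a minimal sketch of the kind of rule-based scoring the questions above suggest: flagging travel between named cities, SIM or handset churn, and incoming-only call patterns. All field names, thresholds, and scores below are invented for illustration; the slides describe analytics over far richer metadata than this toy rule set.

```python
# Hypothetical sketch of courier-pattern scoring over call metadata.
# Field names, thresholds, and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class CallRecord:
    caller: str
    callee: str
    city: str        # cell-tower location when the call was made
    sim_id: str
    handset_id: str
    incoming: bool

def suspicion_score(records: list[CallRecord]) -> int:
    """Score one subscriber's monthly records against courier-like patterns."""
    score = 0
    cities = {r.city for r in records}
    # "Who has traveled from Peshawar to Faisalabad or Lahore in the past month?"
    if "Peshawar" in cities and cities & {"Faisalabad", "Lahore"}:
        score += 2
    # "Excessive SIM or handset swapping."
    if len({r.sim_id for r in records}) > 3 or len({r.handset_id for r in records}) > 3:
        score += 2
    # "Incoming calls only."
    if records and all(r.incoming for r in records):
        score += 1
    return score
```

As the Zaidan case shows, the weakness of such pattern matching is that legitimate behavior, like a journalist shuttling between sources, can score exactly like the behavior being hunted.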

Ref: So, the NSA Has an Actual Skynet Program – Wired

Insurgents Hack U.S. Drones

Militants in Iraq have used $26 off-the-shelf software to intercept live video feeds from U.S. Predator drones, potentially providing them with information they need to evade or monitor U.S. military operations.

Senior defense and intelligence officials said Iranian-backed insurgents intercepted the video feeds by taking advantage of an unprotected communications link in some of the remotely flown planes’ systems. Shiite fighters in Iraq used software programs such as SkyGrabber — available for as little as $25.95 on the Internet — to regularly capture drone video feeds, according to a person familiar with reports on the matter.
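The underlying weakness is simply that the video downlink was broadcast without encryption, so any receiver tuned to it could record the stream. As a minimal sketch of that idea, the following listens for an unencrypted UDP video stream and writes whatever arrives to disk; the port is hypothetical, and SkyGrabber’s actual capture path (satellite TV tuner cards) differs in detail.

```python
# Minimal sketch of why an unencrypted broadcast is interceptable: any host
# that can receive the packets can store the stream, no key required.
# The port is hypothetical; SkyGrabber captures from satellite tuner cards
# rather than plain UDP, but the principle is the same.
import socket

UDP_PORT = 5004  # hypothetical port carrying the video stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", UDP_PORT))  # accept packets addressed to this port

with open("captured_feed.ts", "wb") as out:
    while True:
        packet, _addr = sock.recvfrom(65535)
        out.write(packet)  # readable as-is: the link is not encrypted
```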

[…]

The drone intercepts mark the emergence of a shadow cyber war within the U.S.-led conflicts overseas. They also point to a potentially serious vulnerability in Washington’s growing network of unmanned drones, which have become the American weapon of choice in both Afghanistan and Pakistan.

[…]

Last December, U.S. military personnel in Iraq discovered copies of Predator drone feeds on a laptop belonging to a Shiite militant, according to a person familiar with reports on the matter. “There was evidence this was not a one-time deal,” this person said. The U.S. accuses Iran of providing weapons, money and training to Shiite fighters in Iraq, a charge that Tehran has long denied.

The militants use programs such as SkyGrabber, from the Russian company SkySoftware. Andrew Solonikov, one of the software’s developers, said he was unaware that his software could be used to intercept drone feeds. “It was developed to intercept music, photos, video, programs and other content that other users download from the Internet — no military data or other commercial data, only free legal content,” he said by email from Russia.

Ref: Insurgents Hack U.S. Drones – Wall Street Journal

Researchers Plan to Demonstrate a Wireless Car Hack This Summer

Security researchers Charlie Miller and Chris Valasek have announced that at the Black Hat and Defcon security conferences this August, they plan to wirelessly hack the digital network of a car or truck. That network, known as the CAN bus, is the connected system of computers that influences everything from the vehicle’s horn and seat belts to its steering and brakes. Their upcoming public demonstrations may be the most definitive proof yet of cars’ vulnerability to remote attacks, the result of more than two years of work since Miller and Valasek first received a DARPA grant to investigate cars’ security in 2013.

“We will show the reality of car hacking by demonstrating exactly how a remote attack works against an unaltered, factory vehicle,” the hackers write in an abstract of their talk that appeared on the Black Hat website last week. “Starting with remote exploitation, we will show how to pivot through different pieces of the vehicle’s hardware in order to be able to send messages on the CAN bus to critical electronic control units. We will conclude by showing several CAN messages that affect physical systems of the vehicle.”
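For readers unfamiliar with CAN, the snippet below shows what “sending messages on the CAN bus” looks like at the software level, using the python-can library against a Linux virtual CAN interface. The arbitration ID and payload are placeholders, not Miller and Valasek’s exploit; mapping IDs to real ECU functions is vehicle-specific and was the hard-won part of their research.

```python
# Illustration of sending a single CAN frame with python-can on a Linux
# virtual interface (vcan0). The ID and payload are placeholders; which
# IDs drive which ECU functions varies by vehicle.
import can

# Create the virtual interface first, e.g.:
#   sudo ip link add dev vcan0 type vcan && sudo ip link set up vcan0
bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

message = can.Message(
    arbitration_id=0x123,           # hypothetical ECU address
    data=[0x01, 0x00, 0x00, 0x00],  # hypothetical payload
    is_extended_id=False,
)
bus.send(message)
print("Frame sent on", bus.channel_info)
```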

[…]

Some critics, including Toyota and Ford, argued at the time that a wired-in attack wasn’t exactly a full-blown hack. But Miller and Valasek have been working since then to prove that the same tricks can be pulled off wirelessly. In a talk at Black Hat last year, they published an analysis of 24 automobiles, rating which presented the most potential vulnerabilities to a hacker based on wireless attack points, network architecture, and computerized control of key physical features. In that analysis, the Jeep Cherokee, Infiniti Q50, and Cadillac Escalade were rated as the most hackable vehicles they tested. The overall digital security of a car “depends on the architecture,” Valasek, director of vehicle security research at security firm IOActive, told WIRED last year. “If you hack the radio, can you send messages to the brakes or the steering? And if you can, what can you do with them?”

Ref: Researchers Plan to Demonstrate a Wireless Car Hack This Summer – Wired

Can We Trust Robot Cars to Make Hard Choices?

However, as humans, we also do something else when faced with hard decisions: In particularly ambiguous situations, when no choice is obviously best, we choose and justify our decision with a reason. Most of the time we are not aware of this, but it comes out when we have to make particularly hard decisions.

[…]

Critically, the philosopher Ruth Chang says, when we make our decision, we get to justify it with a reason.

Whether we prefer beige or fluorescent colors, the countryside or a certain set of job activities—these are not objectively measurable. There is no ranking system anywhere that says beige is better than pink and that living in the countryside is better than a certain job. If there were, all humans would be making the same decisions. Instead, we each invent reasons to make our decisions (and when societies do this together, we create our laws, social norms, and ethical systems).

But a machine could never do this…right? You’d be surprised. Google recently announced, for example, that it had built an AI that can learn and master video games. The program isn’t given commands but instead plays games again and again, learning from experience. Some have speculated that such a development would be useful for a robot car.

How might this work?

Instead of a robot car making a random decision, outsourcing its decision, or reverting to pre-programmed values, it could scour the cloud, processing immense amounts of data and patterns based on local laws, past legal rulings, the values of the people and society around it, and the consequences it observes from various other similar decision-making processes over time. In short, robot cars, like humans, would use experience to invent their own reasons.
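As a toy illustration of that idea (not any shipping system), the sketch below scores candidate maneuvers against weighted factors standing in for laws, norms, and observed outcomes, then nudges the weights based on feedback. Every factor name, number, and the update rule itself is invented.

```python
# Toy sketch: choose among maneuvers by weighted factors (stand-ins for
# laws, norms, and observed outcomes), then adjust the weights from
# experience. All names, numbers, and the update rule are invented.
FACTORS = ["legal_compliance", "norm_fit", "observed_outcome"]

weights = {f: 1.0 for f in FACTORS}  # the car's learned "reasons"

def score(option: dict[str, float]) -> float:
    return sum(weights[f] * option[f] for f in FACTORS)

def choose(options: dict[str, dict[str, float]]) -> str:
    return max(options, key=lambda name: score(options[name]))

def learn(option: dict[str, float], outcome_quality: float, lr: float = 0.1) -> None:
    """Reinforce factors that were high when the observed outcome was good."""
    for f in FACTORS:
        weights[f] += lr * outcome_quality * option[f]

maneuvers = {
    "brake":  {"legal_compliance": 1.0, "norm_fit": 0.8, "observed_outcome": 0.6},
    "swerve": {"legal_compliance": 0.4, "norm_fit": 0.5, "observed_outcome": 0.7},
}
picked = choose(maneuvers)
learn(maneuvers[picked], outcome_quality=0.9)  # feedback from the world
```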

What is fascinating about Chang’s talk is that, she says, when humans engage in such a reckoning process—inventing and choosing one’s reasons during hard times—we view it as one of the highest forms of human development.

Asking others to make decisions for us, or leaving life to chance, is a form of drifting. But inventing and choosing our own reasons during hard times is referred to as building one’s character, taking a stand, taking responsibility for one’s own actions, defining who one is, and becoming the author of one’s own life.


Ref: Can We Trust Robot Cars to Make Hard Choices? – Singularity Hub

Google and Elon Musk to Decide What Is Good for Humanity

The recently published Future of Life Institute (FLI) letter “Research Priorities for Robust and Beneficial Artificial Intelligence,” signed by hundreds of AI researchers in addition to Elon Musk and Stephen Hawking, many representing government regulators and some sitting on committees with names like “Presidential Panel on Long Term AI future,” offers a program professing to protect mankind from the threat of “super-intelligent AIs.”

[…]

Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem gets much more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):

– Statements like “There is a broad consensus that AI research is progressing steadily,” even “progressing dramatically” (Google Brain signatories on the FLI website), are just not true. In the last 50 years there has been very little AI progress (more stasis than “steady” advance) and not a single major AI-based breakthrough commercial product, unless you count the iPhone’s infamous Siri. In short, despite the overwhelming media push, AI simply does not work.

– “AI systems must do what we want them to do” begs the question of who “we” is. There are 92 references included in this letter, all of them from computer scientists, AI researchers, and political scientists; there are many references to an approaching, civilization-threatening “singularity” and several references to possibilities for “mind uploading,” but not a single reference from a biologist or a neuroscientist. To call such an approach to the study of intellect “interdisciplinary” is just not credible.

– “Identify research directions that can maximize societal benefits” is outright chilling. Again, who decides whether research is “socially desirable”?

– “AI super-intelligence will not act with human wishes and will threaten humanity” is just a cover justifying an attempted power grab by the AI camp over competing approaches to the study of intellect.

[…]

AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a von Neumann CPU. As Google Brain AI researcher and FLI letter signatory Ilya Sutskever recently told me, “[The] brain absolutely is just a CPU and further study of brain would be a waste of my time.” This is an almost word-for-word repetition of a famous statement Noam Chomsky made decades ago, “predicting” the existence of a language “generator” in the brain.

FLI letter signatories say: do not worry, “we” will allow “good” AI and “identify research directions” in order to maximize societal benefits and eradicate disease and poverty. I believe that it is precisely the newly emerging neuroscience groups that would suffer if the AI camp were allowed to regulate research directions in this field. Why should “evidence” like this allow AI scientists to control what biologists and neuroscientists can and cannot do?

Ref: Google and Elon Musk to Decide What Is Good for Humanity – Wired

What Crazy Dash Cam Videos Teach Us About Self-Driving Cars

The first self-driving cars are expected to hit showrooms within five years. Their autonomous capabilities will be largely limited to highways, where there aren’t things like pedestrians and cyclists to deal with, and you won’t fully cede control. As long as the road is clear, the car’s in charge. But when all that computing power senses trouble, like construction or rough weather, it will have you take the wheel.

The problem is, that switch will not—because it cannot—happen immediately.

The primary benefits of autonomous technology are increased safety and decreased congestion. A secondary upside to letting the car do the driving is that you can focus on crafting pithy tweets, texting, or doing anything else you’d rather be doing. And while any rules the feds concoct will likely prohibit catching Zs behind the wheel, there’s no guarantee that someone won’t try it.

Audi’s testing has shown it takes an average of 3 to 7 seconds—and as long as 10—for a driver to snap to attention and take control, even when prompted by flashing lights and verbal warnings. This means engineers must ensure an autonomous Audi can handle any situation for at least that long. This is not insignificant, because a lot can happen in 10 seconds, especially when a vehicle is moving more than 100 feet per second.
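A quick back-of-the-envelope calculation, using only the article’s own figures, shows why those seconds matter:

```python
# Distance covered, blind, during a 3-10 second handoff at the article's
# figure of ~100 feet per second (roughly 68 mph).
speed_fps = 100  # feet per second
for handoff_s in (3, 7, 10):
    print(f"{handoff_s:>2} s handoff -> {speed_fps * handoff_s:,} ft traveled")
# A 10-second handoff at 100 ft/s covers 1,000 feet, over three football fields.
```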

[…]

The point is, the world’s highways are a crazy, unpredictable place where anything can happen. And they don’t even have the pedestrians and cyclists and buses and taxis and delivery vans and countless other things that make autonomous driving in an urban setting so tricky. So how do you prepare for every situation imaginable?

Ref: What Crazy Dash Cam Videos Teach Us About Self-Driving Cars – Wired

Death by Robot

Ronald Arkin, a roboticist at Georgia Tech, has received grants from the military to study how to equip robots with a set of moral rules. “My main goal is to reduce the number of noncombatant casualties in warfare,” he says. His lab developed what he calls an “ethical adapter” that helps the robot emulate guilt. It’s set in motion when the program detects a difference between how much destruction is expected when using a particular weapon and how much actually occurs. If the difference is too great, the robot’s guilt level reaches a certain threshold, and it stops using the weapon. Arkin says robots sometimes won’t be able to parse more complicated situations in which the right answer isn’t a simple shoot/don’t shoot decision. But on balance, he says, they will make fewer mistakes than humans, whose battlefield behavior is often clouded by panic, confusion or fear.
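The adapter’s core logic, as described, reduces to a comparison and a running total. Here is a minimal sketch under that reading; the scale, threshold, and names are hypothetical, and Arkin’s actual architecture is considerably more elaborate.

```python
# Minimal sketch of the "ethical adapter" as described: compare expected
# with observed destruction, accumulate guilt on overshoot, and disable
# the weapon once guilt crosses a threshold. Scale, threshold, and names
# are hypothetical.
class EthicalAdapter:
    def __init__(self, guilt_threshold: float = 1.0) -> None:
        self.guilt = 0.0
        self.guilt_threshold = guilt_threshold
        self.weapon_enabled = True

    def after_engagement(self, expected_damage: float, observed_damage: float) -> None:
        overshoot = observed_damage - expected_damage
        if overshoot > 0:
            self.guilt += overshoot       # damage beyond prediction adds guilt
        if self.guilt >= self.guilt_threshold:
            self.weapon_enabled = False   # stop using this weapon

adapter = EthicalAdapter(guilt_threshold=0.5)
adapter.after_engagement(expected_damage=0.2, observed_damage=0.9)
print(adapter.weapon_enabled)  # False: guilt 0.7 crossed the 0.5 threshold
```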

A robot’s lack of emotion is precisely what makes many people uncomfortable with the idea of trying to give it human characteristics. Death by robot is an undignified death, Peter Asaro, an affiliate scholar at the Center for Internet and Society at Stanford Law School, said in a speech in May at a United Nations conference on conventional weapons in Geneva. A machine “is not capable of considering the value of those human lives” that it is about to end, he told the group. “And if they’re not capable of that and we allow them to kill people under the law, then we all lose dignity, in the way that if we permit slavery, it’s not just the suffering of those who are slaves but all of humanity that suffers the indignity that there are any slaves at all.” The U.N. will take up questions about the uses of autonomous weapons again in April.


Ref: Death by Robot – NY Times