
The CyberSyn Revolution

The state plays an important role in shaping the relationship between labor and technology, and can push for the design of systems that benefit ordinary people. It can also have the opposite effect. Indeed, the history of computing in the US context has been tightly linked to government command, control, and automation efforts.

But it does not have to be this way. Consider how the Allende government approached the technology-labor question in the design of Project Cybersyn. Allende made raising employment central both to his economic plan and his overall strategy to help Chileans. His government pushed for new forms of worker participation on the shop floor and the integration of worker knowledge in economic decision-making.

This political environment allowed Stafford Beer, the British cybernetician assisting Chile, to view computer technology as a way to empower workers. In 1972, he published a report for the Chilean government that proposed giving Chilean workers, not managers or government technocrats, control of Project Cybersyn. More radically, Beer envisioned a way for Chile’s workers to participate in Cybersyn’s design.

He recommended that the government allow workers — not engineers — to build the models of the state-controlled factories because they were best qualified to understand operations on the shop floor. Workers would thus help design the system that they would then run and use. Allowing workers to use both their heads and their hands would limit how alienated they felt from their labor.

[…]

But Beer showed an ability to envision how computerization in a factory setting might work toward an end other than speed-ups and deskilling — the results of capitalist development that labor scholars such as Harry Braverman witnessed in the United States, where the government did not have the same commitment to actively limiting unemployment or encouraging worker participation.

[…]

We need to be thinking in terms of systems rather than technological quick fixes. Discussions about smart cities, for example, regularly focus on better network infrastructures and the use of information and communication technologies such as integrated sensors, mobile phone apps, and online services. Often, the underlying assumption is that such interventions will automatically improve the quality of urban life by making it easier for residents to access government services and provide city government with data to improve city maintenance.

But this technological determinism doesn’t offer a holistic understanding of how such technologies might negatively impact critical aspects of city life. For example, the sociologist Robert Hollands argues that tech-centered smart-city initiatives might create an influx of technologically literate workers and exacerbate the displacement of other workers. They also might divert city resources to the building of computer infrastructures and away from other important areas of city life.

[…]

We must resist the kind of apolitical “innovation determinism” that sees the creation of the next app, online service, or networked device as the best way to move society forward. Instead, we should push ourselves to think creatively of ways to change the structure of our organizations, political processes, and societies for the better and about how new technologies might contribute to such efforts.

 

Ref: The Cybersyn Revolution – Jacobin

Google and Elon Musk to Decide What Is Good for Humanity

THE RECENTLY PUBLISHED Future of Life Institute (FLI) letter “Research Priorities for Robust and Beneficial Artificial Intelligence,” signed by hundreds of AI researchers in addition to Elon Musk and Stephen Hawking, many of them representing government regulators and some sitting on committees with names like “Presidential Panel on Long Term AI future,” offers a program professing to protect mankind from the threat of “super-intelligent AIs.”

[…]

Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem gets much more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):

– Statements like “There is a broad consensus that AI research is progressing steadily,” or even “progressing dramatically” (Google Brain signatories on the FLI web site), are just not true. In the last 50 years there has been very little AI progress (more stasis-like than “steady”) and not a single major AI-based breakthrough commercial product, unless you count the iPhone’s infamous Siri. In short, despite the overwhelming media push, AI simply does not work.

– “AI systems must do what we want them to do” begs the question of who “we” are. There are 92 references included in this letter, all of them from computer, AI, and political scientists; there are many references to an approaching, civilization-threatening “singularity” and several references to possibilities for “mind uploading,” but not a single reference from a biologist or a neural scientist. To call such an approach to the study of intellect “interdisciplinary” is just not credible.

– “Identify research directions that can maximize societal benefits” is outright chilling. Again, who decides whether research is “socially desirable?”

– “AI super-intelligence will not act in accordance with human wishes and will threaten humanity” is just a cover justifying the AI group’s attempted power grab over competing approaches to the study of intellect.

[…]

AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a von Neumann CPU. As Google Brain AI researcher and FLI letter signatory Ilya Sutskever recently told me, “[The] brain absolutely is just a CPU and further study of the brain would be a waste of my time.” This is an almost word-for-word repetition of a famous statement Noam Chomsky made decades ago, “predicting” the existence of a language “generator” in the brain.

FLI letter signatories say: Do not worry, “we” will allow “good” AI and “identify research directions” in order to maximize societal benefits and eradicate disease and poverty. I believe that it is precisely the newly emerging neural science groups that would suffer if the AI camp were allowed to regulate research directions in this field. Why should “evidence” like this allow AI scientists to control what biologists and neural scientists can and cannot do?

Ref: Google and Elon Musk to Decide What Is Good for Humanity – Wired

CyberSyn – The Origin of the Big Data Nation

 

That was a challenge: the Chilean government was running low on cash and supplies; the United States, dismayed by Allende’s nationalization campaign, was doing its best to cut Chile off. And so a certain amount of improvisation was necessary. Four screens could show hundreds of pictures and figures at the touch of a button, delivering historical and statistical information about production—the Datafeed—but the screen displays had to be drawn (and redrawn) by hand, a job performed by four young female graphic designers. […] In addition to the Datafeed, there was a screen that simulated the future state of the Chilean economy under various conditions. Before you set prices, established production quotas, or shifted petroleum allocations, you could see how your decision would play out.

One wall was reserved for Project Cyberfolk, an ambitious effort to track the real-time happiness of the entire Chilean nation in response to decisions made in the op room. Beer built a device that would enable the country’s citizens, from their living rooms, to move a pointer on a voltmeter-like dial that indicated moods ranging from extreme unhappiness to complete bliss. The plan was to connect these devices to a network—it would ride on the existing TV networks—so that the total national happiness at any moment in time could be determined. The algedonic meter, as the device was called (from the Greek algos, “pain,” and hedone, “pleasure”), would measure only raw pleasure-or-pain reactions to show whether government policies were working.
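Strip away the 1970s hardware and Cyberfolk is distributed signal aggregation. A minimal sketch of the idea in Python; everything about the interface is hypothetical, since Beer’s meters were analog dials riding on TV circuits, not software:

```python
import random
import statistics

def read_algedonic_meter() -> float:
    """Simulate one household dial: 0.0 is extreme unhappiness (algos),
    1.0 is complete bliss (hedone)."""
    return random.random()

def national_happiness(n_households: int = 1000) -> float:
    """Poll every connected meter and reduce the readings to the single
    real-time figure the op room was meant to watch."""
    readings = [read_algedonic_meter() for _ in range(n_households)]
    return statistics.mean(readings)

print(f"National happiness right now: {national_happiness():.2f}")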

[…]

“The on-line control computer ought to be sensorily coupled to events in real time,” Beer argued in a 1964 lecture that presaged the arrival of smart, net-connected devices—the so-called Internet of Things. Given early notice, the workers could probably solve most of their own problems. Everyone would gain from computers: workers would enjoy more autonomy while managers would find the time for long-term planning. For Allende, this was good socialism. For Beer, this was good cybernetics.

[…]

Suppose that the state planners wanted the plant to expand its cooking capacity by twenty per cent. The modelling would determine whether the target was plausible. Say the existing boiler was used at ninety per cent of capacity, and increasing the amount of canned fruit would mean exceeding that capacity by fifty per cent. With these figures, you could generate a statistical profile for the boiler you’d need. Unrealistic production goals, overused resources, and unwise investment decisions could be dealt with quickly. “It is perfectly possible . . . to capture data at source in real time, and to process them instantly,” Beer later noted. “But we do not have the machinery for such instant data capture, nor do we have the sophisticated computer programs that would know what to do with such a plethora of information if we had it.”
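The arithmetic Beer describes is simple enough to sketch. A toy capacity check, with made-up units and the article’s figures used only as illustrative inputs:

```python
def projected_load(current_capacity: float,
                   utilization: float,
                   target_increase: float) -> float:
    """Steam demand after raising output, in the same units as capacity.

    utilization: fraction of capacity used today (e.g. 0.90)
    target_increase: desired growth in output (e.g. 0.20 for +20%)
    """
    return current_capacity * utilization * (1 + target_increase)

capacity = 100.0   # arbitrary units; a stand-in for the real boiler's rating
load = projected_load(capacity, utilization=0.90, target_increase=0.20)
if load > capacity:
    print(f"Target infeasible with this boiler: {load:.0f} needed, {capacity:.0f} available.")
else:
    print(f"Target fits: {load:.0f} of {capacity:.0f} units.")
```

With these numbers the check fails, which is the point: the model flags the unrealistic target before resources are committed.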

Today, sensor-equipped boilers and tin cans report their data automatically, and in real time. And, just as Beer thought, data about our past behaviors can yield useful predictions. Amazon recently obtained a patent for “anticipatory shipping”—a technology for shipping products before orders have even been placed. Walmart has long known that sales of strawberry Pop-Tarts tend to skyrocket before hurricanes; in the spirit of computer-aided homeostasis, the company knows that it’s better to restock its shelves than to ask why.

[…]

Flowers suggests that real-time data analysis is allowing city agencies to operate in a cybernetic manner. Consider the allocation of building inspectors in a city like New York. If the city authorities know which buildings have caught fire in the past and if they have a deep profile for each such building—if, for example, they know that such buildings usually feature illegal conversions, and their owners are behind on paying property taxes or have a history of mortgage foreclosures—they can predict which buildings are likely to catch fire in the future and decide where inspectors should go first.
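A minimal sketch of this kind of inspection triage, with entirely hypothetical weights, features, and addresses standing in for New York’s actual models:

```python
from dataclasses import dataclass

@dataclass
class Building:
    address: str
    illegal_conversion: bool
    tax_arrears: bool
    foreclosure_history: bool

# Hypothetical weights; a real system would fit them to past fire data.
WEIGHTS = (3.0, 1.5, 1.0)

def risk_score(b: Building) -> float:
    """Crude additive score over the profile features the article lists."""
    features = (b.illegal_conversion, b.tax_arrears, b.foreclosure_history)
    return sum(w * f for w, f in zip(WEIGHTS, features))

buildings = [
    Building("12 Mott St", True, True, False),    # addresses are made up
    Building("48 Ludlow St", False, False, True),
]
# Inspectors go to the highest-scoring buildings first.
for b in sorted(buildings, key=risk_score, reverse=True):
    print(f"{b.address}: {risk_score(b):.1f}")
```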

[…]

The aim is to replace rigid rules issued by out-of-touch politicians with fluid and personalized feedback loops generated by gadget-wielding customers. Reputation becomes the new regulation: why pass laws banning taxi-drivers from dumping sandwich wrappers on the back seat if the market can quickly punish such behavior with a one-star rating? It’s a far cry from Beer’s socialist utopia, but it relies on the same cybernetic principle: collect as much relevant data from as many sources as possible, analyze them in real time, and make an optimal decision based on the current circumstances rather than on some idealized projection.

[…]

It’s suggestive that Nest—the much admired smart thermostat, which senses whether you’re home and lets you adjust temperatures remotely—now belongs to Google, not Apple. Created by engineers who once worked on the iPod, it has a slick design, but most of its functionality (like its ability to learn and adjust to your favorite temperature by observing your behavior) comes from analyzing data, Google’s bread and butter. The proliferation of sensors with Internet connectivity provides a homeostatic solution to countless predicaments. Google Now, the popular smartphone app, can perpetually monitor us and (like Big Mother, rather than like Big Brother) nudge us to do the right thing—exercise, say, or take the umbrella.

Companies like Uber, meanwhile, insure that the market reaches a homeostatic equilibrium by monitoring supply and demand for transportation. Google recently acquired the manufacturer of a high-tech spoon—the rare gadget that is both smart and useful—to compensate for the purpose tremors that captivated Norbert Wiener. (There is also a smart fork that vibrates when you are eating too fast; “smart” is no guarantee against “dumb.”) The ubiquity of sensors in our cities can shift behavior: a new smart parking system in Madrid charges different rates depending on the year and the make of the car, punishing drivers of old, pollution-prone models. Helsinki’s transportation board has released an Uber-like app, which, instead of dispatching an individual car, coördinates multiple requests for nearby destinations, pools passengers, and allows them to share a much cheaper ride on a minibus.
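The Madrid parking rule is easy to caricature in code. The surcharge values below are invented; only the shape of the rule, that older and dirtier cars pay more, comes from the article:

```python
def hourly_rate(base: float, model_year: int, high_emission: bool) -> float:
    """Adjust a base parking rate by how polluting the car is.

    The surcharges are hypothetical; Madrid's real tariff table differs.
    """
    rate = base
    if model_year < 2006:
        rate *= 1.20      # older, pollution-prone models pay 20% more
    if high_emission:
        rate *= 1.10      # high-emission makes pay a further 10%
    return round(rate, 2)

print(hourly_rate(2.00, model_year=2001, high_emission=True))   # 2.64
print(hourly_rate(2.00, model_year=2014, high_emission=False))  # 2.0
```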

[…]

For all its utopianism and scientism, its algedonic meters and hand-drawn graphs, Project Cybersyn got some aspects of its politics right: it started with the needs of the citizens and went from there. The problem with today’s digital utopianism is that it typically starts with a PowerPoint slide in a venture capitalist’s pitch deck. As citizens in an era of Datafeed, we still haven’t figured out how to manage our way to happiness. But there’s a lot of money to be made in selling us the dials.

 

Ref: The Planning Machine – The New Yorker

 

Algorithmic Regulation

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.
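The Germ Alarm is, in effect, a three-state machine. A toy reconstruction based only on the behavior described above, not on any knowledge of the actual product’s firmware:

```python
class GermAlarm:
    """Toy state machine: IDLE -> OCCUPIED -> ALARM -> IDLE."""

    def __init__(self):
        self.state = "IDLE"

    def door_closed(self):
        if self.state == "IDLE":
            self.state = "OCCUPIED"

    def door_opened(self):
        if self.state == "OCCUPIED":
            self.state = "ALARM"      # ringing starts as the user leaves

    def soap_dispensed(self):
        if self.state == "ALARM":
            self.state = "IDLE"       # only the soap button silences it

alarm = GermAlarm()
alarm.door_closed()
alarm.door_opened()
assert alarm.state == "ALARM"         # left the stall without washing
alarm.soap_dispensed()
assert alarm.state == "IDLE"
```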

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

[…]

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
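A toy illustration of the principle, not Google’s actual filter: each user report updates word counts, and classification leans on whatever the users have taught the system so far.

```python
from collections import Counter

class FeedbackSpamFilter:
    """Learns from user reports instead of fixed rules (a sketch, not Gmail)."""

    def __init__(self):
        self.spam = Counter()
        self.ham = Counter()

    def report(self, message: str, is_spam: bool) -> None:
        """Called every time a user marks or unmarks a message as spam."""
        (self.spam if is_spam else self.ham).update(message.lower().split())

    def looks_spammy(self, message: str) -> bool:
        """Vote word by word on which pile user feedback has put it in."""
        votes = sum(1 if self.spam[w] > self.ham[w] else -1
                    for w in message.lower().split())
        return votes > 0

f = FeedbackSpamFilter()
f.report("cheap pills buy now", is_spam=True)
f.report("lunch meeting moved to noon", is_spam=False)
print(f.looks_spammy("buy cheap pills now"))   # True: learned, not hand-coded
```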

[…]

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”), hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.
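Ashby’s idea fits in a few lines: while the essential variable stays within bounds, the system keeps its current parameters; when it escapes, the system makes random step-changes to its own configuration until stability returns. A toy version, with all numbers arbitrary:

```python
import random

def ultrastable(disturbance, steps: int = 10_000, bound: float = 1.0) -> float:
    """Toy Ashby loop: keep the parameter while the essential variable
    stays in bounds; re-randomize it whenever the variable escapes."""
    gain = random.uniform(-2.0, 2.0)   # the system's adjustable parameter
    x = 0.0                            # the essential variable
    for _ in range(steps):
        x = gain * x + disturbance()
        if abs(x) > bound:
            gain = random.uniform(-2.0, 2.0)  # step-change to a new configuration
            x = 0.0
    return gain

gain = ultrastable(lambda: random.uniform(-0.1, 0.1))
print(f"Settled on gain {gain:.2f}")   # almost always |gain| < 1, i.e. stable
```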

[…]

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

[…]

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

Campaign to Stop Killer Robots

 

Over the past decade, the expanded use of unmanned armed vehicles has dramatically changed warfare, bringing new humanitarian and legal challenges. Now rapid advances in technology are resulting in efforts to develop fully autonomous weapons. These robotic weapons would be able to choose and fire on targets on their own, without any human intervention. This capability would pose a fundamental challenge to the protection of civilians and to compliance with international human rights and humanitarian law.

Several nations with high-tech militaries, including China, Israel, Russia, the United Kingdom, and the United States, are moving toward systems that would give greater combat autonomy to machines. If one or more chooses to deploy fully autonomous weapons, a large step beyond remote-controlled armed drones, others may feel compelled to abandon policies of restraint, leading to a robotic arms race. Agreement is needed now to establish controls on these weapons before investments, technological momentum, and new military doctrine make it difficult to change course.

Allowing life-or-death decisions to be made by machines crosses a fundamental moral line. Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war.

Replacing human troops with machines could make the decision to go to war easier, which would shift the burden of armed conflict further onto civilians. The use of fully autonomous weapons would also create an accountability gap, as there is no clarity about who would be legally responsible for a robot’s actions: the commander, the programmer, the manufacturer, or the robot itself? Without accountability, these parties would have less incentive to ensure that robots did not endanger civilians, and victims would be denied the satisfaction of seeing someone punished for the harm they suffered.

 

Ref: Campaign to Stop Killer Robots

Machine That Predicts the Future

 

An Iranian scientist has claimed to have invented a ‘time machine’ that can predict the future of any individual with 98 per cent accuracy.

Serial inventor Ali Razeghi registered “The Aryayek Time Traveling Machine” with Iran’s state-run Centre for Strategic Inventions, The Telegraph reported.

According to a Fars news agency report, Mr Razeghi, 27, claims the machine uses algorithms to produce a print-out of the details of any individual’s life between five and eight years into their future.

Mr Razeghi, quoted in the Telegraph, said: “My invention easily fits into the size of a personal computer case and can predict details of the next 5-8 years of the life of its users. It will not take you into the future, it will bring the future to you.”

Razeghi is the managing director of Iran’s Centre for Strategic Inventions and reportedly has another 179 inventions registered in his name.

He claims the invention could help the government predict military conflict and forecast fluctuations in the value of foreign currencies and oil prices.

According to Mr Razeghi his latest project has been criticised by his friends and family for “trying to play God”.

 

Ref: Iranian scientist claims to have invented ‘Time Machine’ that can predict the future – The Independent (via DarkGovernment)

Algorithmic Rape Jokes on Amazon

 

A t-shirt company called Solid Gold Bomb was caught selling shirts with the slogan “KEEP CALM and RAPE A LOT” on them. They also sold shirts like “KEEP CALM and CHOKE HER” and “KEEP CALM and PUNCH HER”. The Internet—especially the UK Internet—exploded.

How did this happen?

“Algorithms!”

[…]

Pete Ashton argues that—because the jokes were generated by a misbehaving script—“as mistakes go it’s a fairly excusable one, assuming they now act on it”. He suggests that the reason people got so upset was a lack of digital literacy. I suggest that the reason people got upset was that a company’s shoddy QA practices allowed a rape joke to go live.

Anyone who’s worked with software should know that the actual typing of code is a relatively small part of the overall programming work. Designing the program before you start coding and debugging it after you’ve created it make up the bulk of the job.

Generative programs are force multipliers. Small initial decisions can have massive consequences. The greater your reach, the greater your responsibility to manage your output. When Facebook makes an error that affects 0.1% of users, it means 1 million people got fucked up.

‘We didn’t cause a rape joke to happen, we allowed a rape joke to happen,’ is not a compelling excuse. It betrays a lack of digital literacy.
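Solid Gold Bomb’s actual script was never published, but the shape of the failure is easy to guess: a generative one-liner with no review step between word list and storefront. A hypothetical reconstruction, including the QA gate the company evidently lacked:

```python
from itertools import product

verbs = ["HUG", "KISS", "RAPE", "CHOKE", "PUNCH"]   # an unvetted word list
objects = ["HER", "A LOT", "THEM"]

def slogans():
    """The force multiplier: a few list entries become every combination."""
    return [f"KEEP CALM and {v} {o}" for v, o in product(verbs, objects)]

# The missing QA step: review the word list before anything goes live.
BLOCKED = {"RAPE", "CHOKE", "PUNCH", "HIT"}

def passes_review(slogan: str) -> bool:
    return not any(word in BLOCKED for word in slogan.split())

for s in slogans():
    if passes_review(s):
        print(s)           # only vetted combinations reach the storefront
```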

Interesting comments from people:

People, enough of the ‘A big algorithm did it and ran away’ explanations (e.g. http://iam.peteashton.com/keep-calm-rape-tshirt-amazon/) – algorithms have politics too – @gsvoss

 

I’m REALLY tired of the “it’s the computer program” excuse for inexcusable behaviour. Behind every computer algorithm, a human being is sitting there programming. Use your “real” brains, you idiots, and join the real world. There are no excuses for this. None. Period. – jen

 

Not good enough, I’m afraid. The same company are still selling a t-shirt that says ‘Keep calm and hit her’.
No computer generated that. Why, for example, doesn’t it say ‘hit him’?
Because someone ran an eye over it to ensure it was sufficiently ‘funny’ I would say.
If they were genuinely horrified by what their algorithm produced that t-shirt would be gone too. Seems to me they’re just a bunch of sad gits. – Ita Ryan

 

 

Ref: Algorithmic Rape Jokes in the Library of Babel – QuietBabylon (via algopop)

Lie Detector at the U.S. Border

 

Since September 11, 2001, federal agencies have spent millions of dollars on research designed to detect deceptive behavior in travelers passing through US airports and border crossings in the hope of catching terrorists. Security personnel have been trained—and technology has been devised—to identify, as an air transport trade association representative once put it, “bad people and not just bad objects.” Yet for all this investment and the decades of research that preceded it, researchers continue to struggle with a profound scientific question: How can you tell if someone is lying?

That problem is so complex that no one, including the engineers and psychologists developing machines to do it, can be certain if any technology will work. “It fits with our notion of justice, somehow, that liars can’t really get away with it,” says Maria Hartwig, a social psychologist at John Jay College of Criminal Justice who cowrote a recent report on deceit detection at airports and border crossings. The problem is, as Hartwig explains it, that all the science says people are really good at lying, and it’s incredibly hard to tell when we’re doing it.

 
 

Ref: Deception Is Futile When Big Brother’s Lie Detector Turns Its Eyes on You – Wired

Did Google Earth Error Send Murderer to Wrong Address?

 

Sometimes, even after a murder conviction, some see reasonable doubt that the conviction was a righteous one.

Such is the case in the murder of Dennis and Merna Koula in La Crosse, Wisconsin, a quiet community.

[…]

A neighbor of the Koulas, Steve Burgess, freely admitted that he had received death threats. He was the president of a local bank.

And, as the CBS News investigation indicated (embedded, but there are some gaps in the audio), if you use Google Earth to locate Burgess’ house, you get a surprise.

“48 Hours” correspondent Peter Van Sant said: “In fact, when you Google Earth Steve Burgess’ address…the zoom into the house goes to the Koulas’ house, not to Steve Burgess’ house.”

Police say they discounted the threatening caller: they located him, and he had an alibi. But could that individual have hired someone else to do the dirty work, a person who used Google Earth and went to the wrong house?

 

Ref: Did Google Earth error send murderer to wrong address? – CNET

 

Machines à Gouverner

In 1948 a Dominican friar, Père Dubarle, wrote a review of Norbert Wiener’s book Cybernetics. In it, he introduced a very interesting phrase, “machines à gouverner.” Père Dubarle warns us against the potential risks of placing blind faith in new sciences (machines and computers, in this case), because human processes cannot be predicted with “cold mathematics.”

One of the most fascinating prospects thus opened is that of the rational conduct of human affairs, and in particular of those which interest communities and seem to present a certain statistical regularity, such as the human phenomena of the development of opinion. Can’t one imagine a machine to collect this or that type of information, as for example information on production and the market; and then to determine as a function of the average psychology of human beings, and of the quantities which it is possible to measure in a determined instance, what the most probable development of the situation might be? Can’t one even conceive a State apparatus covering all systems of political decisions, either under a regime of many states distributed over the earth, or under the apparently much more simple regime of a human government of this planet? At present nothing prevents our thinking of this. We may dream of the time when the machine à gouverner may come to supply – whether for good or evil – the present obvious inadequacy of the brain when the latter is concerned with the customary machinery of politics.

At all events, human realities do not admit a sharp and certain determination, as numerical data of computation do. They only admit the determination of their probable values. A machine to treat these processes, and the problems which they put, must therefore undertake the sort of probabilistic, rather than deterministic, thought, such as is exhibited for example in modern computing machines. This makes its task more complicated, but does not render it impossible. The prediction machine which determines the efficacy of anti-aircraft fire is an example of this. Theoretically, time prediction is not impossible; neither is the determination of the most favorable decision, at least within certain limits. The possibility of playing machines such as the chess-playing machine is considered to establish this. For the human processes which constitute the object of government may be assimilated to games in the sense in which von Neumann has studied them mathematically. Even though these games have an incomplete set of rules, there are other games with a very large number of players, where the data are extremely complex. The machines à gouverner will define the State as the best-informed player at each particular level; and the State is the only supreme co-ordinator of all partial decisions. These are enormous privileges; if they are acquired scientifically, they will permit the State under all circumstances to beat every player of a human game other than itself by offering this dilemma: either immediate ruin, or planned co-operation. This will be the consequence of the game itself without outside violence. The lovers of the best of worlds have something indeed to dream of!

Despite all this, and perhaps fortunately, the machine à gouverner is not ready for a very near tomorrow. For outside of the very serious problems which the volume of information to be collected and to be treated rapidly still put, the problems of the stability of prediction remain beyond what we can seriously dream of controlling. For human processes are assimilable to games with incompletely defined rules, and above all, with the rules themselves functions of the time. The variation of the rules depends both on the effective detail of the situations engendered by the game itself, and on the system of psychological reactions of the players in the face of the results obtained at each instant.

It may even be more rapid than these. A very good example of this seems to be given by what happened to the Gallup Poll in the 1948 election. All this not only tends to complicate the degree of the factors which influence prediction, but perhaps to make radically sterile the mechanical manipulation of human situations. As far as one can judge, only two conditions here can guarantee stabilization in the mathematical sense of the term. These are, on the one hand, a sufficient ignorance on the part of the mass of the players exploited by a skilled player, who moreover may plan a method of paralyzing the consciousness of the masses; or on the other, sufficient good-will to allow one, for the sake of the stability of the game, to refer his decisions to one or a few players of the game who have arbitrary privileges. This is a hard lesson of cold mathematics, but it throws a certain light on the adventure of our century: hesitation between an indefinite turbulence of human affairs and the rise of a prodigious Leviathan. In comparison with this, Hobbes’ Leviathan was nothing but a pleasant joke. We are running the risk nowadays of a great World State, where deliberate and conscious primitive injustice may be the only possible condition for the statistical happiness of the masses: a world worse than hell for every clear mind. Perhaps it would not be a bad idea for the teams at present creating cybernetics to add to their cadre of technicians, who have come from all horizons of science, some serious anthropologists, and perhaps a philosopher who has some curiosity as to world matters.

 

Ref: L’avènement de l’informatique et de la cybernétique. Chronique d’une rupture annoncée – Revue Futuribles