
CyberSyn – The origin of the Big Data Nation

 

That was a challenge: the Chilean government was running low on cash and supplies; the United States, dismayed by Allende’s nationalization campaign, was doing its best to cut Chile off. And so a certain amount of improvisation was necessary. Four screens could show hundreds of pictures and figures at the touch of a button, delivering historical and statistical information about production—the Datafeed—but the screen displays had to be drawn (and redrawn) by hand, a job performed by four young female graphic designers. […] In addition to the Datafeed, there was a screen that simulated the future state of the Chilean economy under various conditions. Before you set prices, established production quotas, or shifted petroleum allocations, you could see how your decision would play out.

One wall was reserved for Project Cyberfolk, an ambitious effort to track the real-time happiness of the entire Chilean nation in response to decisions made in the op room. Beer built a device that would enable the country’s citizens, from their living rooms, to move a pointer on a voltmeter-like dial that indicated moods ranging from extreme unhappiness to complete bliss. The plan was to connect these devices to a network—it would ride on the existing TV networks—so that the total national happiness at any moment in time could be determined. The algedonic meter, as the device was called (from the Greek algos, “pain,” and hedone, “pleasure”), would measure only raw pleasure-or-pain reactions to show whether government policies were working.
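Read mechanically, Project Cyberfolk is just a real-time aggregation loop: every living-room dial reports a pleasure-or-pain reading, and the op room watches the running national figure. A minimal sketch of that aggregation in Python, purely illustrative (the real meters were analogue devices riding on TV infrastructure, and every name below is hypothetical):

```python
from statistics import mean

def national_mood(dial_readings):
    """Aggregate raw pleasure-or-pain readings (0.0 = extreme unhappiness,
    1.0 = complete bliss) into a single national happiness figure."""
    return mean(dial_readings)

# hypothetical sample of readings streamed in from living-room dials
readings = [0.2, 0.7, 0.9, 0.4, 0.55]
print(f"national happiness right now: {national_mood(readings):.2f}")
```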

[…]

“The on-line control computer ought to be sensorily coupled to events in real time,” Beer argued in a 1964 lecture that presaged the arrival of smart, net-connected devices—the so-called Internet of Things. Given early notice, the workers could probably solve most of their own problems. Everyone would gain from computers: workers would enjoy more autonomy while managers would find the time for long-term planning. For Allende, this was good socialism. For Beer, this was good cybernetics.

[…]

Suppose that the state planners wanted the plant to expand its cooking capacity by twenty per cent. The modelling would determine whether the target was plausible. Say the existing boiler was used at ninety per cent of capacity, and increasing the amount of canned fruit would mean exceeding that capacity by fifty per cent. With these figures, you could generate a statistical profile for the boiler you’d need. Unrealistic production goals, overused resources, and unwise investment decisions could be dealt with quickly. “It is perfectly possible . . . to capture data at source in real time, and to process them instantly,” Beer later noted. “But we do not have the machinery for such instant data capture, nor do we have the sophisticated computer programs that would know what to do with such a plethora of information if we had it.”
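The arithmetic behind that boiler profile is simple enough to sketch. Assuming the figures in the passage (a boiler running at ninety per cent of capacity, and a new quota that would push demand fifty per cent past that capacity), a few lines of Python show the kind of check a planner might run; the units and function names are invented for illustration, not Cybersyn's actual model:

```python
def boiler_profile(capacity, utilisation, demand_vs_capacity):
    """Compare the current load and a projected demand (expressed as a
    fraction of the existing boiler's capacity, so 1.5 = fifty per cent
    over) against what the boiler can actually deliver."""
    current_load = capacity * utilisation
    projected_demand = capacity * demand_vs_capacity
    shortfall = projected_demand - capacity
    return current_load, projected_demand, shortfall

# illustrative figures from the passage, in arbitrary units of steam output
load, demand, shortfall = boiler_profile(capacity=100.0,
                                         utilisation=0.90,
                                         demand_vs_capacity=1.50)
print(f"current load {load:.0f}, projected demand {demand:.0f}, "
      f"shortfall {shortfall:.0f}: a bigger boiler is needed")
```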

Today, sensor-equipped boilers and tin cans report their data automatically, and in real time. And, just as Beer thought, data about our past behaviors can yield useful predictions. Amazon recently obtained a patent for “anticipatory shipping”—a technology for shipping products before orders have even been placed. Walmart has long known that sales of strawberry Pop-Tarts tend to skyrocket before hurricanes; in the spirit of computer-aided homeostasis, the company knows that it’s better to restock its shelves than to ask why.

[…]

Flowers suggests that real-time data analysis is allowing city agencies to operate in a cybernetic manner. Consider the allocation of building inspectors in a city like New York. If the city authorities know which buildings have caught fire in the past and if they have a deep profile for each such building—if, for example, they know that such buildings usually feature illegal conversions, and their owners are behind on paying property taxes or have a history of mortgage foreclosures—they can predict which buildings are likely to catch fire in the future and decide where inspectors should go first.
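A minimal sketch of that kind of triage, assuming nothing about the city's actual model: give each building a score from the risk factors the passage lists and send inspectors to the highest-scoring addresses first. The weights and field names below are hypothetical.

```python
# Hypothetical weights for the risk factors named in the passage.
WEIGHTS = {
    "illegal_conversion": 3.0,
    "tax_arrears": 2.0,
    "foreclosure_history": 1.5,
}

def risk_score(building):
    """Sum the weights of every risk factor the building exhibits."""
    return sum(w for factor, w in WEIGHTS.items() if building.get(factor))

buildings = [
    {"address": "12 Example St", "illegal_conversion": True, "tax_arrears": True},
    {"address": "48 Sample Ave", "foreclosure_history": True},
]

# inspectors go to the highest-scoring buildings first
for b in sorted(buildings, key=risk_score, reverse=True):
    print(b["address"], risk_score(b))
```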

[…]

The aim is to replace rigid rules issued by out-of-touch politicians with fluid and personalized feedback loops generated by gadget-wielding customers. Reputation becomes the new regulation: why pass laws banning taxi-drivers from dumping sandwich wrappers on the back seat if the market can quickly punish such behavior with a one-star rating? It’s a far cry from Beer’s socialist utopia, but it relies on the same cybernetic principle: collect as much relevant data from as many sources as possible, analyze them in real time, and make an optimal decision based on the current circumstances rather than on some idealized projection.

[…]

It’s suggestive that Nest—the much admired smart thermostat, which senses whether you’re home and lets you adjust temperatures remotely—now belongs to Google, not Apple. Created by engineers who once worked on the iPod, it has a slick design, but most of its functionality (like its ability to learn and adjust to your favorite temperature by observing your behavior) comes from analyzing data, Google’s bread and butter. The proliferation of sensors with Internet connectivity provides a homeostatic solution to countless predicaments. Google Now, the popular smartphone app, can perpetually monitor us and (like Big Mother, rather than like Big Brother) nudge us to do the right thing—exercise, say, or take the umbrella.

Companies like Uber, meanwhile, insure that the market reaches a homeostatic equilibrium by monitoring supply and demand for transportation. Google recently acquired the manufacturer of a high-tech spoon—the rare gadget that is both smart and useful—to compensate for the purpose tremors that captivated Norbert Wiener. (There is also a smart fork that vibrates when you are eating too fast; “smart” is no guarantee against “dumb.”) The ubiquity of sensors in our cities can shift behavior: a new smart parking system in Madrid charges different rates depending on the year and the make of the car, punishing drivers of old, pollution-prone models. Helsinki’s transportation board has released an Uber-like app, which, instead of dispatching an individual car, coördinates multiple requests for nearby destinations, pools passengers, and allows them to share a much cheaper ride on a minibus.
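The Madrid scheme, at bottom, is a pricing rule keyed to the vehicle's record: older, pollution-prone models pay a surcharge, cleaner ones get a discount. A toy version, with cut-offs and multipliers invented for illustration rather than taken from Madrid's actual tariff:

```python
def parking_rate(base_rate_eur, model_year, is_low_emission):
    """Scale an hourly parking rate by the car's age and emissions class.
    Thresholds and multipliers are illustrative only."""
    if is_low_emission:
        return base_rate_eur * 0.8   # discount for cleaner cars
    if model_year < 2006:
        return base_rate_eur * 1.2   # surcharge for old, pollution-prone models
    return base_rate_eur

print(parking_rate(2.0, model_year=2002, is_low_emission=False))  # 2.4
print(parking_rate(2.0, model_year=2015, is_low_emission=True))   # 1.6
```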

[…]

For all its utopianism and scientism, its algedonic meters and hand-drawn graphs, Project Cybersyn got some aspects of its politics right: it started with the needs of the citizens and went from there. The problem with today’s digital utopianism is that it typically starts with a PowerPoint slide in a venture capitalist’s pitch deck. As citizens in an era of Datafeed, we still haven’t figured out how to manage our way to happiness. But there’s a lot of money to be made in selling us the dials.

 

Ref: The Planning Machine – The New Yorker

 

De Arte Combinatoria

The Dissertatio de arte combinatoria is an early work by Gottfried Leibniz, published in 1666 in Leipzig. It is an extended version of his doctoral dissertation, written before the author had seriously undertaken the study of mathematics. The booklet was reissued without Leibniz’s consent in 1690, which prompted him to publish a brief explanatory notice in the Acta Eruditorum. During the following years he repeatedly expressed regret that it was still being circulated, as he considered it immature. Nevertheless, it was a very original work, and it gave its author his first glimpse of fame among the scholars of his time.

The main idea behind the text is that of an alphabet of human thought, which is attributed to Descartes. All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters. All truths may be expressed as appropriate combinations of concepts, which can in turn be decomposed into simple ideas, rendering the analysis much easier. Therefore, this alphabet would provide a logic of invention, opposed to that of demonstration which was known so far. Since all sentences are composed of a subject and a predicate, one might either find all the predicates appropriate to a given subject, or find all the subjects appropriate to a given predicate.
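Leibniz's programme is easy to caricature in modern terms: fix a small alphabet of primitive concepts and enumerate their combinations mechanically, the way one enumerates words from letters. A sketch of that enumeration, with the "primitives" invented for illustration:

```python
from itertools import combinations

# A hypothetical alphabet of "simple concepts" in Leibniz's sense.
primitives = ["substance", "extension", "thought", "motion"]

# On this picture, every composite concept is just a combination of primitives.
for size in range(2, len(primitives) + 1):
    for combo in combinations(primitives, size):
        print(" + ".join(combo))
```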

The first examples of the use of his ars combinatoria are taken from law, the musical registry of an organ, and the Aristotelian theory of the generation of the elements from the four primary qualities. But the philosophical applications are of greater importance. He also cites Hobbes’s idea that all reasoning is just a computation.

Algorithms <-> Taylorism

By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.
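Read literally, Taylor's procedure is an optimisation loop: split the job into steps, time several candidate ways of performing each step, and keep the fastest. A tiny sketch of that "one best method" search, with the job, methods, and timings made up for illustration:

```python
# Hypothetical time-and-motion data: seconds measured for each way of
# performing each step of a job.
timings = {
    "load pig iron":  {"method A": 42.0, "method B": 35.5, "method C": 39.0},
    "carry to wagon": {"method A": 61.0, "method B": 58.5},
    "stack":          {"method A": 20.0, "method B": 24.0},
}

# Taylor's "one best method": for every step, keep the fastest measured way.
best_method = {step: min(ways, key=ways.get) for step, ways in timings.items()}
cycle_time = sum(min(ways.values()) for ways in timings.values())

print(best_method)                    # the prescribed "algorithm" for the job
print(f"prescribed cycle time: {cycle_time:.1f} s")
```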

More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”

Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”

 

Ref: Is Google Making Us Stupid? – The Atlantic

Computer H14

 

 

By the early ’60s, Byrne explains, companies had grown to depend on enormous IBM mainframe computers, and they were forced to install a new mainframe at each and every one of their branch offices. AT&T aimed to replace all those duplicate machines with a system that would allow a single mainframe to communicate with several remote locations via high-speed data connections. Ma Bell already had a near monopoly on voice communications, and this was its next conquest.

The rub was that many people feared a robopocalypse — a dystopian world where machines made man obsolete. Ma Bell also needed to reassure people that its machine-to-machine communication wouldn’t take over the planet. And what better way to ease their fears than Computer H14?

[…]

Luckily, H14 diagnoses the problem — a lapse in data communications and a missing circuit — and he provides a set of “flawless” recommendations that result in increased productivity, improved performance, and gobs of extra time for Charlie Magnetico — played by Juhl — to think all sorts of big thoughts. In short, AT&T’s machine-to-machine communications save the day.

But in the end, this film conveys much the same message as the one that came before it: Machines can make life easier, but not without the help of humans. H14’s recommendations are flawless only until one of those missiles nearly lands on his head.

 

Ref: Tech Time Warp of the Week: Jim Henson’s Muppet Computer, 1963 – Wired

’80s IBM Watson

 

IBM’s Watson supercomputer may be boning up on its medical bona fides, but the concept of Dr. Watson is nothing new. We’ve been waiting on our super-smart computer doctors of tomorrow for over 30 years.

The 1982 book World of Tomorrow: Health and Medicine by Neil Ardley showed kids of the 1980s what the doctor’s office of the future was going to look like. The room is filled with automatic diagnosis stations, prescription vending machines, and plenty of control panels sporting colorful buttons. The only thing that’s missing is, well, a doctor.

From the book:

A visit to the doctor in the future is likely to resemble a computer game, for computers will be greatly involved in medical care. Now doctors have to question and examine their patients to find out what is wrong with them. They compare the patients’ answers and the examination results with their own knowledge of medical conditions and illnesses. This enables doctors to decide on the causes of the patients’ problems.

Computers can store huge amounts of medical information. Doctors are therefore likely to use computers to help them find the causes of illnesses. The computer could take over completely, allowing doctors to concentrate on patients who need personal care.

The computer won’t just be a dumb machine that’s fed info. The robo-doctor of tomorrow will be able to ask questions of the patient, narrowing down all the possible things that could be wrong.

The computer will question the patient about an illness just as the doctor does now. It will either display words on a screen or speak to the patient, who will reply or operate a keyboard to answer. The questions will continue until the computer has either narrowed down the possible causes of the illness to one or needs more information that the patient cannot give by answering.

The patient will then go to a machine that checks his or her physical condition. It will measure such factors as pulse, temperature and blood pressure and maybe look into the interior of the patient’s body. The results will go to the computer. This may still not provide the computer with enough information about the patient, and it may need to take samples — for example, of blood or hair. It will do this painlessly.
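The questioning routine the book imagines is essentially a narrowing loop: ask about a symptom, drop the conditions that no longer fit, and stop once a single cause remains or the patient can no longer answer and measurements are needed. A toy sketch under those assumptions, with the conditions and symptoms invented:

```python
# Toy "robo-doctor": narrow a list of candidate conditions with yes/no questions.
conditions = {
    "common cold": {"cough", "sore throat"},
    "flu":         {"cough", "fever", "aches"},
    "allergy":     {"sneezing", "itchy eyes"},
}

def narrow(candidates, symptom, present):
    """Keep only the conditions consistent with the patient's answer."""
    return {name: s for name, s in candidates.items()
            if (symptom in s) == present}

candidates = dict(conditions)
for symptom, answer in [("cough", True), ("fever", True)]:
    candidates = narrow(candidates, symptom, answer)
    if len(candidates) <= 1:
        break

if len(candidates) == 1:
    print("probable cause:", next(iter(candidates)))
else:
    # several (or no) conditions still fit: off to the measuring machine for
    # pulse, temperature, blood pressure and, if need be, samples
    print("need more information:", sorted(candidates))
```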

Push-Button Culture



Today our abundance of smartphones, computers, dishwashers and electric vacuum cleaners all supposedly leave more time for the 21st century human to lounge around and eat bonbons. Just push a button, and everything is automatic.

[…]

Doing the laundry in 1950 may have become much easier thanks to the rise of electric washing machines, but the societal expectations around how often one’s clothes should be cleaned have shifted dramatically since, say, 1900. Cleaning a floor was decidedly harder in 1910 than it was in 1960, but the relative ease of use of appliances like the electric vacuum cleaner changed American expectations about what constituted “clean.”

[…]

But the promise of the push-button as the gateway to a life of leisure has its origins far earlier than the Space Age. In the late 19th and early 20th centuries, when electricity itself was first being introduced to American homes, the push-button quickly evolved into a perfectly simple symbol of modernity.

Whether it was for ringing doorbells, illuminating lamps, hailing domestic servants, or turning on any number of new electrical appliances, the push button arrived in full force as an interface that was supposed to save time and generally make life easier. Just push a button, and it’s all done automatically!

As Americans came into contact with more and more machines in the late 19th century, the push button was supposed to ameliorate heightened anxieties about the complexity of life—what Rachel Plotnick describes as a “pervasive cultural craving for efficient relationships between humans and machines.” Plotnick writes about the button as the interface of leisure in her 2012 paper “At the Interface: The Case of the Electric Push Button, 1880-1923.”



Ref: Will The Internet of Things Make Our Lives Any Easier? – Paleofuture
Ref: Push-Button Promises – psmag

Pygmalion <-> AI

 

Artificial intelligence is arguably the most useless technology that humans have ever aspired to possess. Actually, let me clarify. It would be useful to have a robot that could make independent decisions while, say, exploring a distant planet, or defusing a bomb. But the ultimate aspiration of AI was never just to add autonomy to a robot’s operating system. The idea wasn’t to enable a computer to search data faster by ‘understanding patterns’, or communicate with its human masters via natural language. The dream of AI was — and is — to create a machine that is conscious. AI means building a mechanical human being. And this goal, as supposedly rational technological projects go, is deeply strange.

[…]

Technology is a cultural phenomenon, and as such it is molded by our cultural values. We prefer good health to sickness so we develop medicine. We value wealth and freedom over poverty and bondage, so we invent markets and the multitudinous thingummies of comfort. We are curious, so we aim for the stars. Yet when it comes to creating conscious simulacra of ourselves, what exactly is our motive? What deep emotions drive us to imagine, and strive to create, machines in our own image? If it is not fear, or want, or curiosity, then what is it? Are we indulging in abject narcissism? Are we being unforgivably vain? Or could it be because of love?

But machines were objects of erotic speculation long before Turing entered the scene. Western literature, ancient and modern, is strewn with mechanical lovers. Consider Pygmalion, the Cypriot sculptor and favorite of Aphrodite. Ovid, in his Metamorphoses, describes him carving a perfect woman out of ivory. Her name is Galatea and she’s so lifelike that Pygmalion immediately falls in love with her. He prays to Aphrodite to make the statue come to life. The love goddess already knows a thing or two about beautiful, non-biological maidens: her husband Hephaestus has constructed several good-looking fembots to lend a hand in his Olympian workshop. She grants Pygmalion’s wish; Pygmalion kisses his perfect creation, and Galatea becomes a real woman. They live happily ever after.

[…]

As the 20th century came into its own, Pygmalion collided with modernity and its various theories about the human mind: psychoanalysis, behaviourist psychology, the tabula rasa whereby one writes the algorithm of personhood upon a clean slate. Galatea becomes Maria, the robot in Fritz Lang’s epic film Metropolis (1927); she is less innocent now, a temptress performing the manic and deeply erotic dance of Babylon in front of goggling men.

 

Ref: Love Machines – Aeon Magazine

Machines à Gouverner

In 1948 the Dominican friar Père Dubarle wrote a review of Norbert Wiener’s book Cybernetics. In this review he introduces a very interesting term, “machines à gouverner” (governing machines). Père Dubarle warns us against the potential risks of placing blind faith in the new sciences (machines and computers, in this case), because human processes cannot be predicted with “cold mathematics.”

One of the most fascinating prospects thus opened is that of the rational conduct of human affairs, and in particular of those which interest communities and seem to present a certain statistical regularity, such as the human phenomena of the development of opinion. Can’t one imagine a machine to collect this or that type of information, as for example information on production and the market; and then to determine as a function of the average psychology of human beings, and of the quantities which it is possible to measure in a determined instance, what the most probable development of the situation might be? Can’t one even conceive a State apparatus covering all systems of political decisions, either under a regime of many states distributed over the earth, or under the apparently much more simple regime of a human government of this planet? At present nothing prevents our thinking of this. We may dream of the time when the machine à gouverner may come to supply – whether for good or evil – the present obvious inadequacy of the brain when the latter is concerned with the customary machinery of politics.

At all events, human realities do not admit a sharp and certain determination, as numerical data of computation do. They only admit the determination of their probable values. A machine to treat these processes, and the problems which they put, must therefore undertake the sort of probabilistic, rather than deterministic thought, such as is exhibited for example in modern computing machines. This makes its task more complicated, but does not render it impossible. The prediction machine which determines the efficacy of anti-aircraft fire is an example of this. Theoretically, time prediction is not impossible; neither is the determination of the most favorable decision, at least within certain limits. The possibility of playing machines such as the chess-playing machine is considered to establish this. For the human processes which constitute the object of government may be assimilated to games in the sense in which von Neumann has studied them mathematically. Even though these games have an incomplete set of rules, there are other games with a very large number of players, where the data are extremely complex. The machines à gouverner will define the State as the best-informed player at each particular level; and the State is the only supreme co-ordinator of all partial decisions. These are enormous privileges; if they are acquired scientifically, they will permit the State under all circumstances to beat every player of a human game other than itself by offering this dilemma: either immediate ruin, or planned co-operation. This will be the consequence of the game itself without outside violence. The lovers of the best of worlds have something indeed to dream of!

Despite all this, and perhaps fortunately, the machine à gouverner is not ready for a very near tomorrow. For outside of the very serious problems which the volume of information to be collected and to be treated rapidly still put, the problems of the stability of prediction remain beyond what we can seriously dream of controlling. For human processes are assimilable to games with incompletely defined rules, and above all, with the rules themselves functions of the time. The variation of the rules depends both on the effective detail of the situations engendered by the game itself, and on the system of psychological reactions of the players in the face of the results obtained at each instant.

It may even be more rapid than these. A very good example of this seems to be given by what happened to the Gallup Poll in the 1948 election. All this not only tends to complicate the degree of the factors which influence prediction, but perhaps to make radically sterile the mechanical manipulation of human situations. As far as one can judge, only two conditions here can guarantee stabilization in the mathematical sense of the term. These are, on the one hand, a sufficient ignorance on the part of the mass of the players exploited by a skilled player, who moreover may plan a method of paralyzing the consciousness of the masses; or on the other, sufficient good-will to allow one, for the sake of the stability of the game, to refer his decisions to one or a few players of the game who have arbitrary privileges. This is a hard lesson of cold mathematics, but it throws a certain light on the adventure of our century: hesitation between an indefinite turbulence of human affairs and the rise of a prodigious Leviathan. In comparison with this, Hobbes’ Leviathan was nothing but a pleasant joke. We are running the risk nowadays of a great World State, where deliberate and conscious primitive injustice may be the only possible condition for the statistical happiness of the masses: a world worse than hell for every clear mind. Perhaps it would not be a bad idea for the teams at present creating cybernetics to add to their cadre of technicians, who have come from all horizons of science, some serious anthropologists, and perhaps a philosopher who has some curiosity as to world matters.

 

Ref: L’avènement de l’informatique et de la cybernétique. Chronique d’une rupture annoncée – Revue Futuribles