Algorithms <-> Taylorism

By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.

More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”

Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”


Ref: Is Google Making Us Stupid? – The Atlantic

Automation Can Take a Toll on Human Performance

Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes. What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.

And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes, leading to what Jan Noyes, an ergonomics expert at Britain’s University of Bristol, terms “a de-skilling of the crew.” No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and a leading authority on automation. When an autopilot system fails, too many pilots, thrust abruptly into what has become a rare role, make mistakes.

The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.


A hundred years ago, the British mathematician and philosopher Alfred North Whitehead wrote, “Civilization advances by extending the number of important operations which we can perform without thinking about them.” It’s hard to imagine a more confident expression of faith in automation. Implicit in Whitehead’s words is a belief in a hierarchy of human activities: Every time we off-load a job to a tool or a machine, we free ourselves to climb to a higher pursuit, one requiring greater dexterity, deeper intelligence, or a broader perspective. We may lose something with each upward step, but what we gain is, in the long run, far greater.

History provides plenty of evidence to support Whitehead. We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead. But Whitehead’s observation should not be mistaken for a universal truth. He was writing when automation tended to be limited to distinct, well-defined, and repetitive tasks—weaving fabric with a steam loom, adding numbers with a mechanical calculator. Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans. That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus. We trade subtle, specialized talents for more routine, less distinctive ones.

Most of us want to believe that automation frees us to spend our time on higher pursuits but doesn’t otherwise alter the way we behave or think. That view is a fallacy—an expression of what scholars of automation call the “substitution myth.” A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part. As Parasuraman and a colleague explained in a 2010 journal article, “Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers of automation.”

Psychologists have found that when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift. We become disengaged from our work, and our awareness of what’s going on around us fades. Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears. When a computer provides incorrect or insufficient data, we remain oblivious to the error.

Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer. Many radiologists today use analytical software to highlight suspicious areas on mammograms. Usually, the highlights aid in the discovery of disease. But they can also have the opposite effect. Biased by the software’s suggestions, radiologists may give cursory attention to the areas of an image that haven’t been highlighted, sometimes overlooking an early-stage tumor. Most of us have experienced complacency when at a computer. In using e-mail or word-processing software, we become less proficient proofreaders when we know that a spell-checker is at work.


Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation? The technology theorist Kevin Kelly, commenting on the link between automation and pilot error, argued that the obvious solution is to develop an entirely autonomous autopilot: “Human pilots should not be flying planes in the long run.” The Silicon Valley venture capitalist Vinod Khosla recently suggested that health care will be much improved when medical software—which he has dubbed “Doctor Algorithm”—evolves from assisting primary-care physicians in making diagnoses to replacing the doctors entirely. The cure for imperfect automation is total automation.


Ref: All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines – The Atlantic

Deep Learning


“We see deep learning as a way to push sentiment understanding closer to human-level ability — whereas previous models have leveled off in terms of performance,” says Richard Socher, the Stanford University graduate student who developed NaSent together with artificial-intelligence researchers Chris Manning and Andrew Ng, one of the engineers behind Google’s deep learning project.

The aim, Socher says, is to develop algorithms that can operate without continued help from humans. “In the past, sentiment analysis has largely focused on models that ignore word order or rely on human experts,” he says. “While this works for really simple examples, it will never reach human-level understanding because word meaning changes in context and even experts cannot accurately define all the subtleties of how sentiment works. Our deep learning model solves both problems.”
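The limitation Socher describes can be seen with a toy example (purely illustrative; this is not NaSent): a bag-of-words scorer ignores word order, so it assigns identical sentiment to sentences that a human reads very differently.

```python
# Toy bag-of-words sentiment scorer: sums per-word scores from a tiny
# hand-made lexicon, so word order plays no role in the result.
LEXICON = {"good": 1, "great": 1, "bad": -1, "dull": -1}

def bow_score(sentence):
    """Word-order-insensitive sentiment score."""
    return sum(LEXICON.get(word, 0) for word in sentence.lower().split())

a = "the plot was good but the acting was not"
b = "the plot was not good but the acting was"
# Same multiset of words, same score, despite the shift in meaning
# caused by where "not" falls.
assert bow_score(a) == bow_score(b) == 1
```

A compositional model like NaSent instead builds sentence meaning up from phrase structure, which is why negation and other context effects are within its reach.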

Here is a live demo of their deep learning algorithm, NaSent (sentiment analysis).


Ref: These Guys Are Teaching Computers How to Think Like People – Wired

Automated Generation of Suggestions for Personalized Reactions

Google has recently patented the “Automated Generation of Suggestions for Personalized Reactions in a Social Network,” a technology meant to help users keep up with social-network etiquette by finding messages and social events worth responding to (such as birthdays) and auto-generating response suggestions that match the user’s customary social behaviour, which the system machine-learns over long-term use.

There is no requirement for the user to set reminders or be proactive. Without any user input, the system automatically analyzes information to which the user has access and generates suggestions for personalized reactions to messages. The suggestion analyzer cooperates with the decision tree to learn the user’s behavior and to automatically adjust the suggested messages it generates over time.
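As a rough sketch of the idea (hypothetical; the patent’s actual decision-tree machinery is not public code), such a system could record which reply the user sends for each kind of event and suggest the most frequent one:

```python
from collections import Counter, defaultdict

class SuggestionLearner:
    """Learn, per event type, which reply the user most often sends,
    and suggest that reply for future events of the same type."""

    def __init__(self):
        self.history = defaultdict(Counter)  # event type -> reply counts

    def observe(self, event_type, reply):
        """Record a reply the user actually sent."""
        self.history[event_type][reply] += 1

    def suggest(self, event_type):
        """Return the user's most customary reply, or None if unseen."""
        replies = self.history[event_type]
        return replies.most_common(1)[0][0] if replies else None

learner = SuggestionLearner()
learner.observe("birthday", "Happy birthday!")
learner.observe("birthday", "Happy birthday!")
learner.observe("birthday", "HBD :)")
assert learner.suggest("birthday") == "Happy birthday!"
```

Because the counts shift with every new observation, the suggestions adjust over time without the user ever configuring anything, which is the behaviour the patent describes.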


Ref: algopop

Google’s Algorithms Outsmart its Human Employees

Google’s “deep learning” clusters of computers churn through massive chunks of data looking for patterns, and it seems they’ve gotten good at it. So good, in fact, that Google announced at the Machine Learning Conference in San Francisco that its deep learning clusters have learned to recognize objects on their own.
Traditionally, computers have been great at transporting data but terrible at understanding what’s contained therein. The goal of movements like the semantic web has been to build webpages that understand what kind of content they’re serving, but advances have been slow in coming. Whereas common pigeons have the conceptual ability to tell the difference between a tree and a shrub, even the most expensive supercomputers today would struggle.

Google software engineer Quoc V. Le said at the conference that he realized the deep learning clusters had made a breakthrough when they were able to recognize discrete workplace objects, such as distinguishing two different brands of paper shredders. Interestingly, Le didn’t train the machines this way; the software had figured it out on its own.


To be clear, Google is not afraid that this will blow up into a fully sentient computer system; rather, it’s a stepping stone toward Google’s goal of getting its computers to solve menial problems and so free up human engineers, who currently spend innumerable hours programming data-processing solutions. Google is using its deep learning clusters to improve auto-recognition in images, Android’s voice recognition, and Google Translate, among other services.


Ref: How Google’s “Deep Learning” Is Outsmarting Its Human Employees – FastCompany
Ref: If this doesn’t terrify you… Google’s computers OUTWIT their humans – TheRegister

How to Burst the Filter Bubble that Protects Us

Today, Eduardo Graells-Garrido at the Universitat Pompeu Fabra in Barcelona, together with Mounia Lalmas and Daniel Quercia, both at Yahoo Labs, say they’ve hit on a way to burst the filter bubble. Their idea is that although people may have opposing views on sensitive topics, they may also share interests in other areas. And they’ve built a recommendation engine that points these kinds of people towards each other based on their own preferences.

The result is that individuals are exposed to a much wider range of opinions, ideas and people than they would otherwise experience. And because this is done using their own interests, they end up being equally satisfied with the results (although not without a period of acclimatisation). “We nudge users to read content from people who may have opposite views, or high view gaps, in those issues, while still being relevant according to their preferences,” say Graells-Garrido and co.
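A minimal sketch of this kind of recommender (my own simplification, not the authors’ code): filter for users whose stance on the sensitive issue differs, then rank them by overlap in unrelated interests.

```python
def recommend(user, candidates):
    """Rank users holding the opposite stance by shared-interest count."""
    opposed = [c for c in candidates if c["stance"] != user["stance"]]
    return sorted(opposed,
                  key=lambda c: len(user["interests"] & c["interests"]),
                  reverse=True)

alice = {"stance": "pro", "interests": {"cycling", "jazz", "cooking"}}
candidates = [
    {"name": "bob",   "stance": "con", "interests": {"jazz", "cooking"}},
    {"name": "carol", "stance": "pro", "interests": {"jazz"}},
    {"name": "dave",  "stance": "con", "interests": {"chess"}},
]
# bob disagrees with alice yet shares two interests, so he ranks first;
# carol is excluded because she already agrees with alice.
assert [c["name"] for c in recommend(alice, candidates)] == ["bob", "dave"]
```

The ranking-by-shared-interests step is what keeps the opposing content “still relevant according to their preferences,” as the authors put it.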


The results show that people can be more open than expected to ideas that oppose their own. It turns out that users who openly speak about sensitive issues are more open to receiving recommendations authored by people with opposing views, say Graells-Garrido and co.

They also say that challenging people with new ideas makes them generally more receptive to change. That has important implications for social media sites. There is good evidence that users can sometimes become so resistant to change that any form of redesign dramatically reduces the popularity of the service. Giving them a greater range of content could change that.


Ref: How to Burst the “Filter Bubble” that Protects Us from Opposing Views – MIT Technology Review

Facebook’s Formula for Love

At least, not yet. But you might think so if you’ve read the internet lately, which has been abuzz with a new study showing how an algorithm can accurately guess a user’s romantic partner or foretell a potential breakup based on the structure of his or her social network. All this is true, sort of. (We’ll get to that in a bit.) More than anything, the algorithms and how they work demonstrate how Facebook is inching closer to producing predictive, even counterintuitive insights about our lives.


By analyzing social networks with the dispersion algorithm rather than embeddedness, Kleinberg and Backstrom were able to correctly identify a user’s spouse 60 percent of the time and a non-marital romantic partner nearly 50 percent of the time. Those are pretty impressive numbers, given that the data set comprised 1.3 million randomly selected Facebook users who were at least 20 years old, had between 50 and 2,000 friends, and noted some form of relationship status on their profile. (The odds of randomly guessing a partner would thus range from 1-in-50 to 1-in-200, or between 30 and 120 times worse than the results achieved by Kleinberg and Backstrom.)
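Dispersion, loosely, measures how poorly connected two people’s mutual friends are to one another; a romantic partner tends to bridge otherwise separate circles. A simplified version can be sketched as follows (the published measure is stricter, also excluding mutual-friend pairs that share neighbors other than the two endpoints):

```python
from itertools import combinations

def dispersion(graph, u, v):
    """Simplified dispersion: count pairs of mutual friends of u and v
    that are not themselves directly connected. `graph` maps each
    person to the set of their friends."""
    mutual = graph[u] & graph[v]
    return sum(1 for s, t in combinations(mutual, 2) if t not in graph[s])

# u's partner v bridges two of u's circles: a (say, work) and b (say,
# family) both know u and v but not each other, giving one dispersed pair.
graph = {
    "u": {"v", "a", "b"},
    "v": {"u", "a", "b"},
    "a": {"u", "v"},
    "b": {"u", "v"},
}
assert dispersion(graph, "u", "v") == 1
```

By contrast, an ordinary close friend’s mutual friends tend to all know each other, so that score stays near zero even when the raw mutual-friend count is high.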


But perhaps the most fascinating idea within the research comes when you flip this formulation around. What happens to relationships when people’s social networks aren’t many-tentacled? It turns out that in cases where there was low dispersion (where the couple had a lot of mutual friends, but all from the same social circle), couples were 50 percent more likely to change their status to “single.” Put another way, having social networks that mirror each other too closely in one particular part of your life seems to result in more transitory romances. There’s a common-sense way of reframing this idea: After all, how many people have you broken up with right around the time it became clear that your friend circles weren’t gelling?


Ref: Facebook Inches Closer to Figuring Out the Formula for Love – Wired

Ethical Implications of Engineers’ Works

The algorithms that extract highly specific information from an otherwise impenetrable amount of data have been conceived and built by flesh-and-blood engineers with highly sophisticated technical knowledge. Did they know the use to which their algorithms would be put? If not, should they have been mindful of the potential for misuse? Either way, should they be held partly responsible, or were they just “doing their job”?


Our ethics have become mostly technical: how to design properly, how to not cut corners, how to serve our clients well. We work hard to prevent failure of the systems we build, but only in relation to what these systems are meant to do, rather than the way they might actually be utilised, or whether they should have been built at all. We are not amoral, far from it; it’s just that we have steered ourselves into a place where our morality has a smaller scope.

Engineers have, in many ways, built the modern world and helped improve the lives of many. Of this, we are rightfully proud. What’s more, only a very small minority of engineers is in the business of making weapons or privacy-invading algorithms. However, we are part and parcel of industrial modernity with all its might, advantages and flaws, and we therefore contribute to human suffering as well as flourishing.


Ref: As engineers, we must consider the ethical implications of our work – TheGuardian

Autocomplete Feature “Destroys Man’s Life”

A mild-mannered man says his life was completely ruined after Google’s autocomplete feature convinced the government he was building a bomb.

Though he intended to search the web for “How do I build a radio-controlled airplane,” Jeffrey Kantor, then a government contractor, says the search engine auto-completed his request, turning it into “How do I build a radio-controlled bomb?”

Before he realized Google’s error, Kantor had already pressed enter, sparking a chain reaction he says resulted in months of harassment by government officials leading up to his eventual termination.


Ref: Man Says Google’s Autocomplete Feature Destroyed His Life – Gawker


Belief–desire–intention software model

The belief–desire–intention software model (usually referred to simply, but ambiguously, as BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent’s beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
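A bare-bones deliberation loop in this spirit might look as follows (an illustrative sketch of the model’s separation of plan selection from plan execution, not any particular BDI framework; plan creation is left to the programmer, as the model prescribes):

```python
class BDIAgent:
    """Minimal BDI-style agent: beliefs are facts held true, desires are
    goals, and the intention is the plan currently being executed."""

    def __init__(self, beliefs, desires, plan_library):
        self.beliefs = beliefs            # set of facts the agent holds true
        self.desires = desires            # goals it would like to achieve
        self.plan_library = plan_library  # goal -> list of plan steps
        self.intention = None             # remaining steps of committed plan

    def deliberate(self):
        """Select a plan for an unachieved desire and commit to it."""
        for goal in self.desires:
            if goal not in self.beliefs and goal in self.plan_library:
                self.intention = list(self.plan_library[goal])
                return
        self.intention = None

    def step(self):
        """Execute one step of the active plan, re-deliberating only
        when no plan is active."""
        if not self.intention:
            self.deliberate()
        if self.intention:
            action = self.intention.pop(0)
            action(self.beliefs)          # acting updates the beliefs

# Hypothetical plan library: each step is a callable that mutates beliefs.
plans = {"door_open": [lambda b: b.add("at_door"),
                       lambda b: b.add("door_open")]}
agent = BDIAgent(beliefs=set(), desires=["door_open"], plan_library=plans)
agent.step()
agent.step()
assert "door_open" in agent.beliefs
```

The balance the model describes shows up in `step()`: time is spent deliberating only when no intention is active; otherwise the agent simply executes the plan it has already committed to.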