
When I ask Sebastian Benthall, of UC Berkeley’s School of Information, whether he thinks contemporary A.I. could become sentimental, he tells me that in many ways programs already have emotional reactions; we just don’t call them that yet. “Why does your GPS make mistakes? Why do search engines lead you in certain directions and not others? If this were an individual, we’d call these things biases, or sentiment. But that’s because we think of ourselves as one being, when really there are a lot of biological systems cooperating with each other, but sometimes, independent of each other. A.I. doesn’t see itself that way.”

 

Ref: Trying on the retro flesh – Omnireboot

Yaskawa Ankle Exoskeleton

 

Walking is healthy, or so all of the pundits and experts say. While this simple cardiovascular activity may be essential for our inner organs, it can wreak havoc on our outer ones. Case in point? The ankle and the knee are especially susceptible to walking-related strain and injury. A whole line of “safe” shoes has entered the market to sate the demand of consumers looking for a lower-impact way to walk or run. Now there is an awesome tech-heavy exoskeleton that turns us from walkers into, uh, robotic super-walkers. We won’t be able to leap a building in a single bound, but maybe we can get close.

 

Ref: Yaskawa develops a walking-assistance robot – Humanoïde

A New Machine Ecology is Evolving

The problem, however, is that this new digital environment features agents that are not only making decisions faster than we can comprehend but are also making them in ways that defy traditional theories of finance. In other words, it has taken on the form of a machine ecology, one that includes virtual predators and prey.

Consequently, computer scientists are taking an ecological perspective by looking at the new environment in terms of a competitive population of adaptive trading agents.

“Even though each trading algorithm/robot is out to gain a profit at the expense of any other, and hence act as a predator, any algorithm which is trading has a market impact and hence can become noticeable to other algorithms,” said Neil Johnson, a professor of physics at the College of Arts and Sciences at the University of Miami (UM) and lead author of the new study. “So although they are all predators, some can then become the prey of other algorithms depending on the conditions. Just like animal predators can also fall prey to each other.”

When there’s a normal combination of prey and predators, he says, everything is in balance. But once predators are introduced that are too fast, they create extreme events.

“What we see with the new ultrafast computer algorithms is predatory trading,” he says. “In this case, the predator acts before the prey even knows it’s there.”

[…]

“It simply is faster than human predators (i.e. human traders) and the humans are inactive on that fast timescale,” says Johnson. “So the only active traders at subsecond timescales are all robots. So they compete against each other, and their collective actions define the movements in the market.”

In other words, they control the market movements. “Humans become inert and ineffective,” he says. “What we found, which is so surprising, is that the transition to the new ultrafast robotic ecology is so abrupt and strong.”
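As a rough, purely hypothetical illustration of the predator-and-prey framing (not the agent-based model from Johnson’s study), the sketch below shows how an algorithm reacting on a faster timescale can act on a slow trader’s order before that trader’s own trade even executes; the linear “market impact” function and all numbers are invented.

```python
# A minimal, hypothetical sketch of sub-second "predatory" trading: a slow
# (human-scale) buyer's large order rests for a few milliseconds before it
# executes, long enough for a faster algorithm to detect it, buy first, and
# resell at the higher price the large order itself creates. This illustrates
# the predator/prey framing only; it is not the model from the Nature paper.

def market_impact(order_size, depth=1000.0):
    """Toy linear impact: price moves in proportion to order size."""
    return order_size / depth

slow_order = 500          # shares the slow trader wants to buy
price = 100.00

# Fast algorithm reacts first (sub-millisecond), front-running the order.
fast_buy_price = price
price += market_impact(200)          # fast algo buys 200 shares

# Slow trader's order finally executes at the already-moved price.
slow_fill_price = price
price += market_impact(slow_order)

# Fast algorithm unwinds into the demand it anticipated.
fast_sell_price = price

print(f"slow trader paid       {slow_fill_price:.2f} instead of 100.00")
print(f"fast algo profit/share {fast_sell_price - fast_buy_price:.2f}")
```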

 

Ref: A new digital ecology is evolving, and humans are being left behind – io9
Ref: Abrupt rise of new machine ecology beyond human response time – Nature

 

Soldiers are Developing Relationships with Their Battlefield Robots

 

Robots are playing an ever-increasing role on the battlefield. As a consequence, soldiers are becoming attached to their robots, assigning them names and genders, and even holding funerals when they are destroyed. But could these emotional bonds affect outcomes in the war zone?

Through her interviews, University of Washington researcher Julie Carpenter learned that soldiers often anthropomorphize their robots and feel empathy towards them. Many soldiers see their robots as extensions of themselves and are often frustrated by technical limitations or mechanical issues, which they project onto themselves. Some operators can even tell who is controlling a specific robot by watching the way it moves.

“They were very clear it was a tool, but at the same time, patterns in their responses indicated they sometimes interacted with the robots in ways similar to a human or pet,” Carpenter said.

Many of the soldiers she talked to named their robots, usually after a celebrity or current wife or girlfriend (never an ex). Some even painted the robot’s name on the side. Even so, the soldiers told Carpenter the chance of the robot being destroyed did not affect their decision-making over whether to send their robot into harm’s way.

Soldiers told Carpenter their first reaction to a robot being blown up was anger at losing an expensive piece of equipment, but some also described a feeling of loss.

“They would say they were angry when a robot became disabled because it is an important tool, but then they would add ‘poor little guy,’ or they’d say they had a funeral for it,” Carpenter said. “These robots are critical tools they maintain, rely on, and use daily. They are also tools that happen to move around and act as a stand-in for a team member, keeping Explosive Ordnance Disposal personnel at a safer distance from harm.”

 

Ref: Soldiers are developing relationships with their battlefield robots – io9
Ref: Emotional attachment to robots could affect outcome on battlefield – University of Washington

Google To Fight Aging with Data

The September 30 issue of TIME will profile Page and his decision to launch Calico. From the magazine’s preview article:

Based in the Bay Area, not far from Google’s headquarters, Calico will be making longer-term bets than most health care firms. “In some industries, it takes ten or 20 years to go from an idea to something being real. Healthcare is certainly one of those areas,” said Page. “Maybe we should shoot for the things that are really, really important so ten or 20 years from now we have those things done.”

[…]

Google is keeping its exact plans close to the vest. But it is likely to use its data-processing might to shed new light on age-related maladies. Sources close to the project suggest Calico will start with a small number of employees and focus initially on researching new technology.

That approach may yield unlikely conclusions. “Are people really focused on the right things? One of the things I thought was amazing is that if you solve cancer, you’d add about three years to people’s average life expectancy,” Page said. “We think of solving cancer as this huge thing that’ll totally change the world. But when you really take a step back and look at it, yeah, there are many, many tragic cases of cancer, and it’s very, very sad, but in the aggregate, it’s not as big an advance as you might think.”

 

Ref: Google wants you to live forever – io9
Ref: Google to fight aging with new health startup Calico – Singularity Hub 

Algorithm Ethics

An algorithm is a structured description of how to calculate something. Some of the most prominent algorithms have been around for more than two thousand years, like Euclid’s algorithm, which gives you the greatest common divisor, or Eratosthenes’ sieve, which gives you all prime numbers up to a given maximum. These two algorithms do not contain any kind of value judgment. If I define a new method for selecting prime numbers (and many have been published!), every correct method will come to the same conclusion: a number is prime or it is not.
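For reference, here is a minimal sketch of the two classical algorithms just mentioned; any other correct method, such as plain trial division, selects exactly the same primes, which is why these algorithms carry no value judgment.

```python
# Euclid's algorithm and the sieve of Eratosthenes, plus a completely
# different primality method (trial division) that yields identical results.

def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def sieve(limit):
    """Sieve of Eratosthenes: cross out multiples of each prime up to limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

def trial_division_primes(limit):
    """A different method entirely, yet it selects exactly the same primes."""
    return [n for n in range(2, limit + 1)
            if all(n % d for d in range(2, int(n ** 0.5) + 1))]

assert gcd(1071, 462) == 21
assert sieve(100) == trial_division_primes(100)
```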

But there is a different kind of algorithmic process that is far more common in our daily lives: algorithms that have been chosen to solve a task that others would probably have solved in a different way. Obvious value judgments made by calculation, such as credit scoring and rating, immediately come to mind when we think about ethics in the context of computation. But there is also a multitude of “hidden” ethical algorithms that are far more pervasive.

One example I encountered was given by Gary Wolf at the Quantified Self Conference in Amsterdam. Wolf described his experiment of wearing several different step-counting gadgets and comparing their diverging results. His conclusion: there is no common definition of what counts as “a step”. And he is right. The developers of the different gadgets have each, more or less arbitrarily, chosen one method or another to map the data collected by the gadgets’ sensors into distinct steps to be counted.

So the first value judgment comes with choosing a method.
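A tiny, hypothetical illustration of that first value judgment: two equally defensible definitions of “a step”, applied to the same invented sensor trace, produce different counts. The threshold values and readings below are made up.

```python
# Two hypothetical step-detection methods applied to the same (invented)
# sensor magnitudes: each counts upward crossings of its own threshold,
# and the two "gadgets" disagree about how many steps were taken.
readings = [0.2, 1.4, 0.3, 1.1, 0.9, 1.6, 0.4, 1.05, 0.2, 1.3]

def count_steps(trace, threshold):
    """Count upward crossings of the threshold as steps."""
    steps, above = 0, False
    for value in trace:
        if value > threshold and not above:
            steps += 1
        above = value > threshold
    return steps

print("gadget A (threshold 1.0):", count_steps(readings, 1.0))  # 5 "steps"
print("gadget B (threshold 1.2):", count_steps(readings, 1.2))  # 3 "steps"
```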

The second kind of value judgment comes with the setting of parameters.

A good example is given by Kraemer et al. in their paper. In medical imaging technologies like MRI, an image is calculated from data such as tiny electromagnetic distortions. Most doctors (I asked some explicitly) take these images at face value, much as they previously took photographs without bothering about the underlying technology. However, there are many parameters that the developers of such an algorithmic imaging technology have predefined, and these affect the outcome in important ways. Whether a blood vessel is already clogged by arteriosclerosis or can still be regarded as healthy is a typical decision where we would like to be on the safe side and thus tend to underestimate the volume of the vessel, i.e. prefer a more blurry image; a surgeon planning her cut, on the other hand, might ask for a very sharp image that tends to overestimate the vessel’s volume.
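A toy illustration of how much a single predefined parameter can matter: the sketch below is a one-dimensional caricature of image reconstruction, not an MRI algorithm, and the signal, threshold and smoothing windows are invented. The same raw profile yields different vessel-width estimates depending on how strongly it is smoothed before thresholding.

```python
# Hypothetical 1-D "reconstruction": smooth the same intensity profile with
# different window sizes, then threshold it to estimate a vessel's width.
# The parameter choice alone changes the answer.
profile = [0, 0, 1, 6, 9, 9, 6, 1, 0, 0]   # invented raw signal across a vessel

def smooth(signal, window):
    """Simple moving average with the given window size (edges truncated)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def vessel_width(signal, threshold=3.0):
    """Count samples whose intensity exceeds the threshold."""
    return sum(1 for v in signal if v > threshold)

print("sharp reconstruction (window 1):", vessel_width(smooth(profile, 1)))  # 4
print("blurry reconstruction (window 5):", vessel_width(smooth(profile, 5)))  # 6
```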

The third value judgment is – as this illustrates – how to deal with uncertainty and misclassification.

This is what we call alpha and beta errors. Most people (especially in a business context) concentrate on the alpha error, that is, on minimizing false positives. But when we take the cost of a misjudgment into account, the false negative is often much more expensive. Employers, for example, tend to look for the “perfect” candidate and to turn down applications that raise any doubts. By doing so, they obviously miss many opportunities for the best hire. The cost of firing someone who was hired under false expectations is often far less than the cost of never getting the chance to learn about someone at all, someone who might have been the hidden gem.

The problem with the two types of errors is that you cannot minimize both at the same time. So we have to make a decision, and that decision is always a value judgment, always ethical.
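A small invented example of that trade-off: moving the acceptance threshold converts false negatives into false positives and vice versa, and only an explicit choice of costs, which is exactly the value judgment in question, decides which threshold is “best”. All scores, thresholds and costs below are made up.

```python
# Hypothetical hiring scores: "good" and "bad" candidates overlap, so any
# cut-off produces some false positives (bad hires accepted) and some false
# negatives (good candidates rejected). The cost weights are the ethics.
good = [62, 70, 74, 78, 81, 85, 88, 90]   # scores of actually-good candidates
bad  = [40, 52, 58, 63, 67, 71, 76, 83]   # scores of actually-bad candidates

def errors(threshold):
    false_negatives = sum(1 for s in good if s < threshold)   # good rejected
    false_positives = sum(1 for s in bad if s >= threshold)   # bad accepted
    return false_positives, false_negatives

for threshold in (60, 70, 80):
    fp, fn = errors(threshold)
    # Example asymmetric costs: a missed good hire costs 3x a bad hire.
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives,"
          f" cost = {fp * 1 + fn * 3}")
```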

 

Ref: Algorithm Ethics – Beautiful Data

Traveling Salesman Problem

For one thing, humans are irrational and prone to habit. When those habits are interrupted, interesting things happen. After the collapse of the I-35W bridge in Minnesota, for example, the number of travelers crossing the river, not surprisingly, dropped; but even after the bridge was restored, researcher David Levinson has noted, traffic never returned to its previous levels. Habits can be particularly troublesome when planning fixed travel routes for people, like public buses, as noted in a paper titled “You Can Lead Travelers to the Bus Stop, But You Can’t Make Them Ride,” by Akshay Vij and Joan Walker of the University of California. “Traditional travel demand models assume that individuals are aware of the full range of alternatives at their disposal,” the paper reads, “and that a conscious choice is made based on a tradeoff between perceived costs and benefits.” But that is not necessarily so.

People are also emotional, and it turns out an unhappy truck driver can be trouble. Modern routing models incorporate whether a truck driver is happy or not—something he may not know about himself. For example, one major trucking company that declined to be named does “predictive analysis” on when drivers are at greater risk of being involved in a crash. Not only does the company have information on how the truck is being driven—speeding, hard-braking events, rapid lane changes—but on the life of the driver. “We actually have built into the model a number of indicators that could be surrogates for dissatisfaction,” said one employee familiar with the program.

This could be a change in a driver’s take-home pay, a life event like a death in the family or divorce, or something as subtle as a driver whose morning start time has been suddenly changed. The analysis takes into account everything the company’s engineers can think of, and then teases out which factors seem correlated to accident risk. Drivers who appear to be at highest risk are flagged. Then there are programs in place to ensure the driver’s manager will talk to a flagged driver.
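The sketch below is purely hypothetical, with invented features, weights and threshold, and is not the unnamed company’s model; it only illustrates the general shape of such a flagging rule: combine telemetry and life-event indicators into a score and flag drivers above a cut-off.

```python
# A hypothetical risk-flagging rule of the kind described above. Every
# feature name, weight, and the threshold are invented for illustration.
def risk_score(driver):
    weights = {
        "hard_braking_per_1000mi": 0.4,
        "speeding_events_per_1000mi": 0.3,
        "pay_dropped_recently": 1.5,
        "recent_life_event": 2.0,        # e.g. death in the family, divorce
        "start_time_changed": 0.8,
    }
    return sum(weights[k] * driver.get(k, 0) for k in weights)

driver = {"hard_braking_per_1000mi": 3, "speeding_events_per_1000mi": 1,
          "recent_life_event": 1, "start_time_changed": 1}

FLAG_THRESHOLD = 4.0
score = risk_score(driver)
print(f"risk score {score:.1f} ->",
      "flag for manager follow-up" if score > FLAG_THRESHOLD else "ok")
```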

[…]

Powell’s biggest revelation in considering the role of humans in algorithms, though, was that humans can do it better. “I would go down to Yellow, we were trying to solve these big deterministic problems. We weren’t even close. I would sit and look at the dispatch center and think, how are they doing it?” That’s when he noticed: They are not trying to solve the whole week’s schedule at once. They’re doing it in pieces. “We humans have funny ways of solving problems that no one’s been able to articulate,” he says. Operations research people just punt and call it a “heuristic approach.”
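For readers unfamiliar with what operations researchers mean by a “heuristic approach”, here is one standard example, a nearest-neighbour route through a handful of invented stops. It is offered only as an illustration of trading optimality for tractable piece-by-piece decisions, not as Powell’s or the dispatchers’ actual method.

```python
import math

# Nearest-neighbour routing heuristic: instead of searching all orderings of
# stops (factorial blow-up), greedily visit the closest unvisited stop next.
# Stop names and coordinates are invented.
stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (6, 4), "D": (1, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(start="depot"):
    route, remaining = [start], set(stops) - {start}
    while remaining:
        here = stops[route[-1]]
        nxt = min(remaining, key=lambda s: dist(here, stops[s]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

print(nearest_neighbour_route())   # ['depot', 'A', 'D', 'C', 'B']
```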

 

Ref: Unhappy Truckers and Other Algorithmic Problems – Nautilus

Corelet – New Programming Language for Cognitive Computing

 

Researchers from IBM are working on a new software front-end for their neuromorphic processor chips. The company is hoping to draw inspiration from its recent successes in “cognitive computing,” a line of R&D best exemplified by Watson, the Jeopardy-playing AI. A new programming language is needed because once IBM’s cognitive computers become a reality, they will require a completely new way of being programmed. Many of today’s programming languages still trace their lineage to FORTRAN, a language developed at IBM in the 1950s for conventional, sequential machines.

The new software runs on a conventional supercomputer, but it simulates the functioning of a massive network of neurosynaptic cores. Each core contains its own network of 256 neurons which function according to a new model in which digital neurons mimic the independent nature of biological neurons. Corelets, the equivalent of “programs,” specify the basic functioning of neurosynaptic cores and can be linked into more complex structures. Each corelet has 256 outputs and inputs, which are used to connect to one another.

“Traditional architecture is very sequential in nature, from memory to processor and back,” explained Dr. Dharmendra Modha in a recent Forbes article. “Our architecture is like a bunch of LEGO blocks with different features. Each corelet has a different function, then you compose them together.”

So, for example, one corelet might detect motion, another the shape of an object, and another might sort images by color. Each corelet would run slowly, but the processing would happen in parallel.
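Below is a hypothetical Python sketch of the composition idea only; IBM’s actual Corelet language (see the paper in the references) looks nothing like this, and the class and method names are invented.

```python
# A hypothetical illustration of composing small blocks with fixed-width
# input/output "connectors" into larger ones, in the spirit of the LEGO
# analogy above. Not IBM's Corelet language; names are invented.
class Block:
    def __init__(self, name, fn, width=256):
        self.name, self.fn, self.width = name, fn, width

    def connect(self, other):
        """Compose two blocks: this block's outputs feed the other's inputs."""
        assert self.width == other.width, "connector widths must match"
        return Block(f"{self.name}->{other.name}",
                     lambda spikes: other.fn(self.fn(spikes)), self.width)

# Two toy "corelets" operating on a 256-wide spike vector.
motion = Block("motion", lambda spikes: [int(s > 0) for s in spikes])
colour = Block("colour", lambda spikes: [s * 2 for s in spikes])

pipeline = motion.connect(colour)
print(pipeline.name, "->", pipeline.fn([0, 3, 0, 1] + [0] * 252)[:4])
```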

IBM has created more than 150 corelets as part of a library that programmers can tap.

Eventually, IBM hopes to create a cognitive computer scaled to 100 trillion synapses.

 

Ref: New Computer Programming Language Imitates The Human Brain – io9
Ref: Cognitive Computing Programming Paradigm: A Corelet Language for Composing Networks of Neurosynaptic Cores – IBM Research [paper]

Algorithms Could Magnify Misbehaviour

We live in the Age of the Algorithm, where computer models save time, money and lives. Gone are the days when labyrinthine formulae were the exclusive domain of finance and the sciences – nonprofit organisations, sports teams and the emergency services are now among their beneficiaries. Even romance is no longer a statistics-free zone.

But the very feature that makes algorithms so valuable – their ability to replicate human decision-making in a fraction of the time – can be a double-edged sword. If the observed human behaviours that dictate how an algorithm transforms input into output are flawed, we risk setting in motion a vicious circle when we hand over responsibility to The Machine.

For one British university, what began as a time-saving exercise ended in disgrace when a computer model set up to streamline its admissions process exposed – and then exacerbated – gender and racial discrimination.

As detailed in the British Medical Journal, staff at St George’s Hospital Medical School decided to write an algorithm that would automate the first round of the school’s admissions process. The formula used historical patterns in the characteristics of candidates whose applications had traditionally been rejected to filter out new candidates whose profiles matched those of the least successful applicants.

By 1979 the list of candidates selected by the algorithm was a 90–95% match for those chosen by the selection panel, and in 1982 it was decided that the whole initial stage of the admissions process would be handled by the model. Candidates were assigned a score without their applications having passed a single pair of human eyes, and this score was used to determine whether or not they would be interviewed.

Quite aside from the obvious concerns a student might have on finding out that a computer was rejecting their application, a more disturbing discovery was made: the admissions data used to define the model’s outputs showed bias against women and people with non-European-sounding names.
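A minimal, invented sketch of the mechanism: score new applicants by their similarity to historically rejected ones, and the score inherits whatever bias those historical decisions contained, even though no protected attribute appears explicitly. The features and data below are hypothetical, not St George’s actual formula.

```python
# Hypothetical rejection-similarity filter: new applicants who resemble past
# rejected applicants are filtered out before any human review. If the past
# rejections were biased, the filter reproduces that bias.
historically_rejected = [
    {"gap_year": 1, "non_uk_school": 1},
    {"gap_year": 0, "non_uk_school": 1},
    {"gap_year": 1, "non_uk_school": 1},
    {"gap_year": 1, "non_uk_school": 0},
]

def rejection_similarity(applicant):
    """Fraction of feature agreements with past rejected applicants."""
    features = ("gap_year", "non_uk_school")
    matches = sum(applicant[f] == r[f]
                  for r in historically_rejected for f in features)
    return matches / (len(historically_rejected) * len(features))

new_applicant = {"gap_year": 1, "non_uk_school": 1}
score = rejection_similarity(new_applicant)
print(f"similarity to past rejections: {score:.2f}",
      "-> filtered out before any human review" if score > 0.5 else "-> interviewed")
```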

Recent developments in the recruitment industry open it up to similar risks. Earlier this year LinkedIn launched a new recommendation service for recruiters, which runs off algorithms similar in their basic purpose to those used at St George’s.

‘People You May Want To Hire’ uses a recruiter or HR professional’s existing and ongoing candidate selection patterns to suggest to them other individuals they might like to consider hiring.

“The People You May Want to Hire feature within LinkedIn Recruiter looks at a wide range of members’ public professional data – like work experience, seniority, skills, location and education – and suggests relevant candidates that may not otherwise show up in a recruiter’s searches on LinkedIn. Gender and ethnicity are not elements we ask for or track anywhere on Recruiter”, said Richard George, corporate communications manager at LinkedIn.

Although gender and race play no part in the process per se, a LinkedIn user’s country of residence could be one criterion the model uses to filter certain candidates in or out. An individual’s high school, LinkedIn connections and, to an extent, the university they attended are just three more examples of essentially arbitrary characteristics that could become more and more significant in candidate selection as a result of the algorithm’s iterative nature.
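A hypothetical sketch of that iterative amplification (invented data, not LinkedIn’s algorithm): if a recommender learns only from a recruiter’s past picks, an arbitrary feature those picks happen to share, here a particular school, dominates the next round of suggestions.

```python
import random

# Hypothetical recommender trained on a recruiter's (biased) past picks: the
# arbitrary feature shared by those picks is amplified in the suggestions.
random.seed(0)
candidates = [{"id": i, "school": random.choice(["X", "Y"])} for i in range(1000)]
past_hires = [c for c in candidates[:50] if c["school"] == "X"]   # biased history

def recommend(pool, history, k=10):
    """Rank candidates by how many historical hires share their school."""
    def score(c):
        return sum(1 for h in history if h["school"] == c["school"])
    return sorted(pool, key=score, reverse=True)[:k]

round1 = recommend(candidates[50:], past_hires)
print("schools in recommendations:", [c["school"] for c in round1])  # all "X"
```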

 

Ref: The problem with algorithms: magnifying misbehaviour – The Guardian