Algorithms Are Great and All, But They Can Also Ruin Lives

On April 5, 2011, 41-year-old John Gass received a letter from the Massachusetts Registry of Motor Vehicles. The letter informed Gass that his driver’s license had been revoked and that he should stop driving, effective immediately. The only problem was that, as a conscientious driver who had not received so much as a traffic violation in years, Gass had no idea why it had been sent.

After several frantic phone calls, followed up by a hearing with Registry officials, he learned the reason: his image had been automatically flagged by a facial-recognition algorithm designed to scan through a database of millions of state driver’s licenses looking for potential criminal false identities. The algorithm had determined that Gass looked sufficiently like another Massachusetts driver that foul play was likely involved—and the automated letter from the Registry of Motor Vehicles was the end result.
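To make the mechanism concrete, here is a minimal sketch of the kind of pipeline the article describes, assuming a generic embedding-and-threshold design (the piece does not describe the RMV's actual system). Each licence photo is reduced to a feature vector, and any photo whose similarity to another clears a fixed cutoff is flagged, with the letter following automatically. The function name, the 0.92 cutoff, and the database size are illustrative assumptions, not details from the source.

```python
# Illustrative sketch only -- not the RMV's actual system. It assumes a generic
# embedding-plus-threshold matcher; all names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)

N_DRIVERS = 100_000      # stand-in for a database of millions of licence photos
EMBEDDING_DIM = 128      # typical length of a face-embedding vector
MATCH_THRESHOLD = 0.92   # scores above this are treated as a suspected false identity

# Unit-length stand-ins for the embeddings a face-recognition model would produce.
database = rng.normal(size=(N_DRIVERS, EMBEDDING_DIM))
database /= np.linalg.norm(database, axis=1, keepdims=True)


def flag_suspected_duplicates(photo: np.ndarray, db: np.ndarray, threshold: float) -> np.ndarray:
    """Return indices of database photos whose cosine similarity to `photo`
    meets or exceeds `threshold` -- each hit would trigger an automated letter."""
    scores = db @ photo                    # cosine similarity (vectors are unit-normalised)
    return np.flatnonzero(scores >= threshold)


# A new licence photo that happens to resemble driver 0 (simulated here by adding noise).
# Whether the owner is a fraudster or simply a look-alike, the algorithm cannot tell:
# crossing the threshold is all it takes.
new_photo = database[0] + 0.02 * rng.normal(size=EMBEDDING_DIM)
new_photo /= np.linalg.norm(new_photo)
print(flag_suspected_duplicates(new_photo, database, MATCH_THRESHOLD))  # e.g. [0]
```

The design choice the article turns on is that last comparison: a single similarity score crossing a threshold is enough to set the revocation process in motion, with no human review before the letter goes out.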

The RMV itself was unsympathetic, claiming that it was the accused individual’s “burden” to clear his or her name in the event of any mistakes, and arguing that the benefits of protecting the public far outweighed the inconvenience to the wrongly targeted few.

John Gass is hardly alone in being a victim of algorithms gone awry. In 2007, a glitch in the California Department of Health Services’ new automated computer system terminated the benefits of thousands of low-income seniors and people with disabilities. Without their premiums paid, Medicare canceled those citizens’ health care coverage.

[…]

Equally alarming is the possibility that an algorithm may falsely profile an individual as a terrorist: a fate that befalls roughly 1,500 unlucky airline travelers each week. Those fingered in the past as a result of data-matching errors include former Army majors, a four-year-old boy, and an American Airlines pilot—who was detained 80 times over the course of a single year.

[…]

“We are all so scared of human bias and inconsistency,” says Danielle Citron, professor of law at the University of Maryland. “At the same time, we are overconfident about what it is that computers can do.”

The mistake, Citron suggests, is that we “trust algorithms, because we think of them as objective, whereas the reality is that humans craft those algorithms and can embed in them all sorts of biases and perspectives.” To put it another way, a computer algorithm may be unbiased in its execution, but that does not mean there is no bias encoded within it.


Ref: Algorithms Are Great and All, But They Can Also Ruin Lives – Wired