How Relying on Algorithms and Bots Can Be Really, Really Dangerous

 

[…] Some are quite prosaic, like the welter of new gadgets that try to “nudge” us into better behavior. In his new book To Save Everything, Click Here, Evgeny Morozov casts a skeptical eye on this stuff. He tells me about a recent example he’s seen: A smart fork that monitors how much you’re eating and warns you if you’re overdoing it.

Fun and useful, you might argue. But for Morozov, tools like the fork reduce your incentive to think about how you're eating, and about the deeper political question of why today's food ecosystem is so fattening. "Instead of regulating the food industry to make food healthier," Morozov says, "we're giving people smart forks."

Or as Evan Selinger, a philosopher at Rochester Institute of Technology, puts it, tools that make hard things easy can make us less likely to tolerate things that are hard. Outsourcing our self-control to “digital willpower” has consequences: Use Siri constantly to get instant information and you can erode your ability to be patient in the face of incomplete answers, a crucial civic virtue.

[…]

And this, really, is the core of the question here: Efficiency isn’t always a good thing. Tech lets us do things more easily. But this can mean doing them less reflectively too.

We're not going to throw out all technology, nor should we. Efficiency isn't always bad. But Morozov suggests that sometimes tools should do the opposite: they should introduce friction. For example, new parking meters reset when you drive away, so the next driver can't use any time you've paid for. The city makes more money, obviously, but that design also forecloses a choice. What if a "smart" meter instead asked you to decide: gift the remaining time to the next driver, or to the city? That would foreground one of the tiny moral trade-offs of daily life, city versus citizen.

Or consider the Caterpillar, a prototype power strip created by German designers that detects when a plugged-in device is in standby mode. Instead of turning off the device—a traditional efficiency move—the Caterpillar leaves it on, but starts writhing. The point is to draw your attention to your power usage, to force you to turn it off yourself and meditate on why you’re using so much.

These are kind of crazy, of course. They’re not tools that solve problems. They’re tools to make you think about problems—which is precisely the point.

 

Ref: How Relying on Algorithms and Bots Can Be Really, Really Dangerous – Wired

PatientsLikeMe

 

Nearly 200,000 PatientsLikeMe members have created and shared their own medical records, often using standardized questionnaires or tests they conducted themselves. The new platform will include tools for developing standardized measurements for additional diseases, tools to evaluate and refine those measurements, and mechanisms for licensing the data and for open-sourcing the measurements used to collect the data under a Creative Commons license.

The plan, announced at the TED Conference Monday, is to rapidly accelerate the spread of medical data now hoarded by private companies, locked down by privacy laws, and collected using often proprietary and commercially licensed measurement systems.

 

Ref: Social Network Could Revolutionize Disease Treatment – Wired
Ref: PatientsLikeMe

Facebook Likes Can Be Used to Determine a User's Personality

That might go without saying, but the brainiacs at the University of Cambridge Psychometrics Center and Microsoft Research Cambridge have the data to prove it – and a lot of other things about you, too. They analyzed the Likes of 58,466 volunteers and were able to determine with surprisingly high accuracy a range of personal information that some Facebook users may not have made public, including their sexuality, where they worship, how they’ll vote in the next election and what their IQ is.

Simply by delving into volunteers' Likes, the researchers could determine in 95 percent of cases whether a person was Caucasian or African American and in 88 percent of cases whether the person was heterosexual or homosexual. They could determine whether the person was Christian or Muslim 82 percent of the time.

The researchers described Facebook Likes as “a generic class of digital record that could be used to extract sensitive information.” Volunteers used the myPersonality Facebook app to track their Likes, which were fed into algorithms to arrive at the results. The data were supported by information from volunteer profiles and personality tests.
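The study's actual pipeline reduced the Like matrix with singular-value decomposition before fitting regression models; the snippet below is a much simpler naive-Bayes-style sketch of the underlying idea, treating each Like as a binary feature whose presence shifts the odds of a trait. All page names and numbers here are illustrative, not the study's data.

```python
from collections import defaultdict
from math import log

def train_like_model(users, smoothing=1.0):
    """Learn a per-Like log-odds weight for a binary trait.

    users: list of (set_of_liked_pages, has_trait) pairs.
    Returns {page: weight}; positive weights favor the trait.
    """
    pos = sum(1 for _, has_trait in users if has_trait)
    neg = len(users) - pos
    pos_counts = defaultdict(float)
    neg_counts = defaultdict(float)
    for likes, has_trait in users:
        counts = pos_counts if has_trait else neg_counts
        for page in likes:
            counts[page] += 1
    weights = {}
    for page in set(pos_counts) | set(neg_counts):
        # Laplace-smoothed frequency of the Like in each group
        p = (pos_counts[page] + smoothing) / (pos + 2 * smoothing)
        q = (neg_counts[page] + smoothing) / (neg + 2 * smoothing)
        weights[page] = log(p / q)
    return weights

def trait_score(weights, likes):
    """Sum of log-odds over a user's Likes; > 0 leans toward the trait."""
    return sum(weights.get(page, 0.0) for page in likes)
```

With thousands of users and tens of thousands of pages, even a crude scorer like this picks up the correlations the researchers reported; the point is how little modeling machinery is needed once the Likes exist.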

Personality trait and predictive Likes, according to the study:

High IQ: The Godfather; Lord of the Rings; The Daily Show

Low IQ: Harley Davidson; I Love Being A Mom; Tyler Perry

Emotional stability (neurotic): Emo; Dot Dot Curve; So So Happy

Emotional stability (calm and relaxed): Business administration; Climbing; Getting Money

Homosexual males: Wicked the Musical; No H8 Campaign; Human Rights Campaign

Homosexual females: Not Being Pregnant; The L Word; Sometimes I Just Lay In Bed and Think About Life

Parents separated at 21: I'm Sorry I Love You; Never Apologize For What You Feel, It's Like Saying Sorry For Being Real; When Ur Single, All U See Is Happy Couples N Wen Ur In A Relationship All U See Is Happy Singles

Parents did not separate at 21: Apples To Apples: The Helen Keller Card; Gene Wilder; Making Dirty Innuendos Out Of Perfectly Innocent Things

 

Ref: Study: Facebook Likes Can Be Used to Determine Intelligence, Sexuality – Wired
Ref: Facebook users unwittingly revealing intimate secrets, study finds – The Guardian

Navy Identifies Software Algorithms As a Major Point of Concern For 2013

Enter the Office of Naval Research. One of its new special program announcements for 2013 identifies software algorithms as a major point of concern: It wants more robust logic tools that play nicely across hardware and software platforms, pre-assembling a mosaic of threats. Don't bother writing them better search tools for sifting through their data archives: The Navy expressly rules that out. It wants the imagery equivalent of pre-cut vegetables in a salad bag.

[…]

A related effort, called Automated Image Understanding, gets more explicit. It’s about “detection and tracking of objects on water or in urban areas and inferring the threat level they may pose” — sharply enough that the algorithm should be able to pick out “partially occluded objects in urban clutter.” All this has to happen in real time.

 

Ref: Navy Wants You to Write Algorithms That Automatically ID Threats – Wired

Mobile Urine Lab

 

Dubbed Uchek, Ingawale's creation is a deceptively simple app that analyzes chemical test strips: it has you photograph the pee-soaked strip with your phone at predetermined times, then compares the colors that appear on the strip to a color-coded reference map.

With the color comparisons as a guide, the app analyzes the results, and comes back in seconds with a breakdown of the levels of glucose, bilirubin, proteins, specific gravity, ketones, leukocytes, nitrites, urobilinogen and hematuria present in the urine. The parameters the app measures are especially helpful for those people managing diabetes, and kidney, bladder and liver problems, or ferreting out the presence of a urinary tract infection.
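The core of that color comparison can be sketched as a nearest-neighbor lookup against a reference chart. The chart values below are invented for illustration (real ones come from the strip manufacturer's printed color map), and a real app like Uchek also has to correct for lighting and locate each reagent pad in the photo first.

```python
# Hypothetical reference chart for one reagent pad (glucose).
# Keys are result levels; values are the printed reference RGB colors.
GLUCOSE_CHART = {
    "negative": (158, 195, 180),
    "trace":    (140, 185, 130),
    "+1":       (130, 160, 90),
    "+2":       (110, 130, 60),
    "+3":       (90, 100, 40),
}

def match_pad(rgb, chart):
    """Return the chart level whose reference color is nearest to the
    photographed pad color, by squared Euclidean distance in RGB space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(chart, key=lambda level: dist2(rgb, chart[level]))
```

Run once per reagent pad on the strip, this yields exactly the kind of "negative" / "trace" / "+3" readout the app reports.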

In use, the app delivers information that everyone can understand, returning either positive or negative results, numbers, or descriptors like “trace” or “large.” If you don’t know that the presence of leukocytes might indicate a urinary tract infection, you simply tap on the leukocytes tab for more information. “The idea is to get people closer to their own information,” Ingawale, 29, says. “I want people to better understand what is going on with their bodies.”

 

Ref: New App Turns Your iPhone Into a Mobile Urine Lab – Wired
Ref: Uchek 

Google Glass Will Identify People by Clothing Choices

 

A new technology built into the device, dug up by New Scientist, takes Google Glass from interesting to downright creepy. Google Glass can now pick a person out of a crowd based on their fashion style.

The system, InSight, developed in partnership with Google, will take a nice little moment to assess the clothing in frame, and then point out exactly where your friends are in busy settings like a bar, concert, or sporting event. It could probably point you out in a protest or shopping mall too.
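One plausible way to build such a "fashion fingerprint" is a coarse color histogram of a person's clothing, matched against stored fingerprints of your friends. This is a toy sketch of that idea, not InSight's actual method (which reportedly combines color with other visual features); every function name here is invented.

```python
from collections import Counter

def fashion_fingerprint(pixels, bins=4):
    """Normalized coarse color histogram over an iterable of (r, g, b)
    clothing pixels: quantize each channel into `bins` buckets and count."""
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(counts.values())
    return {bucket: n / total for bucket, n in counts.items()}

def similarity(fp_a, fp_b):
    """Histogram intersection in [0, 1]; higher means the outfits look
    more alike in color."""
    return sum(min(fp_a.get(k, 0.0), fp_b.get(k, 0.0))
               for k in set(fp_a) | set(fp_b))
```

The fingerprint is deliberately coarse: it should match the same outfit under slightly different light, which is also why it stops working as soon as you change clothes.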

 

Ref: Creepier By The Minute: Google Glass Will Identify People By Clothing Choices – Macgasm

Perfect Speech Recognition Using Crowdsourcing

Analyzing speech and improving speech-to-text machines has been a hobby horse for Darpa in recent years. But this takes it a step further, in exploring the ways crowdsourcing can make it possible for our speech to be recorded and stored forever. But it’s not just about better recordings of what you say. It’ll lead to more recorded conversations, quickly transcribed and then stored in perpetuity — like a Twitter feed or e-mail archive for everyday speech. Imagine living in a world where every errant utterance you make is preserved forever.

University of Texas computer scientist Matt Lease has studied crowdsourcing for years, including for an earlier Darpa project called Effective Affordable Reusable Speech-to-text, or EARS, which sought to boost the accuracy of automated transcription machines. His work has attracted enough attention for Darpa to award him $300,000 over two years to study the new project, called "Blending Crowdsourcing with Automation for Fast, Cheap, and Accurate Analysis of Spontaneous Speech." The project envisions a world that is both radically transparent and a little freaky.

 

Ref: DARPA: Perfect Speech Recognition, Conversations Stored Forever – DarkGovernment

Rapyuta: The RoboEarth Cloud Engine

 

European scientists have turned on the first part of a web-based database of information to help robots cope with the complexities of the real world.

Called Rapyuta, the online “brain” describes objects robots have met and can also carry out complicated computation on behalf of a robot.

Rapyuta’s creators hope it will make robots cheaper as they will not need all their processing power on-board.

 

Ref: Web Database for Robots Comes Online – DarkGovernment
Ref: Rapyuta

Algorithmic Rape Jokes on Amazon

 

A t-shirt company called Solid Gold Bomb was caught selling shirts with the slogan "KEEP CALM and RAPE A LOT" on them. They also sold shirts like "KEEP CALM and CHOKE HER" and "KEEP CALM and PUNCH HER". The Internet—especially the UK Internet—exploded.

How did this happen?

“Algorithms!”

[…]

Pete Ashton argues that—because the jokes were generated by a misbehaving script—“as mistakes go it’s a fairly excusable one, assuming they now act on it”. He suggests that the reason people got so upset was a lack of digital literacy. I suggest that the reason people got upset was that a company’s shoddy QA practices allowed a rape joke to go live.

Anyone who's worked with software should know that the actual typing of code is a relatively small part of the overall programming work. Designing the program before you start coding and debugging it after you've written it make up the bulk of the job.

Generative programs are force multipliers. Small initial decisions can have massive consequences. The greater your reach, the greater your responsibility to manage your output. When Facebook makes an error that affects 0.1% of users, it means 1 million people got fucked up.
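By most accounts the shirts came from a script that combined "KEEP CALM and" with word lists; the sketch below is a hypothetical reconstruction of that pattern (the word lists and blocklist are mine, not Solid Gold Bomb's code). It shows both the force multiplication and where the neglected QA step belongs.

```python
from itertools import product

VERBS = ["CARRY", "CALL", "HUG", "CHOKE"]   # one bad apple in the word list
PRONOUNS = ["ON", "HIM", "HER", "ME"]
BLOCKLIST = {"CHOKE"}                        # the human-curated QA step

def generate_slogans(verbs, pronouns, blocklist):
    """Yield every 'KEEP CALM and VERB PRONOUN' combination, skipping any
    verb a human has flagged. With 1,000 verbs and 10 pronouns the script
    emits 10,000 shirts, which is why curating the lists and filtering the
    output, not the generator itself, is the real work."""
    for verb, pronoun in product(verbs, pronouns):
        if verb in blocklist:
            continue
        yield f"KEEP CALM and {verb} {pronoun}"
```

Delete the blocklist check and the script happily produces the offending shirts: the "algorithm" did exactly what its unreviewed inputs told it to.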

‘We didn’t cause a rape joke to happen, we allowed a rape joke to happen,’ is not a compelling excuse. It betrays a lack of digital literacy.

Interesting comments from people:

People, enough of the ‘A big algorithm did it and ran away’ explanations (eg. http://iam.peteashton.com/keep-calm-rape-tshirt-amazon/ …) – algorithms have politics too – @gsvoss

 

I’m REALLY tired of the “it’s the computer program” excuse for inexcusable behaviour. Behind every computer algorithm, a human being is sitting there programming. Use your “real” brains, you idiots, and join the real world. There are no excuses for this. None. Period. – jen

 

Not good enough, I’m afraid. The same company are still selling a t-shirt that says ‘Keep calm and hit her’.
No computer generated that. Why, for example, doesn’t it say ‘hit him’?
Because someone ran an eye over it to ensure it was sufficiently ‘funny’ I would say.
If they were genuinely horrified by what their algorithm produced that t-shirt would be gone too. Seems to me they’re just a bunch of sad gits. – Ita Ryan

 

 

Ref: Algorithmic Rape Jokes in the Library of Babel – QuietBabylon (via algopop)