Google’s Autocomplete – Negative Stereotyped Search

I am not implying that the negatively stereotyped search term suggestions about women reflect Google’s intent – I rather suspect a coordinated bunch of MRAs is to blame for the volume of said search terms – but that doesn’t mean Google is completely innocent. The question of accountability goes beyond a binary choice between intentionality and complete innocence.

Unsurprisingly, Google doesn’t take any responsibility. It puts the blame on its own algorithms… as if the algorithms were beyond the company’s control.

Der Spiegel wrote (about another autocompletion affair):

The company maintains that the search engine only shows what exists. It’s not its fault, argues Google, if someone doesn’t like the computed results. […]
Google increasingly influences how we perceive the world. […] Contrary to what the Google spokesman suggests, the displayed search terms are by no means solely based on objective calculations. And even if that were the case, just because the search engine means no harm, it doesn’t mean that it does no harm.

If we, as a society, do not want negative stereotypes (be they sexist, racist, ableist or otherwise discriminatory) to prevail in Google’s autocompletion, where can we locate accountability? With the people who first asked stereotyping questions? With the people who asked next? Or with the people who accepted Google’s suggestion to search for the stereotyping questions instead of searching for what they originally intended? What about Google itself?

Of course, algorithms imply automation. And digital literacy helps in understanding the process of automation – I have said this before – but algorithms are more than a technological issue: they involve not only automated data analysis, but also decision-making (cf. “Governing Algorithms: A Provocation Piece” #21. No, actually, you should not only read #21 but the whole, very thought-provoking provocation piece!). This makes it impossible to ignore the question of whether algorithms can be accountable.
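
To make the decision-making point concrete, here is a minimal sketch of a popularity-based suggestion ranker in Python. It is a hypothetical toy, not Google’s actual pipeline: the query counts, the suggest function and the blocklist are all invented for illustration. What it shows is that both the seemingly “objective” step (counting queries) and the editorial step (filtering) rest on choices made by people.

    from typing import Dict, List

    # Hypothetical query log: candidate completions and how often users
    # typed them. The numbers are invented for illustration; Google's
    # real signals and weights are not public.
    query_counts: Dict[str, int] = {
        "women are equal": 120_000,
        "women are strong": 95_000,
        "women are [stereotype]": 300_000,  # inflatable by coordinated searching
    }

    # A human-curated blocklist: an editorial policy decision, not a computation.
    BLOCKLIST = {"women are [stereotype]"}

    def suggest(prefix: str, k: int = 3) -> List[str]:
        """Rank candidate completions by raw popularity, then apply the
        blocklist. Both steps embody choices made by people."""
        candidates = [q for q in query_counts if q.startswith(prefix)]
        candidates.sort(key=lambda q: query_counts[q], reverse=True)
        return [q for q in candidates if q not in BLOCKLIST][:k]

    print(suggest("women are"))

Remove the blocklist and the most-searched stereotype tops the list; keep it and someone at the company has made an editorial call. Either way, a person decided.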

In a recent Atlantic article advocating reverse engineering, Nicholas Diakopoulos asserts:

[…] given the growing power that algorithms wield in society it’s vital to continue to develop, codify, and teach more formalized methods of algorithmic accountability.

I think that would be a great thing because, at the very least, it will raise awareness. (I don’t agree that “algorithmic accountability” can be assigned a priori, though.) But if algorithms are not accountable, then who is? The people/organization/company creating them? The people/organization/company deploying them? Or the people/organization/company using them? This brings us back to the conclusion that the question of accountability goes beyond a binary choice between intentionality and complete innocence… which makes the whole thing an extremely complex issue.
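
As a sketch of what such a reverse-engineering effort could look like in practice, the snippet below probes autocomplete with a handful of prefixes and records what comes back. It relies on the unofficial suggest endpoint that browsers query; the URL, the client=firefox parameter and the response shape are assumptions about an undocumented interface that Google may change or block at any time.

    import json
    import urllib.parse
    import urllib.request

    # Unofficial, undocumented endpoint (assumption: it still responds
    # with JSON of the form [prefix, [suggestion, ...]]).
    SUGGEST_URL = "https://suggestqueries.google.com/complete/search?client=firefox&q="

    def autocomplete(prefix: str) -> list:
        """Return the suggestions served for a given prefix."""
        url = SUGGEST_URL + urllib.parse.quote(prefix)
        with urllib.request.urlopen(url, timeout=10) as resp:
            charset = resp.headers.get_content_charset() or "utf-8"
            payload = json.loads(resp.read().decode(charset))
        return payload[1]

    # Systematically recorded input/output pairs are the raw material of
    # an algorithmic-accountability audit: repeat over time, locations
    # and languages to see how the suggestions differ.
    for prefix in ["women should", "women need to", "men should"]:
        print(prefix, "->", autocomplete(prefix))

Such probing only observes behaviour from the outside, of course: it tells us what the algorithm does, not who decided it should do that – which is precisely where the accountability question resurfaces.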

Who is in charge when algorithms are in charge?

Ref: Google’s autocompletion: algorithms, stereotypes and accountability – Sociostrategy
Ref: Google’s autocomplete spells out our darkest thoughts – The Guardian