Algorithms Could Magnify Misbehaviour

We live in the Age of the Algorithm, where computer models save time, money and lives. Gone are the days when labyrinthine formulae were the exclusive domain of finance and the sciences – nonprofit organisations, sports teams and the emergency services are now among their beneficiaries. Even romance is no longer a statistics-free zone.

But the very feature that makes algorithms so valuable – their ability to replicate human decision-making in a fraction of the time – can be a double-edged sword. If the observed human behaviours that dictate how an algorithm transforms input into output are flawed, we risk setting in motion a vicious circle when we hand over responsibility to The Machine.

For one British university, what began as a time-saving exercise ended in disgrace when a computer model set up to streamline its admissions process exposed – and then exacerbated – gender and racial discrimination.

As detailed in the British Medical Journal, staff at St George’s Hospital Medical School decided to write an algorithm to automate the first round of the school’s admissions process. The model used historical patterns in the characteristics of candidates whose applications had traditionally been rejected to filter out new candidates whose profiles matched those of the least successful applicants.
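The BMJ account does not reproduce the model itself, but the mechanism it describes is easy to sketch: a classifier fitted to the panel’s past decisions will faithfully reproduce whatever patterns those decisions contain, discriminatory ones included. A minimal, hypothetical Python illustration follows; the features, data and choice of logistic regression are all assumptions, not details of the St George’s system.

```python
# Hypothetical sketch of a screening model fitted to historical panel decisions.
# Feature names, data and model choice are invented; the real system is not public.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a past applicant: [exam_score, is_female, has_non_european_name]
past_applicants = np.array([
    [85, 0, 0], [78, 0, 0], [90, 1, 0], [88, 0, 1],
    [82, 1, 1], [91, 0, 0], [87, 1, 0], [84, 0, 1],
])
# Label: the historical panel decision (1 = shortlisted, 0 = rejected).
# Here the panel has systematically rejected women and non-European names.
panel_decision = np.array([1, 1, 0, 0, 0, 1, 0, 0])

# Fitting to these decisions bakes the panel's bias into the model's weights.
model = LogisticRegression().fit(past_applicants, panel_decision)

# A new applicant with strong exam marks is scored down purely on proxy features.
new_applicant = np.array([[89, 1, 1]])
print(model.predict_proba(new_applicant)[0, 1])  # estimated chance of being shortlisted
```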

By 1979 the list of candidates selected by the algorithm was a 90-95% match for the list chosen by the selection panel, and in 1982 it was decided that the entire initial stage of the admissions process would be handled by the model. Candidates were assigned a score without their applications having been read by a single pair of human eyes, and that score determined whether or not they would be interviewed.
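In the hypothetical terms of the sketch above, a score-and-cut-off step like the one described would look something like this; the cut-off value is an assumption, as the real threshold is not documented.

```python
# Hypothetical continuation of the sketch above: the model's score alone
# decides who is interviewed. The 0.5 cut-off is an assumed value.
INTERVIEW_CUTOFF = 0.5

def shortlist_for_interview(applicants, model, cutoff=INTERVIEW_CUTOFF):
    """Return indices of applicants whose model score clears the cut-off.

    No human reads the applications that fall below the line, so any bias
    baked into the score goes unchallenged at this stage.
    """
    scores = model.predict_proba(applicants)[:, 1]
    return [i for i, score in enumerate(scores) if score >= cutoff]
```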

Quite aside from the obvious concerns a student might have on discovering that a computer was rejecting their application, a more disturbing discovery was made: the admissions data used to define the model’s outputs showed bias against women and against applicants with non-European-sounding names.

Recent developments in the recruitment industry expose it to similar risks. Earlier this year LinkedIn launched a new recommendation service for recruiters, built on algorithms similar in their basic purpose to those used at St George’s.

‘People You May Want to Hire’ uses a recruiter or HR professional’s existing and ongoing candidate-selection patterns to suggest other individuals they might like to consider hiring.

“The People You May Want to Hire feature within LinkedIn Recruiter looks at a wide range of members’ public professional data – like work experience, seniority, skills, location and education – and suggests relevant candidates that may not otherwise show up in a recruiter’s searches on LinkedIn. Gender and ethnicity are not elements we ask for or track anywhere on Recruiter”, said Richard George, corporate communications manager at LinkedIn.

Although gender and race play no part in the process per se, a LinkedIn user’s country of residence could be one criterion the model uses to filter candidates in or out. An individual’s high school, their LinkedIn connections and – to an extent – the university they attended are three more examples of essentially arbitrary characteristics that could become increasingly significant in candidate selection as a result of the algorithm’s iterative nature.
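The risk is the same feedback loop St George’s fell into: each round of recruiter choices, already shaped by the previous round of recommendations, becomes the training signal for the next. A deliberately crude, hypothetical sketch of how an essentially arbitrary proxy feature could gain weight over successive rounds; none of the numbers or the update rule reflect LinkedIn’s actual system.

```python
# Hypothetical feedback loop: recruiter picks, already nudged by the model's
# suggestions, are fed back as training signal, so an arbitrary proxy feature
# (say, having attended a particular high school) gains weight each round.
# All numbers and the update rule are invented for illustration.

proxy_weight = 0.1   # initially a near-irrelevant feature
learning_rate = 0.5

for round_number in range(1, 6):
    # The model recommends candidates partly on the proxy feature, so the
    # recruiter sees, and therefore hires, disproportionately many of them.
    share_recommended = min(1.0, 0.5 + proxy_weight)
    share_hired = share_recommended  # recruiters pick mostly from what they are shown

    # Retraining on those hires pushes the proxy weight further upward.
    proxy_weight += learning_rate * (share_hired - 0.5)
    print(f"round {round_number}: proxy feature weight = {proxy_weight:.2f}")
```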


Ref: The problem with algorithms: magnifying misbehaviour – The Guardian