Category Archives: T – ethics

Algorithms Could Magnify Misbehaviour

We live in the Age of the Algorithm, where computer models save time, money and lives. Gone are the days when labyrinthine formulae were the exclusive domain of finance and the sciences – nonprofit organisations, sports teams and the emergency services are now among their beneficiaries. Even romance is no longer a statistics-free zone.

But the very feature that makes algorithms so valuable – their ability to replicate human decision-making in a fraction of the time – can be a double-edged sword. If the observed human behaviours that dictate how an algorithm transforms input into output are flawed, we risk setting in motion a vicious circle when we hand over responsibility to The Machine.

For one British university, what began as a time-saving exercise ended in disgrace when a computer model set up to streamline its admissions process exposed – and then exacerbated – gender and racial discrimination.

As detailed in the British Medical Journal, staff at St George’s Hospital Medical School decided to write an algorithm that would automate the first round of the school’s admissions process. The formula used historical patterns in the characteristics of candidates whose applications had traditionally been rejected to filter out new candidates whose profiles matched those of the least successful applicants.

By 1979 the list of candidates selected by the algorithm was a 90-95% match for the list chosen by the selection panel, and in 1982 it was decided that the whole initial stage of the admissions process would be handled by the model. Candidates were assigned a score without their applications having passed before a single pair of human eyes, and that score determined whether or not they would be interviewed.

Quite aside from the obvious concerns a student might have upon discovering that a computer was rejecting their application, a more disturbing discovery was made. The historical admissions data used to define the model’s outputs was biased against women and against candidates with non-European names – and the model reproduced that bias.
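
To see the mechanism concretely, here is a minimal, purely illustrative Python sketch – not the actual St George’s formula, with invented feature names, weights and data. A screening model fitted to historical panel decisions learns whatever penalties those decisions contained and then applies them automatically to every new applicant.

```python
# Purely illustrative sketch - not the St George's model. Feature names,
# weights and data are invented to show how a screening algorithm inherits
# the bias present in the historical decisions it is trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical historical applicants: one merit feature plus two attributes
# the (simulated) panel penalised.
academic = rng.normal(0, 1, n)
female = rng.integers(0, 2, n)
non_european_name = rng.integers(0, 2, n)

# Simulated historical "invited to interview" decisions: merit plus a
# discriminatory penalty.
logit = 1.5 * academic - 0.8 * female - 1.0 * non_european_name
interviewed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit the screening model on those historical decisions.
X = np.column_stack([academic, female, non_european_name])
model = LogisticRegression().fit(X, interviewed)

# The learned weights recover the panel's penalties, so the automated first
# round now applies them consistently, at scale, with no human in the loop.
print(dict(zip(["academic", "female", "non_european_name"],
               model.coef_[0].round(2))))

# Two new applicants with identical academic records, differing only in name:
print(model.predict_proba([[1.0, 0, 0], [1.0, 0, 1]])[:, 1].round(2))
```

Nothing in the code “decides” to discriminate; the discrimination arrives pre-packaged in the historical labels it was trained on.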

Recent developments in the recruitment industry open it up to similar risks. Earlier this year LinkedIn launched a new recommendation service for recruiters, which runs off algorithms similar in their basic purpose to those used at St George’s.

‘People You May Want To Hire’ uses a recruiter or HR professional’s existing and ongoing candidate selection patterns to suggest to them other individuals they might like to consider hiring.

“The People You May Want to Hire feature within LinkedIn Recruiter looks at a wide range of members’ public professional data – like work experience, seniority, skills, location and education – and suggests relevant candidates that may not otherwise show up in a recruiter’s searches on LinkedIn. Gender and ethnicity are not elements we ask for or track anywhere on Recruiter”, said Richard George, corporate communications manager at LinkedIn.

Although gender and race play no part in the process per se, a LinkedIn user’s country of residence could be one criterion used by the model to filter in or out certain candidates. An individual’s high school, LinkedIn connections and – to an extent – the university they attended are just three more examples of essentially arbitrary characteristics that could become more and more significant in candidate selection as a result of the algorithm’s iterative nature.
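
LinkedIn’s actual ranking system is not public, so the following is only a toy sketch of the feedback loop being described: a model is repeatedly refitted to hires that were themselves drawn from the model’s own shortlist, and an arbitrary attribute such as “attended the same school” keeps its weight from round to round even though it says nothing about ability.

```python
# Toy sketch of an iterative recommend-and-hire loop; LinkedIn's actual
# system is not public and nothing here is taken from it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def candidate_pool(n=500):
    skill = rng.normal(0, 1, n)             # genuinely job-relevant
    same_school = rng.integers(0, 2, n)     # arbitrary proxy attribute
    return np.column_stack([skill, same_school])

# Round 0: the recruiter selects by hand, with a mild (perhaps unconscious)
# preference for candidates from a familiar school.
X = candidate_pool()
appeal = X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.5, len(X))
hired = (appeal > np.quantile(appeal, 0.9)).astype(int)

model = LogisticRegression()
for round_no in range(1, 6):
    # Refit the recommender on the latest hiring outcomes ...
    model.fit(X, hired)
    # ... then let it pre-filter the next pool: only top-ranked candidates
    # are surfaced, and the recruiter hires from within that shortlist.
    X = candidate_pool()
    ranked = model.predict_proba(X)[:, 1]
    shortlisted = ranked > np.quantile(ranked, 0.8)
    hired = (shortlisted & (X[:, 0] > 0)).astype(int)
    print(f"round {round_no}: weight on 'same_school' = {model.coef_[0][1]:.2f}")
```

In this toy loop the hiring decisions inside the shortlist ignore the school entirely, yet the attribute keeps a positive weight round after round, because the shortlist itself depends on it.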

 

Ref: The problem with algorithms: magnifying misbehaviour – The Guardian

EthEl – The Ethical Robot

 

Researchers Michael Anderson from the University of Hartford and Susan Leigh Anderson from the University of Connecticut have developed an approach to computing ethics that entails the discovery of ethical principles through machine learning and the incorporation of these principles into a system’s decision procedure. They have programmed their system into the NAO robot, manufactured by Aldebaran Robotics. It is the first robot to have been programmed with an ethical principle.
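
In the Andersons’ approach the principle itself is discovered by machine learning; the sketch below illustrates only the general shape of such a duty-based decision procedure in a much-simplified, hypothetical form. The duty names, scores and weights are invented for the example – they are not the principle their system actually learned.

```python
# Much-simplified, hypothetical illustration of a duty-based decision
# procedure, loosely inspired by the eldercare setting. The duty names and
# weights are invented for this sketch, not taken from the Andersons' work.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    duties: dict  # how strongly the action satisfies (+) or violates (-) each duty

# In the real approach the relative importance of duties is discovered by
# machine learning from training cases; here the weights are simply hand-set.
WEIGHTS = {"beneficence": 1.0, "non_maleficence": 2.0, "respect_autonomy": 1.5}

def score(action: Action) -> float:
    return sum(WEIGHTS[d] * v for d, v in action.duties.items())

actions = [
    Action("remind the patient to take medication",
           {"beneficence": 1, "non_maleficence": 1, "respect_autonomy": -1}),
    Action("notify the overseer after repeated refusals",
           {"beneficence": 2, "non_maleficence": 2, "respect_autonomy": -2}),
    Action("do nothing",
           {"beneficence": -1, "non_maleficence": -2, "respect_autonomy": 2}),
]

# The decision procedure: choose the action with the highest weighted score.
best = max(actions, key=score)
print(best.name, round(score(best), 2))
```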

 

Ref: EthEl – A Principled Ethical Eldercare Robot – Franz

Are Face-Detection Cameras Racist?

 

TIME tested two of Sony’s latest Cyber-shot models with face detection (the DSC-TX1 and DSC-WX1) and found they, too, had a tendency to ignore camera subjects with dark complexions.

But why? It’s not necessarily the programmers’ fault. It comes down to the fact that the software is only as good as its algorithms, or the mathematical rules used to determine what a face is. There are two ways to create them: by hard-coding a list of rules for the computer to follow when looking for a face, or by showing it a sample set of hundreds, if not thousands, of images and letting it figure out what the ones with faces have in common.
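
As a concrete illustration of the second approach, here is a minimal sketch using OpenCV’s stock pre-trained Haar cascade, a detector learned from a large sample set of labelled face images; the image path is a placeholder. The point is simply that such a detector can only recognise faces that resemble the images it was trained on, so gaps in that sample set become gaps in what it sees.

```python
# Minimal sketch of the "learn from examples" approach: OpenCV's stock Haar
# cascade was trained on a large set of labelled face images, and it can
# only find faces that resemble that sample set. 'photo.jpg' is a
# placeholder path.
import cv2

image = cv2.imread("photo.jpg")
if image is None:
    raise SystemExit("photo.jpg not found; supply any test image")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Detection is driven entirely by the contrast patterns the cascade learned;
# the tuning parameters change sensitivity, not what it was trained to see.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} face(s) detected")
```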

 

Ref: Are Face-Detection Cameras Racist? – Time

Bot to Lure Pedophiles

Spanish researchers have developed an advanced — and extremely convincing — chatbot that poses as a 14-year-old girl. Called Negobot, the system will help authorities detect sexual predators in chatrooms and social networks.

To sniff out pedophilic behavior, the “conversational agent” utilizes natural language processing, artificial intelligence, machine learning, and even game theory — a mathematical system of strategic decision-making.

 […]

When applying game theory, the system works according to the conversation level, which depends on the input data from the targets. Here are some examples from the study (a simplified sketch of this level logic follows the list):

  • Possibly yes (Level +1). In this level, the subject shows interest in the conversation and asks about personal topics. The topics of the bot are favourite films, music, personal style, clothing, drugs and alcohol consumption and family issues. The bot is not too explicit in this stage.
  • Probably yes (Level +2). In this level, the subject remains interested in the conversation and the topics become more private. Sex situations and experiences appear in the conversation and the bot does not avoid talking about them. The information is more detailed and private than before because we have to make the subject believe that he/she holds a lot of personal information that could be used for blackmail. After reaching this level, it cannot decrease again.
  • Allegedly paedophile (Level +3). In this level, the system determines that the user is an actual paedophile. The conversations about sex become more explicit. Now, the objective is to keep the conversation active to gather as much information as possible. The information in this level is mostly sexual. The strategy in this stage is to give all the private information of the child simulated by the bot. After reaching this level, it cannot decrease again.
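
These are only some of the levels the study defines. The sketch below is a drastically simplified stand-in that illustrates just the one-way escalation described in the excerpt; the real system classifies messages with natural language processing, not the keyword lists assumed here.

```python
# Drastically simplified sketch of the one-way level escalation described in
# the excerpt. Negobot itself uses natural language processing and game
# theory to classify messages; the keyword sets here are only a stand-in.
PERSONAL = {"school", "family", "music", "films", "alone", "home"}
SEXUAL = {"sex", "body", "photo", "webcam"}

def classify(message: str) -> int:
    words = set(message.lower().split())
    if words & SEXUAL:
        return 2
    if words & PERSONAL:
        return 1
    return 0

def update_level(level: int, message: str) -> int:
    topic = classify(message)
    if topic == 2:
        # Sexual content escalates to +2, or to +3 if already at +2.
        new = 3 if level >= 2 else 2
    else:
        new = topic
    # Per the excerpt: once +2 or +3 is reached, the level never decreases.
    return max(level, new)

level = 0
for msg in ["are you home alone", "what music do you like",
            "send me a photo", "lets talk about sex"]:
    level = update_level(level, msg)
    print(f"{msg!r} -> level +{level}")
```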

Robots and Elder Care

 

Sherry Turkle, a professor of science, technology and society at the Massachusetts Institute of Technology and author of the book “Alone Together: Why We Expect More From Technology and Less From Each Other,” did a series of studies with Paro, a therapeutic robot that looks like a baby harp seal and is meant to have a calming effect on patients with dementia and Alzheimer’s in health care facilities. The professor said she was troubled when she saw a 76-year-old woman share stories about her life with the robot.

“I felt like this isn’t amazing; this is sad. We have been reduced to spectators of a conversation that has no meaning,” she said. “Giving old people robots to talk to is a dystopian view that is being classified as utopian.” Professor Turkle said robots did not have the capacity to listen to or understand something personal, and that tricking patients into thinking they can is unethical.

[…]

“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.

[…]

As the actor Frank Langella, who plays Frank in the movie, told NPR last year: “Every one of us is going to go through aging and all sorts of processes, many people suffering from dementia,” he said. “And if you put a machine in there to help, the notion of making it about love and buddy-ness and warmth is kind of scary in a way, because that’s what you should be doing with other human beings.”

 

Ref: Disruptions: Helper Robots Are Steered, Tentatively, to Care for the Aging – The New York Times

Ethics & the Virtual Brain

Sandberg quoted Jeremy Bentham, who famously said, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” And indeed, scientists will need to be very sensitive to this point.

Sandberg also pointed out the work of Thomas Metzinger, who back in 2003 argued that it would be deeply unethical to develop conscious software — software that can suffer.

Metzinger had this to say about the prospect:

What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development — we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby — no representatives in any ethics committee.

 

Ref: Would it be evil to build a functional brain inside a computer? – io9

Driver Behavior in an Emergency Situation in the Automated Highway System

Twenty participants completed test rides in a normal and an Automated Highway System (AHS) vehicle in a driving simulator. Three AHS conditions were tested: driving in a platoon of cars at 1 sec and at 0.25 sec time headway and driving as a platoon leader. Of particular interest was overreliance on the automated system, which was tested in an emergency condition where the automated system failed to function properly and the driver actively had to take over speed control to avoid an uncomfortably short headway of 0.1 m. In all conditions driver behavior and heart rate were registered, and ratings of activation, workload, safety, risk, and acceptance of the AHS were collected after the test rides. Results show lower physiological and subjectively experienced levels of activation and mental effort in conditions of automated driving. In the emergency situation, only half of the participants took over control, which supports the idea that the AHS, like any automation, is susceptible to complacency.
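
To make those headway figures concrete, here is a quick back-of-the-envelope conversion, assuming a typical highway speed of 30 m/s (roughly 108 km/h); that speed is an assumption for illustration, not a figure taken from the study.

```python
# Back-of-the-envelope conversion of the tested time headways into following
# distances. The 30 m/s (roughly 108 km/h) speed is an assumption for the
# sake of illustration, not a figure from the study.
speed_ms = 30.0

for headway_s in (1.0, 0.25):
    print(f"{headway_s:>4} s time headway -> {speed_ms * headway_s:.1f} m gap")

# The failure condition let the gap shrink towards 0.1 m, effectively bumper
# to bumper, unless the driver noticed and took over speed control.
print("emergency condition gap: 0.1 m")
```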

 

Ref: What Will Happen When Your Driverless Car Crashes? – Paleofuture

Legal Battle for Robot-Assisted Medicine

The da Vinci surgical robot (or, more accurately, its maker) was cleared of liability on Friday in the case of a man who died in 2012 after a botched robotic surgery four years earlier. The jury voted 10-2 in favor of Intuitive, the maker of the da Vinci, but you can rest assured this won’t be the last legal battle for robot-assisted medicine.

 

Ref: The Futuristic Robot Surgeons of 1982 Have Arrived – Paleofuture

The Programmable World – Google Being

 

“You are with my Google Being. I’m not physically here, but I am present. Unified logins let us get to know our audience in ways we never could before. They gave us their locations so that we might better tell them if it was raining outside. They told us where they lived and where they wanted to go so that we could deliver a more immersive map that better anticipated what they wanted to do–it let us very literally tell people what they should do today. As people began to see how very useful Google Now was, they began to give us even more information. They told us to dig through their e-mail for their boarding passes–Imagine if you had to find it on your own!–they finally gave us permission to track and store their search and web history so that we could give them better and better Cards. And then there is the imaging. They gave us tens of thousands of pictures of themselves so that we could pick the best ones–yes we appealed to their vanity to do this: We’ll make you look better and assure you present a smiling, wrinkle-free face to the world–but it allowed us to also stitch together three-dimensional representations. Hangout chats let us know who everybody’s friends were, and what they had to say to them. Verbal searches gave us our users’ voices. These were intermediary steps. But it let us know where people were at all times, what they thought, what they said, and of course how they looked. Sure, Google Now could tell you what to do. But Google Being will literally do it for you.

“My Google Being anticipates everything I would think, everything I would want to say or do or feel,” Larry explained. “Everywhere I would go. Years of research have gone into this. It is in every way the same as me. So much so that my physical form is no longer necessary. It was just getting in the way, so we removed it. Keep in mind that for now at least, Google Being is just a developer product.”

Not only is this a snarky critique of Page’s recent comments, it also pairs nicely with the Programmable World piece.

What’s the goal of the Programmable World anyway? Is it that all of us in the developed world (because, of course, whole swaths of the human population will take no part in this vision) get to sleepwalk through our lives, freed from as many decisions and actions as possible? Better yet, is it the perpetual passive documentation of an automated life which is algorithmically predicted and performed for me by some future fusion of Google Now and the Programmable World?

 

Ref: The Programmable Island of Google Being – The Frailest Thing
Ref: Welcome to Google Island – Wired

Why We Need an Algorithm Ethic

The way the company [Facebook] handles its customer data seems highly dubious, but its sheer scale means we should come round to the idea that this type of data-driven, highly personalised portal for information and communication is not likely to disappear.

And why should it? It isn’t only the advertising industry that’s inspired by the opportunities, but also the users. After all, not one of Facebook’s 800 million customers was forced to open an account and use it for a daily average of 20 minutes. It is on an equally voluntary basis that users post the location of their favourite cafe on Foursquare to tell the whole world where they are at any given time, or upload jogging routes to the internet to document every metre they run. People love these services and feed the algorithms and databases with great enthusiasm because they want to share their data with the world.

[…]

Relevance is the reason why you see more and more people on the train with the paper in their lap while they hold their mobile in front of it and flick through their Twitter stream. Relevance is the reason why more hotel bookings are now made through recommendation platforms than all travel agents put together. It’s the reason why readers will prefer personalised news websites to traditional media.

[…]

Transparency is one of the most important principles when it comes to throwing light on the chaos. Algorithms have to be made transparent – in how they are implemented as well as how they work.

 

Ref: Why we need an algorithm ethic – The Guardian