Robots and Elder Care


Sherry Turkle, a professor of science, technology and society at the Massachusetts Institute of Technology and author of the book “Alone Together: Why We Expect More From Technology and Less From Each Other,” did a series of studies with Paro, a therapeutic robot that looks like a baby harp seal and is meant to have a calming effect on patients with dementia and Alzheimer’s in health care facilities. The professor said she was troubled when she saw a 76-year-old woman share stories about her life with the robot.

“I felt like this isn’t amazing; this is sad. We have been reduced to spectators of a conversation that has no meaning,” she said. “Giving old people robots to talk to is a dystopian view that is being classified as utopian.” Professor Turkle said robots lack the capacity to listen or to understand something personal, and tricking patients into thinking they can is unethical.


“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.


As the actor Frank Langella, who plays Frank in the movie, told NPR last year: “Every one of us is going to go through aging and all sorts of processes, many people suffering from dementia. And if you put a machine in there to help, the notion of making it about love and buddy-ness and warmth is kind of scary in a way, because that’s what you should be doing with other human beings.”


Ref: Disruptions: Helper Robots Are Steered, Tentatively, to Care for the Aging – The New York Times

Ethics & the Virtual Brain

Sandberg quoted Jeremy Bentham, who famously asked: “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” And indeed, scientists will need to be very sensitive to this point.

Sandberg also pointed out the work of Thomas Metzinger, who back in 2003 argued that it would be deeply unethical to develop conscious software — software that can suffer.

Metzinger had this to say about the prospect:

What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development — we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby — no representatives in any ethics committee.


Ref: Would it be evil to build a functional brain inside a computer? – io9

TED – Rodney Brooks on Baxter


And I don’t mean robots in terms of companions. I mean robots doing the things that we normally do for ourselves but get harder as we get older. Getting the groceries in from the car, up the stairs, into the kitchen. Or even, as we get very much older, driving our cars to go visit people. And I think robotics gives people a chance to have dignity as they get older by having control of the robotic solution. So they don’t have to rely on people that are getting scarcer to help them.

Push-Button Culture

Today, our abundance of smartphones, computers, dishwashers and electric vacuum cleaners supposedly leaves more time for the 21st-century human to lounge around and eat bonbons. Just push a button, and everything is automatic.


Doing the laundry in 1950 may have become much easier thanks to the rise of electric washing machines, but societal expectations around how often one’s clothes should be cleaned had shifted dramatically since, say, 1900. Cleaning a floor was decidedly harder in 1910 than it was in 1960, but the relative ease of appliances like the electric vacuum cleaner changed American expectations about what constituted “clean.”


But the promise of the push button as the gateway to a life of leisure has its origins far earlier than the Space Age. Dating back to the late 19th and early 20th centuries, when electricity was first being introduced to American homes, the push button quickly evolved into a perfectly simple symbol of modernity.

Whether it was for ringing doorbells, illuminating lamps, hailing domestic servants, or turning on any number of new electrical appliances, the push button arrived in full force as an interface that was supposed to save time and generally make life easier. Just push a button, and it’s all done automatically!

As Americans came into contact with more and more machines in the late 19th century, the push button was supposed to ameliorate heightened anxieties about the complexity of life—what Rachel Plotnick describes as a “pervasive cultural craving for efficient relationships between humans and machines.” Plotnick writes about the button as the interface of leisure in her 2012 paper “At the Interface: The Case of the Electric Push Button, 1880-1923.”

Ref: Will The Internet of Things Make Our Lives Any Easier? – Paleofuture
Ref: Push-Button Promises – psmag

Driver Behavior in an Emergency Situation in the Automated Highway System

Twenty participants completed test rides in a normal and an Automated Highway System (AHS) vehicle in a driving simulator. Three AHS conditions were tested: driving in a platoon of cars at 1 sec and at 0.25 sec time headway, and driving as a platoon leader. Of particular interest was overreliance on the automated system, which was tested in an emergency condition where the automated system failed to function properly and the driver actively had to take over speed control to avoid an uncomfortably short headway of 0.1 m. In all conditions, driver behavior and heart rate were registered, and ratings of activation, workload, safety, risk, and acceptance of the AHS were collected after the test rides. Results show lower physiological and subjectively experienced levels of activation and mental effort in conditions of automated driving. In the emergency situation, only half of the participants took over control, which supports the idea that the AHS, like any form of automation, is susceptible to complacency.


Ref: What Will Happen When Your Driverless Car Crashes? – Paleofuture

Legal Battle for Robot-Assisted Medicine

The da Vinci surgical robot (or, more accurately, its maker, Intuitive Surgical) was cleared of liability on Friday in the case of a man who died in 2012 after a botched robotic surgery four years earlier. The jury voted 10-2 in favor of Intuitive, but you can rest assured this won’t be the last legal battle for robot-assisted medicine.


Ref: The Futuristic Robot Surgeons of 1982 Have Arrived – Paleofuture