Yet, some people, often as the result of traumatic experiences or neglect, don’t experience these fundamental social feelings normally. Could a machine teach them these quintessentially human responses? A thought-provoking Brazilian study recently published in PLoS One suggests it could.
Researchers at the D’Or Institute for Research and Education outside Rio de Janeiro, Brazil, performed functional MRI scans on healthy young adults while asking them to focus on past experiences that epitomized feelings of non-sexual affection or pride of accomplishment. They set up a basic form of artificial intelligence to categorize, in real time, the fMRI readings as affection, pride or neither. They then showed the experimental group a graphic form of biofeedback indicating whether their brain activity was fully manifesting that feeling; the control group saw meaningless graphics.
The results demonstrated that the machine-learning algorithms were able to detect complex emotions that stem from neurons in various parts of the cortex and sub-cortex, and the participants were able to hone their feelings based on the feedback, learning on command to light up all of those brain regions.
Here we must pause to note that the likeness of the experiment’s artificial intelligence system to the “empathy box” in “Blade Runner” and the Philip K. Dick novel on which that film is based did not escape the researchers. Yes, the system could potentially be used to subject a person’s inner feelings to interrogation by intrusive government bodies, which is about as creepy as it gets. It could, to cite that other dystopian science fiction blockbuster, “Minority Report,” identify criminal tendencies and condemn people before they ever commit crimes.
Even before birth, concerned parents often fret over the possibility that their children may have underlying medical issues. Chief among these worries are rare genetic conditions that can drastically shape the course and reduce the quality of their lives. While progress is being made in genetic testing, diagnosis of many conditions occurs only after symptoms manifest, usually to the shock of the family.
A new algorithm, however, attempts to identify such syndromes much sooner by screening photographs for the characteristic facial features associated with particular genetic conditions, such as Down’s syndrome, progeria, and fragile X syndrome.
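The underlying idea is to reduce a face photo to a feature vector and rank candidate syndromes by how close the patient's vector sits to each syndrome's average "gestalt" in that feature space. The sketch below illustrates only that ranking step, using cosine similarity on invented placeholder vectors; the real system's features come from automatically detected facial landmarks, and none of the numbers here are clinical data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_syndromes(patient_features, gestalts):
    """Return syndrome names ordered by similarity to the patient's photo."""
    scores = {name: cosine(patient_features, vec)
              for name, vec in gestalts.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Invented placeholder "gestalt" vectors, one per syndrome:
gestalts = {
    "Down's syndrome": [0.9, 0.2, 0.4],
    "progeria":        [0.1, 0.9, 0.3],
    "fragile X":       [0.3, 0.4, 0.9],
}
patient = [0.8, 0.25, 0.35]
print(rank_syndromes(patient, gestalts))  # most similar syndrome first
```

A ranked shortlist, rather than a single verdict, is the point: the tool narrows the search so a clinician knows which confirmatory genetic test to order first.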
Nellåker added, “A doctor should in future, anywhere in the world, be able to take a smartphone picture of a patient and run the computer analysis to quickly find out which genetic disorder the person might have.”
IBM will team up with the New York Genome Center to see whether Watson can use glioblastoma patients’ genetic information to prescribe custom-tailored treatments. This “personalized” approach to medicine has been hailed as the next big step in healthcare, but making sense of genetic data – winnowing useful information from impertinent nucleic chaff – can be an overwhelming task. That’s where Watson comes in.
[New York Genome Center CEO Robert Darnell] said that the project would start with 20 to 25 patients… Samples from those patients (including both healthy and cancerous tissue) would be subjected to extensive DNA sequencing, including both the genome and the RNA transcribed from it. “What comes out is an absolute gusher of information,” he said.
It should theoretically be possible to analyze that data and use it to customize a treatment that targets the specific mutations present in tumor cells. But right now, doing so requires a squad of highly trained geneticists, genomics experts, and clinicians. It’s a situation that Darnell said simply can’t scale to handle the patients with glioblastoma, much less other cancers.
Instead, that gusher of information is going to be pointed at Watson. John Kelly of IBM Research stepped up to describe Watson as a “cognitive system,” one that “mimics the capabilities of the human mind—some, but not all [capabilities].” The capabilities it does have include ingesting large volumes of information, identifying the information that’s relevant, and then learning from the results of its use. Kelly was extremely optimistic that Watson could bring new insights to cancer care. “We will have an impact on cancer and these other horrific diseases,” he told the audience. “It’s not a matter of if, it’s a matter of when—and the when is going to be very soon.”
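Stripped to its skeleton, the task Darnell describes has two steps: compare tumor and healthy tissue to find the mutations specific to the cancer, then match those mutations against a knowledge base of targeted therapies. The sketch below shows only that skeleton. The gene names are real cancer genes and the drug-gene pairings are real targeted therapies, but the tiny lookup table and the matching rule are invented placeholders, nothing like Watson's actual evidence-weighing pipeline.

```python
# Hypothetical knowledge base: gene with a targetable mutation -> therapies.
TARGETED_THERAPIES = {
    "EGFR": ["erlotinib"],
    "BRAF": ["vemurafenib"],
    "IDH1": ["ivosidenib"],
}

def somatic_mutations(tumor_variants, normal_variants):
    """Keep only mutations present in the tumor but absent from healthy
    tissue -- this is why both sample types are sequenced."""
    return sorted(set(tumor_variants) - set(normal_variants))

def candidate_therapies(tumor_variants, normal_variants):
    """Map each tumor-specific mutation to any therapy known to target it."""
    hits = {}
    for gene in somatic_mutations(tumor_variants, normal_variants):
        if gene in TARGETED_THERAPIES:
            hits[gene] = TARGETED_THERAPIES[gene]
    return hits

tumor  = ["EGFR", "IDH1", "TP53"]
normal = ["TP53"]  # also present in healthy tissue, so not tumor-specific
print(candidate_therapies(tumor, normal))
# {'EGFR': ['erlotinib'], 'IDH1': ['ivosidenib']}
```

The hard part, of course, is everything this sketch elides: deciding which of thousands of variants are actually driving the tumor, and weighing conflicting literature about each one. That judgment-intensive filtering is precisely the work now done by Darnell's "squad of highly trained geneticists" and the work IBM hopes Watson can scale.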
IBM’s Watson supercomputer may be boning up on its medical bona fides, but the concept of Dr. Watson is nothing new. We’ve been waiting on our super-smart computer doctors of tomorrow for over 30 years.
The 1982 book World of Tomorrow: Health and Medicine by Neil Ardley showed kids of the 1980s what the doctor’s office of the future was going to look like. The room is filled with automatic diagnosis stations, prescription vending machines, and plenty of control panels sporting colorful buttons. The only thing that’s missing is, well, a doctor.
From the book:
A visit to the doctor in the future is likely to resemble a computer game, for computers will be greatly involved in medical care. Now doctors have to question and examine their patients to find out what is wrong with them. They compare the patients’ answers and the examination results with their own knowledge of medical conditions and illnesses. This enables doctors to decide on the causes of the patients’ problems.
Computers can store huge amounts of medical information. Doctors are therefore likely to use computers to help them find the causes of illnesses. The computer could take over completely, allowing doctors to concentrate on patients who need personal care.
The computer won’t just be a dumb machine that’s fed info. The robo-doctor of tomorrow will be able to ask questions of the patient, narrowing down all the possible things that could be wrong.
The computer will question the patient about an illness just as the doctor does now. It will either display words on a screen or speak to the patient, who will reply or operate a keyboard to answer. The questions will continue until the computer has either narrowed down the possible causes of the illness to one or needs more information that the patient cannot give by answering.
The patient will then go to a machine that checks his or her physical condition. It will measure such factors as pulse, temperature and blood pressure and maybe look into the interior of the patient’s body. The results will go to the computer. This may still not provide the computer with enough information about the patient, and it may need to take samples — for example, of blood or hair. It will do this painlessly.
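The questioning machine the book imagines is, in effect, walking a decision tree: each answer prunes the set of possible causes until one remains or the machine needs a measurement the patient can't supply. A minimal sketch of that narrowing-down procedure, with entirely made-up conditions and symptoms:

```python
# Hypothetical condition -> symptom profile table (illustrative only).
CONDITIONS = {
    "common cold": {"cough": True,  "fever": False, "rash": False},
    "flu":         {"cough": True,  "fever": True,  "rash": False},
    "measles":     {"cough": False, "fever": True,  "rash": True},
}

def diagnose(answers):
    """Narrow the candidate conditions using the yes/no answers so far.

    answers: {symptom: bool}. Returns every condition still consistent
    with all of the patient's answers; questioning continues until the
    list has a single entry (or measurements are needed instead).
    """
    candidates = list(CONDITIONS)
    for symptom, present in answers.items():
        candidates = [c for c in candidates
                      if CONDITIONS[c][symptom] == present]
    return candidates

print(diagnose({"cough": True}))                 # two causes still possible
print(diagnose({"cough": True, "fever": True}))  # narrowed to one: flu
```

This is essentially how the rule-based "expert systems" of the book's own era, such as the 1970s medical program MYCIN, already worked; what 1982 could not foresee was how much messier real symptoms and real knowledge bases are than a tidy yes/no table.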
The September 30 issue of TIME will profile Page and his decision to launch Calico. From the magazine’s preview article:
Based in the Bay Area, not far from Google’s headquarters, Calico will be making longer-term bets than most health care firms. “In some industries, it takes ten or 20 years to go from an idea to something being real. Healthcare is certainly one of those areas,” said Page. “Maybe we should shoot for the things that are really, really important so ten or 20 years from now we have those things done.”
Google is keeping its exact plans close to the vest. But it is likely to use its data-processing might to shed new light on age-related maladies. Sources close to the project suggest Calico will start with a small number of employees and focus initially on researching new technology.
That approach may yield unlikely conclusions. “Are people really focused on the right things? One of the things I thought was amazing is that if you solve cancer, you’d add about three years to people’s average life expectancy,” Page said. “We think of solving cancer as this huge thing that’ll totally change the world. But when you really take a step back and look at it, yeah, there are many, many tragic cases of cancer, and it’s very, very sad, but in the aggregate, it’s not as big an advance as you might think.”
Researchers Michael Anderson from the University of Hartford and Susan Leigh Anderson from the University of Connecticut have developed an approach to computing ethics that entails the discovery of ethical principles through machine learning and the incorporation of these principles into a system’s decision procedure. They’ve programmed their system into the robot NAO, manufactured by Aldebaran Robotics. It is the first robot to have been programmed with an ethical principle.
Sherry Turkle, a professor of science, technology and society at the Massachusetts Institute of Technology and author of the book “Alone Together: Why We Expect More From Technology and Less From Each Other,” did a series of studies with Paro, a therapeutic robot that looks like a baby harp seal and is meant to have a calming effect on patients with dementia and Alzheimer’s in health care facilities. The professor said she was troubled when she saw a 76-year-old woman share stories about her life with the robot.
“I felt like this isn’t amazing; this is sad. We have been reduced to spectators of a conversation that has no meaning,” she said. “Giving old people robots to talk to is a dystopian view that is being classified as utopian.” Professor Turkle said robots did not have a capacity to listen or understand something personal, and tricking patients into thinking they can is unethical.
“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.
As the actor Frank Langella, who plays Frank in the movie, told NPR last year: “Every one of us is going to go through aging and all sorts of processes, many people suffering from dementia,” he said. “And if you put a machine in there to help, the notion of making it about love and buddy-ness and warmth is kind of scary in a way, because that’s what you should be doing with other human beings.”
The da Vinci surgical robot (or, more accurately, its maker) was cleared of liability on Friday in the case of a man who died in 2012 after a botched robotic surgery four years earlier. The jury voted 10-2 in favor of Intuitive, the maker of the da Vinci, but you can rest assured this won’t be the last legal battle for robot-assisted medicine.
Nearly 200,000 PatientsLikeMe members have created and shared their own medical records, often using standardized questionnaires or tests they conducted themselves. The new platform will include tools for developing standardized measurements for additional diseases, tools to evaluate and refine those measurements, and mechanisms for licensing the data and for open-sourcing the measurements used to collect the data under a Creative Commons license.
The plan, announced at the TED Conference Monday, is to rapidly accelerate the spread of medical data now hoarded by private companies, locked down by privacy laws, and collected using often proprietary and commercially licensed measurement systems.