Category Archives: W – now

MACHINES TEACH HUMANS HOW TO FEEL USING NEUROFEEDBACK

 

Some people, often as the result of traumatic experiences or neglect, don’t experience fundamental social feelings such as affection or pride normally. Could a machine teach them these quintessentially human responses? A thought-provoking Brazilian study recently published in PLoS One suggests it could.

Researchers at the D’Or Institute for Research and Education outside Rio de Janeiro, Brazil, performed functional MRI scans on healthy young adults while asking them to focus on past experiences that epitomized feelings of non-sexual affection or pride of accomplishment. They set up a basic form of artificial intelligence to categorize, in real time, the fMRI readings as affection, pride, or neither. They then showed the experimental group a graphic form of biofeedback indicating whether their brain activity was fully manifesting that feeling; the control group saw meaningless graphics.

The results demonstrated that the machine-learning algorithms were able to detect complex emotions that stem from neurons in various parts of the cortex and sub-cortex, and the participants were able to hone their feelings based on the feedback, learning on command to light up all of those brain regions.
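The decode-then-feedback loop described above can be caricatured in a few lines. This is a purely illustrative sketch, assuming a nearest-centroid classifier over toy 3-voxel activity vectors; the study's actual classifier, features, and feedback graphics are not specified here:

```python
# Hypothetical sketch of a real-time neurofeedback loop: classify an
# fMRI activity pattern as "affection", "pride", or "neither", and
# derive a 0-1 feedback score to drive the on-screen graphic.
# All vectors and the distance rule are invented for illustration.

def centroid(patterns):
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(labeled_patterns):
    """labeled_patterns: {label: [activity vectors]} -> {label: centroid}"""
    return {label: centroid(ps) for label, ps in labeled_patterns.items()}

def decode(model, pattern):
    """Return the nearest emotion label and a crude 0-1 confidence score."""
    dists = {label: distance(pattern, c) for label, c in model.items()}
    best = min(dists, key=dists.get)
    total = sum(dists.values())
    score = 1 - dists[best] / total if total else 1.0
    return best, score

# Toy training data: 3-voxel activity vectors per emotion category.
model = train({
    "affection": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "pride":     [[0.1, 0.9, 0.3], [0.2, 0.8, 0.2]],
    "neither":   [[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]],
})

label, feedback = decode(model, [0.85, 0.15, 0.15])
# label names the decoded emotion; feedback would drive the graphic
```

In the real experiment the feature vectors span thousands of voxels across cortex and sub-cortex, but the shape of the loop — classify, score, display — is the same.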

[…]

Here we must pause to note that the experiment’s artificial intelligence system’s likeness to the “empathy box” in “Blade Runner” and the Philip K. Dick story on which it’s based did not escape the researchers. Yes, the system could potentially be used to subject a person’s inner feelings to interrogation by intrusive government bodies, which is really about as creepy as it gets. It could, to cite that other dystopian science fiction blockbuster, “Minority Report,” identify criminal tendencies and condemn people even before they commit crimes.

 

Ref: MACHINES TEACH HUMANS HOW TO FEEL USING NEUROFEEDBACK – SingularityHub

Algorithm Hunts Rare Genetic Disorders from Facial Features in Photos

 

Even before birth, concerned parents often fret over the possibility that their children may have underlying medical issues. Chief among these worries are rare genetic conditions that can drastically shape the course and reduce the quality of their lives. While progress is being made in genetic testing, diagnosis of many conditions occurs only after symptoms manifest, usually to the shock of the family.

A new algorithm, however, is attempting to identify specific syndromes much sooner by screening photos for characteristic facial features associated with specific genetic conditions, such as Down’s syndrome, Progeria, and Fragile X syndrome.
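The screening idea can be sketched as nearest-neighbor matching in a face-feature space. Everything below is an invented illustration — the reference profiles, the three features, and the 1-nearest-neighbor rule are assumptions; the real system learns its feature space from thousands of clinically annotated photos:

```python
# Toy sketch of photo-based screening: reduce each face photo to a
# vector of normalized geometric measurements (e.g. eye spacing,
# nose width, philtrum length) and label a new patient by the
# closest reference profile. Profiles and numbers are fabricated
# for illustration only.

REFERENCES = {
    "Down syndrome":      [0.9, 0.3, 0.7],
    "Progeria":           [0.2, 0.8, 0.4],
    "Fragile X syndrome": [0.5, 0.5, 0.9],
    "unaffected":         [0.5, 0.4, 0.5],
}

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def screen(features):
    """Return the label of the nearest reference profile."""
    return min(REFERENCES, key=lambda label: dist(features, REFERENCES[label]))

result = screen([0.85, 0.35, 0.65])
```

A real screening tool would return ranked candidate syndromes with confidence scores rather than a single hard label, since the output is a prompt for genetic testing, not a diagnosis.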

[…]

Nellåker added, “A doctor should in future, anywhere in the world, be able to take a smartphone picture of a patient and run the computer analysis to quickly find out which genetic disorder the person might have.”

 

Ref: ALGORITHM HUNTS RARE GENETIC DISORDERS FROM FACIAL FEATURES IN PHOTOS – SingularityHub

Unfair Advantages of Emotional Computing

Pepper, SoftBank’s humanoid robot, is intended to babysit your kids and work the registers at retail stores. What’s really remarkable is that Pepper is designed to understand and respond to human emotion.

Heck, understanding human emotion is tough enough for most HUMANS.

There is a new field of “affective computing” coming your way that will give entrepreneurs and marketers a real unfair advantage. That’s what this note to you is about… It’s really very powerful, and something I’m thinking a lot about.

Recent advances in the field of emotion tracking are about to give businesses an enormous unfair advantage.

Take Beyond Verbal, a start-up in Tel Aviv, for example. They’ve developed software that can detect 400 different variations of human “moods.” They are now integrating this software into call centers, where it can help a sales assistant understand and react to customers’ emotions in real time.

Beyond that, the software itself can also pinpoint and influence how consumers make decisions.

 

Ref: UNFAIR ADVANTAGES OF EMOTIONAL COMPUTING – SingularityHub

Facebook’s Massive-Scale Emotional Contagion Experiment

Facebook researchers have published a paper documenting a huge social experiment carried out on 689,003 users without their knowledge. The experiment was designed to show that emotional states can be transferred to others via emotional contagion. The researchers did this by manipulating users’ news feeds to be more positive or more negative and then measuring each user’s emotional state afterwards by analysing their subsequent status updates.

we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.

They demonstrate how influential the news feed algorithm can be in manipulating a person’s mood, and even test tweaking the algorithm to deliver more emotional content in the hope that it would be more engaging.
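The measurement side of the study scored users' subsequent posts by counting positive and negative emotion words (the paper used the LIWC dictionaries). A toy version of that scoring, with made-up word lists standing in for LIWC:

```python
# Minimal word-count sentiment scorer in the spirit of the study's
# LIWC-based measurement. The word lists here are tiny illustrative
# stand-ins, not the actual LIWC dictionaries.

POSITIVE = {"happy", "love", "great", "wonderful", "fun"}
NEGATIVE = {"sad", "hate", "awful", "terrible", "hurt"}

def emotion_score(post):
    """Fraction of words that are positive minus fraction negative."""
    words = post.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

scores = [emotion_score(p) for p in [
    "What a great and wonderful day!",
    "I hate this awful weather.",
]]
```

Averaging such scores over a user's posts before and after the feed manipulation is, in caricature, how the contagion effect was quantified.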

 

Ref: Facebook’s massive-scale emotional contagion experiment – Algopop

A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors

A Hong Kong VC fund has just appointed an algorithm to its board.

Deep Knowledge Ventures, a firm that focuses on age-related disease drugs and regenerative medicine projects, says the program, called VITAL, can make investment recommendations about life sciences firms by poring over large amounts of data.

Just like other members of the board, the algorithm gets to vote on whether the firm makes an investment in a specific company or not. The program will be the sixth member of DKV’s board.

 

Ref: A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors – BusinessInsider

In Hiring, Algorithms Beat Instinct

You know your company inside out. You know the requirements of the position you need to fill. And now that HR has finished its interviews and simulations, you know the applicants, too—maybe even better than their friends do. Your wise and experienced brain is ready to synthesize the data and choose the best candidate for the job.

Instead, you should step back from the process. If you simply crunch the applicants’ data and apply the resulting analysis to the job criteria, you’ll probably end up with a better hire.

Humans are very good at specifying what’s needed for a position and eliciting information from candidates—but they’re very bad at weighing the results. Our analysis of 17 studies of applicant evaluations shows that a simple equation outperforms human decisions by at least 25%. The effect holds in any situation with a large number of candidates, regardless of whether the job is on the front line, in middle management, or (yes) in the C-suite.
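A "simple equation" in the article's sense is just a fixed weighted sum of the measurements HR already collected. The attributes and weights below are invented for illustration; in the studies reviewed, the weights come from regressing past job performance on those measurements:

```python
# Toy linear scoring rule for applicants: score = fixed weighted sum
# of standardized measurements, hire the top scorer. Attributes,
# weights, and applicants are illustrative assumptions.

WEIGHTS = {"test_score": 0.5, "structured_interview": 0.3, "experience_years": 0.2}

def score(applicant):
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

applicants = {
    "Ana":   {"test_score": 88, "structured_interview": 75, "experience_years": 4},
    "Ben":   {"test_score": 70, "structured_interview": 92, "experience_years": 10},
    "Chloe": {"test_score": 95, "structured_interview": 80, "experience_years": 2},
}

best = max(applicants, key=lambda name: score(applicants[name]))
```

The point of the research is not that this particular equation is right, but that *any* consistent equation avoids the inconsistent, noisy weighting humans apply when they "synthesize" the same data by gut feel.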

 

Ref: In Hiring, Algorithms Beat Instinct – Harvard Business Review

Human-level Performance in Face Verification is Surpassed by Algorithm

 

Face verification remains a challenging problem under very complex conditions with large variations such as pose, illumination, expression, and occlusion. The problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on the Discriminative Gaussian Process Latent Variable Model, named GaussianFace, to enrich the diversity of training data. In comparison to existing methods, our model exploits additional data from multiple source domains to improve the generalization performance of face verification in an unknown target domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. Extensive experiments demonstrate the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, our algorithm achieves an impressive accuracy of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, human-level performance in face verification on LFW (97.53%) is surpassed.
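Face *verification* (as opposed to identification) asks a yes/no question: do two photos show the same person? A minimal sketch of how such a system is evaluated — compare feature vectors, threshold the similarity, and score accuracy on labeled pairs, as LFW does. The descriptors and threshold below are toy stand-ins, not GaussianFace's learned model:

```python
# Toy face-verification pipeline: cosine similarity between feature
# vectors, thresholded to a same/different decision, scored against
# labeled pairs. Vectors and threshold are illustrative assumptions.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def same_person(desc1, desc2, threshold=0.9):
    return cosine(desc1, desc2) >= threshold

# Labeled pairs, LFW-style: (pair of descriptors, same-person label).
pairs = [
    (([1.0, 0.1, 0.0], [0.9, 0.2, 0.1]), True),   # same person
    (([1.0, 0.1, 0.0], [0.0, 1.0, 0.2]), False),  # different people
]
accuracy = sum(same_person(*p) == label for p, label in pairs) / len(pairs)
```

The reported 98.52% is this kind of pair-matching accuracy, computed over LFW's 6,000 labeled pairs with a far richer learned representation.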

 

Ref: Surpassing Human-Level Face Verification Performance on LFW with GaussianFace – Cornell University Lab

Watson to Battle Brain Cancer

IBM will team up with the New York Genome Center to see whether Watson can use glioblastoma patients’ genetic information to prescribe custom-tailored treatments. This “personalized” approach to medicine has been hailed as the next big step in healthcare, but making sense of genetic data – winnowing useful information from impertinent nucleic chaff – can be an overwhelming task. That’s where Watson comes in.

[New York Genome Center CEO Robert Darnell] said that the project would start with 20 to 25 patients… Samples from those patients (including both healthy and cancerous tissue) would be subjected to extensive DNA sequencing, including both the genome and the RNA transcribed from it. “What comes out is an absolute gusher of information,” he said.

It should theoretically be possible to analyze that data and use it to customize a treatment that targets the specific mutations present in tumor cells. But right now, doing so requires a squad of highly trained geneticists, genomics experts, and clinicians. It’s a situation that Darnell said simply can’t scale to handle the patients with glioblastoma, much less other cancers.

Instead, that gusher of information is going to be pointed at Watson. John Kelly of IBM Research stepped up to describe Watson as a “cognitive system,” one that “mimics the capabilities of the human mind—some, but not all [capabilities].” The capabilities it does have include ingesting large volumes of information, identifying the information that’s relevant, and then learning from the results of its use. Kelly was extremely optimistic that Watson could bring new insights to cancer care. “We will have an impact on cancer and these other horrific diseases,” he told the audience. “It’s not a matter of if, it’s a matter of when—and the when is going to be very soon.”
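The matching step Darnell describes — pairing a tumor's mutations with therapies that target them — can be caricatured as a knowledge-base lookup. The table below is a drastically simplified illustration (real matching weighs the specific variant, evidence strength, and trial eligibility) and is in no way medical guidance:

```python
# Toy mutation-to-therapy matcher: intersect the genes mutated in a
# tumor sample with a curated table of targeted drugs. The table is
# a tiny illustrative subset, not a clinical resource.

TARGETED_THERAPIES = {
    "EGFR": ["erlotinib"],
    "BRAF": ["vemurafenib"],
    "ALK":  ["crizotinib"],
}

def suggest(mutated_genes):
    """Return {gene: candidate drugs} for each mutated gene with an entry."""
    return {g: TARGETED_THERAPIES[g]
            for g in mutated_genes if g in TARGETED_THERAPIES}

candidates = suggest(["TP53", "EGFR", "IDH1"])
# TP53 and IDH1 have no entry in the toy table, so only EGFR matches
```

The hard part Watson is meant to automate is not this lookup but building and continually updating the knowledge base from the literature — the work that currently requires Darnell's "squad of highly trained geneticists."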

Ref: IBM Will Use Watson To Battle Brain Cancer – io9

Role of Killer Robots

According to Heyns’s 2013 report, South Korea operates “surveillance and security guard robots” in the demilitarized zone that buffers it from North Korea. Although there is an automatic mode available on the Samsung machines, soldiers control them remotely.

The U.S. and Germany possess robots that automatically target and destroy incoming mortar fire. They can also likely locate the source of the mortar fire, according to Noel Sharkey, a University of Sheffield roboticist who is active in the “Stop Killer Robots” campaign.

And of course there are drones. While many get their orders directly from a human operator, unmanned aircraft operated by Israel, the U.K. and the U.S. are capable of tracking and firing on aircraft and missiles. On some of its Navy cruisers, the U.S. also operates Phalanx, a stationary system that can track and engage anti-ship missiles and aircraft.

The Army is testing a gun-mounted ground vehicle, MAARS, that can fire on targets autonomously. One tiny drone, the Raven, is primarily a surveillance vehicle, but among its capabilities is “target acquisition.”

No one knows for sure what other technologies may be in development.

“Transparency when it comes to any kind of weapons system is generally very low, so it’s hard to know what governments really possess,” Michael Spies, a political affairs officer in the U.N.’s Office for Disarmament Affairs, told Singularity Hub.

At least publicly, the world’s military powers seem now to agree that robots should not be permitted to kill autonomously. That is among the criteria laid out in a November 2012 U.S. military directive that guides the development of autonomous weapons. The European Parliament recently established a non-binding ban for member states on using or developing robots that can kill without human participation.

Yet, even robots not specifically designed to make kill decisions could do so if they malfunctioned, or if their user experience made it easier to accept than reject automated targeting.

“The technology’s not fit for purpose as it stands, but as a computer scientist there are other things that bother me. I mean, how reliable is a computer system?” Sharkey, of Stop Killer Robots, said.

Sharkey noted that warrior robots would do battle with other warrior robots equipped with algorithms designed by an enemy army.

“If you have two competing algorithms and you don’t know the contents of the other person’s algorithm, you don’t know the outcome. Anything could happen,” he said.

For instance, when two sellers recently unknowingly competed for business on Amazon, the interactions of their two algorithms resulted in prices in the millions of dollars. Competing robot armies could destroy cities as their algorithms exponentially escalated, Sharkey said.
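The Amazon incident shows the mechanism concretely. One bookseller's repricer set its price at roughly 1.27 times the competitor's, while the competitor set its price at roughly 0.998 times the first seller's; the product of the two multipliers exceeds 1, so each repricing pass compounds the price upward. A sketch using those approximate reported multipliers:

```python
# Two repricing bots reacting to each other: A slightly undercuts B,
# B marks up over A. Because 0.9983 * 1.270589 > 1, the prices
# escalate exponentially instead of converging.

def escalate(price_a, price_b, rounds, up=1.270589, down=0.9983):
    for _ in range(rounds):
        price_a = round(down * price_b, 2)  # A undercuts B slightly
        price_b = round(up * price_a, 2)    # B marks up over A
    return price_a, price_b

a, b = escalate(20.00, 20.00, 30)
# thirty rounds later both prices have grown by orders of magnitude
```

This is the dynamic Sharkey warns about: neither algorithm is broken in isolation, but their composition has an unbounded fixed-point-free feedback loop that no one designed or anticipated.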

An even likelier outcome would be that human enemies would target the weaknesses of the robots’ algorithms to produce undesirable outcomes. For instance, say a machine designed to destroy incoming mortar fire, such as the U.S.’s C-RAM or Germany’s MANTIS, is also tasked with destroying the launcher. A terrorist group could place a launcher in a crowded urban area, where its neutralization would cause civilian casualties.

 

Ref: CONTROVERSY BREWS OVER ROLE OF ‘KILLER ROBOTS’ IN THEATER OF WAR – SingularityHub