
Google Flu Trends Algorithm was Wrong

Google Flu Trends has continued to perform remarkably well, and researchers in many countries have confirmed that its ILI estimates are accurate. But the latest US flu season seems to have confounded its algorithms. Its estimate for the Christmas national peak of flu is almost double the CDC’s, and some of its state data show even larger discrepancies.

It is not the first time that a flu season has tripped Google up. In 2009, Flu Trends had to tweak its algorithms after its models badly underestimated ILI in the United States at the start of the H1N1 (swine flu) pandemic — a glitch attributed to changes in people’s search behaviour as a result of the exceptional nature of the pandemic (S. Cook et al. PLoS ONE 6, e23610; 2011).

Google would not comment on this year’s difficulties. But several researchers suggest that the problems may be due to widespread media coverage of this year’s severe US flu season, including the declaration of a public-health emergency by New York state last month. The press reports may have triggered many flu-related searches by people who were not ill. Few doubt that Google Flu will bounce back after its models are refined, however.
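As an illustration only: Google Flu Trends reportedly fit search-query volumes to CDC ILI rates with a logit-linear model. The sketch below fits a single synthetic query series by ordinary least squares (the real system aggregated dozens of queries); it also shows the failure mode above, since the model cannot tell whether a spike in searches comes from sick people or from news coverage.

```python
# Minimal sketch of a Flu-Trends-style model: fit logit(ILI rate) against
# logit(query fraction) on history, then estimate ILI from new query data.
# The data and the single-query setup are invented for illustration.
import math

def fit_logit_linear(query_fracs, ili_rates):
    """Least-squares fit of logit(ILI) = b0 + b1 * logit(query fraction)."""
    logit = lambda p: math.log(p / (1 - p))
    xs = [logit(q) for q in query_fracs]
    ys = [logit(r) for r in ili_rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    b0 = my - b1 * mx
    return b0, b1

def predict_ili(b0, b1, query_frac):
    z = b0 + b1 * math.log(query_frac / (1 - query_frac))
    return 1 / (1 + math.exp(-z))

# Synthetic history in which query share tracks ILI closely.
queries = [0.010, 0.015, 0.020, 0.030, 0.025]
ili     = [0.011, 0.016, 0.021, 0.031, 0.026]
b0, b1 = fit_logit_linear(queries, ili)

# A media-driven surge in searches (0.05, above anything in training)
# inflates the estimate even if actual illness has not risen.
estimate = predict_ili(b0, b1, 0.05)
```

The model has no way to discount searches made by healthy but worried people, which is exactly the explanation the researchers above offer for the overshoot.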

 

Ref: When Google got flu wrong – Nature

Lie Detectors at the U.S. Border

 

Since September 11, 2001, federal agencies have spent millions of dollars on research designed to detect deceptive behavior in travelers passing through US airports and border crossings in the hope of catching terrorists. Security personnel have been trained—and technology has been devised—to identify, as an air transport trade association representative once put it, “bad people and not just bad objects.” Yet for all this investment and the decades of research that preceded it, researchers continue to struggle with a profound scientific question: How can you tell if someone is lying?

That problem is so complex that no one, including the engineers and psychologists developing machines to do it, can be certain if any technology will work. “It fits with our notion of justice, somehow, that liars can’t really get away with it,” says Maria Hartwig, a social psychologist at John Jay College of Criminal Justice who cowrote a recent report on deceit detection at airports and border crossings. The problem is, as Hartwig explains it, that all the science says people are really good at lying, and it’s incredibly hard to tell when we’re doing it.

 
 

Ref: Deception Is Futile When Big Brother’s Lie Detector Turns Its Eyes on You – Wired

U.S. Cities Relying on Precog Software to Predict Murder

 

New crime-prediction software used in Maryland and Pennsylvania, and soon to be rolled out in the nation’s capital too, promises to reduce the homicide rate by predicting which prison parolees are likely to commit murder and therefore receive more stringent supervision.

The software aims to replace the judgments parole officers already make based on a parolee’s criminal record and is currently being used in Baltimore and Philadelphia.

Richard Berk, a criminologist at the University of Pennsylvania who developed the algorithm, claims it will reduce the murder rate and other crimes and could help courts set bail amounts as well as sentencing in the future.

“When a person goes on probation or parole they are supervised by an officer. The question that officer has to answer is ‘what level of supervision do you provide?’” Berk told ABC News. The software simply replaces that kind of ad hoc decision-making that officers already do, he says.

To create the software, researchers assembled a dataset of more than 60,000 crimes, including homicides, then wrote an algorithm to find the people behind the crimes who were more likely to commit murder when paroled or put on probation. Berk claims the software could identify eight future murderers out of 100.

The software parses about two dozen variables, including criminal record and geographic location. The type of crime and the age at which it was committed, however, turned out to be two of the most predictive variables.
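As a toy illustration of the scoring-and-supervision idea described above (this is not Berk’s actual model, which reportedly used machine-learning ensembles over about two dozen variables; the feature names, weights, and threshold below are invented):

```python
# Hypothetical parolee risk score built on the two variables the article
# calls most predictive: offense type and age at which it was committed.
def risk_score(record):
    score = 0.0
    if record["offense"] in {"homicide", "armed_robbery", "aggravated_assault"}:
        score += 0.5                                  # violent offense type
    if record["age_at_offense"] < 18:
        score += 0.3                                  # young age at offense
    score += min(record["prior_convictions"], 10) * 0.02  # capped priors term
    return score

def supervision_level(record, threshold=0.6):
    """Map a risk score to the officer's supervision decision."""
    return "intensive" if risk_score(record) >= threshold else "standard"

high = {"offense": "armed_robbery", "age_at_offense": 16, "prior_convictions": 4}
low  = {"offense": "theft", "age_at_offense": 34, "prior_convictions": 1}
```

Even a sketch this small makes Bushway’s objection concrete: anyone above the threshold gets intensive supervision, including the inevitable false positives.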

Shawn Bushway, a professor of criminal justice at the State University of New York at Albany, told ABC that advocates for inmate rights might view the use of an algorithm to increase supervision of a parolee as a form of harassment, especially given the software’s inevitable false positives. He said it could result in “punishing people who, most likely, will not commit a crime in the future.”

 

Ref: U.S. Cities Relying on Precog Software to Predict Murder – Wired

Human Intelligence is Declining According To Stanford Geneticist

 

Dr. Gerald Crabtree, a geneticist at Stanford, has published a study that he conducted to try to identify the progression of modern man’s intelligence. As it turns out, however, Dr. Crabtree’s research led him to believe that the collective mind of mankind has been on a more or less downhill trajectory for quite some time.

According to his research, published in two parts starting with last year’s ‘Our fragile intellect. Part I,’ Dr. Crabtree thinks unavoidable changes in our genetic make-up, coupled with modern technological advances, have left humans, well, kind of stupid. He has recently published his follow-up analysis, in which he explains that of the roughly 5,000 genes he considers the basis for human intelligence, a number of mutations over the years have left modern man only a fraction as bright as his ancestors.

“New developments in genetics, anthropology and neurobiology predict that a very large number of genes underlie our intellectual and emotional abilities, making these abilities genetically surprisingly fragile,” he writes in part one of his research. “Analysis of human mutation rates and the number of genes required for human intellectual and emotional fitness indicates that we are almost certainly losing these abilities,” he adds in his latest report.

From there, the doctor goes on to explain that general mutations over the last few thousand years have left mankind increasingly unable to cope with certain situations that perhaps our ancestors would be more adapted to.

“I would wager that if an average citizen from Athens of 1000 BC were to appear suddenly among us, he or she would be among the brightest and most intellectually alive of our colleagues and companions, with a good memory, a broad range of ideas, and a clear-sighted view of important issues. Furthermore, I would guess that he or she would be among the most emotionally stable of our friends and colleagues. I would also make this wager for the ancient inhabitants of Africa, Asia, India or the Americas, of perhaps 2000–6000 years ago. The basis for my wager comes from new developments in genetics, anthropology, and neurobiology that make a clear prediction that our intellectual and emotional abilities are genetically surprisingly fragile.”

According to the doctor, humans were at their most intelligent when “every individual was exposed to nature’s raw selective mechanisms on a daily basis.” Under those conditions, adaptation, he argues, was more than a matter of fight or flight. Rather, says the scientist, it was a sink-or-swim situation for generations upon generations.

“We, as a species, are surprisingly intellectually fragile and perhaps reached a peak 2,000 to 6,000 years ago,” he writes. “If selection is only slightly relaxed, one would still conclude that nearly all of us are compromised compared to our ancient ancestors of 3,000 to 6,000 years ago.”

 

Ref: Our Fragile Intellect – Stanford
Ref: Human intelligence is declining according to Stanford geneticist – RT

 

Decision Support System Optimizer by IBM

New computer technology that can predict traffic patterns could help ease congestion on busy roads, tech giant IBM announced November 14.

The company is wrapping up research on its new “predictive traffic management technology,” developed and tested in Lyons, France.

The technology – IBM calls it a Decision Support System Optimizer (DSSO) – uses real-time traffic data to predict the best way to keep cars moving when a gridlock-sparking incident happens, reports Mashable.

In fact, DSSO can even help cities “anticipate and avoid many traffic jams before they happen, and lessen their impact on citizens,” Lyons mayor Gerard Collomb said in a statement.

While modern transportation centres often have incident response plans and techniques they can employ to prevent bottle-necking, they’re unable to factor past and future traffic patterns into their actions, IBM points out.

Not only do the DSSO’s algorithms use historical traffic data to predict future patterns, they also “learn” from the implementation of successful traffic management plans.
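A minimal sketch of that learn-from-past-plans loop (IBM has not published DSSO’s internals in this article; the plan names and delay figures are invented): record the observed outcome of each response plan, then pick the plan with the best historical average when the next incident hits.

```python
# Toy plan selector: average the delay each incident-response plan produced
# in the past, fold in every new outcome, and recommend the best performer.
from collections import defaultdict

class PlanSelector:
    def __init__(self):
        self.totals = defaultdict(float)  # plan -> summed delay minutes
        self.counts = defaultdict(int)    # plan -> number of observations

    def record(self, plan, delay_minutes):
        """'Learn' from an implemented plan by logging its outcome."""
        self.totals[plan] += delay_minutes
        self.counts[plan] += 1

    def best_plan(self):
        """Recommend the plan with the lowest average observed delay."""
        return min(self.counts, key=lambda p: self.totals[p] / self.counts[p])

selector = PlanSelector()
for plan, delay in [("reroute_via_ring_road", 12), ("reroute_via_ring_road", 14),
                    ("retime_signals", 9), ("retime_signals", 7)]:
    selector.record(plan, delay)
```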

 

Ref: IBM Software Predicts Traffic Jams, Stops Them – Sympatico

RecordedFuture

 

We continually scan tens of thousands of high-quality, online news publications, blogs, public niche sources, trade publications, government web sites, financial databases and more.
From these open web sites, we identify references to entities and events. These are organized in time by extracting publication date and any temporal expressions in the text. Each reference is linked to the original source and measured for online momentum and tone of language: positive or negative sentiment.
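A crude stand-in for the pipeline that quote describes, extracting temporal expressions and a positive/negative tone score from text (the regex and the tiny sentiment lexicon are minimal illustrations, not Recorded Future’s NLP):

```python
# Pull temporal expressions out of a news sentence and attach a rough tone
# score: +1 per positive lexicon word, -1 per negative one.
import re

TEMPORAL = re.compile(r"\b(?:next|last)\s+(?:week|month|year)|\b\d{4}\b")
POSITIVE = {"growth", "record", "strong", "wins"}
NEGATIVE = {"crisis", "losses", "recall", "breach"}

def analyze(text):
    words = re.findall(r"[a-z]+", text.lower())
    tone = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {"temporal": TEMPORAL.findall(text), "tone": tone}

result = analyze("Analysts expect strong growth next year, despite the 2012 breach.")
```

Each extracted date or relative expression is what lets references be “organized in time,” as the quote puts it; the tone score is the sentiment measure.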

 

Ref: RecordedFuture

Did Google Earth Error Send Murderer to Wrong Address?

 

Sometimes, even after a murder conviction, some see reasonable doubt that the conviction was a righteous one.

Such is the case in the murder of Dennis and Merna Koula in La Crosse, Wis., a quiet community.

[…]

A neighbor of the Koulas, Steve Burgess, freely admitted that he had received death threats. He was the president of a local bank.

And, as the CBS News investigation indicated, if you use Google Earth to locate Burgess’ house, you get a surprise.

“48 Hours” correspondent Peter Van Sant said: “In fact, when you Google Earth Steve Burgess’ address…the zoom into the house goes to the Koulas’ house, not to Steve Burgess’ house.”

Police say they discounted the threatening caller, as they located him and he had an alibi. But then could that individual have hired someone to do any allegedly required dirty work, a person who used Google Earth to go to the wrong house?

 

Ref: Did Google Earth error send murderer to wrong address? – CNET

 

Ayasdi

 

Their new product is called the Iris Insight Discovery platform. It’s a type of machine learning that uses hundreds of algorithms and topological data analysis to mine huge datasets before presenting the results in a visually accessible way. Using algebraic topology, the system automatically hunts down data points close in nature and maps these out to reveal a network of patterns for a researcher to decipher — any closely related nodes of information will be connected and clustered together, like how a social network arranges its data according to relationship connections.
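A rough analogue of the “connect closely related points” step described above: link any two data points within a distance threshold and report the resulting clusters. (Ayasdi’s Iris uses full topological data analysis; this epsilon-graph with connected components is only a simplified stand-in.)

```python
# Link points closer than eps, then return the connected clusters -- the
# "closely related nodes clustered together" idea, in miniature.
from itertools import combinations
import math

def cluster(points, eps):
    parent = list(range(len(points)))     # union-find over point indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) < eps:
            parent[find(i)] = find(j)      # merge the two components

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

data = [(0, 0), (0.5, 0.2), (0.3, 0.4), (5, 5), (5.2, 4.9)]
clusters = cluster(data, eps=1.0)
```

In Iris, the analogous network is what the researcher then inspects visually for patterns.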

 

 

Ref: Data-Visualization Firm’s New Software Autonomously Finds Abstract Connections – Wired
Ref: Ayasdi

Google Can Identify Which of its 20,000 Employees are Most Likely to Quit

The Internet search giant recently began crunching data from employee reviews and promotion and pay histories in a mathematical formula Google says can identify which of its 20,000 employees are most likely to quit.

The move is one of a series Google has made to prevent its most promising engineers, designers and sales executives from leaving at a time when its once-powerful draws — a start-up atmosphere and soaring stock price — have been diluted by its growing size. The data crunching supplements more traditional measures like employee training and leadership meetings to evaluate talent.

Google’s algorithm helps the company “get inside people’s heads even before they know they might leave,” said Laszlo Bock, who runs human resources for the company.
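Google has not disclosed its formula, but a hypothetical sketch over the signals the article names (reviews, promotions, pay) might look like a simple logistic score; the weights below are entirely invented:

```python
# Invented attrition-risk score: combine time since last promotion, review
# score, and pay relative to market into a logistic probability-like value.
import math

def attrition_risk(years_since_promotion, review_score, pay_vs_market):
    """Return a score in (0, 1); higher means more likely to quit.
    pay_vs_market is the employee's pay divided by the market rate."""
    z = (0.8 * years_since_promotion      # stalled careers raise risk
         - 1.2 * review_score             # strong reviews lower it (assumed)
         + 3.0 * (1 - pay_vs_market))     # below-market pay raises it
    return 1 / (1 + math.exp(-z))

flight_risk = attrition_risk(years_since_promotion=4, review_score=2.0,
                             pay_vs_market=0.85)
settled = attrition_risk(years_since_promotion=1, review_score=4.5,
                         pay_vs_market=1.05)
```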

 

Ref: Google Searches for Staffing Answers – The Wall Street Journal

MindMeld – Anticipatory Computing

 

We call this platform our ‘Anticipatory Computing Engine’, and it has three unique capabilities designed to facilitate conversational interactions:

  1. Real-Time, Multi-Party Conversation Analysis: Our platform is designed to analyze and understand multiple concurrent streams of conversational dialogue in real-time. It continuously analyzes audio signals and attempts to understand their underlying meaning. Based on this understanding, it not only attempts to identify key concepts and topics related to your conversation, but it also uses language structure and analysis to infer what types of information you may find most useful.
  2. Continuous, Predictive Modeling: Our platform observes conversations over time and generates a model to represent the meaning of each conversation. This model changes from second-to-second as the conversation evolves. This model is then extrapolated to predict the topics, concepts and related information that may be relevant in the future. In essence, this platform analyzes and understands the past ten minutes of a conversation in order to predict what may be relevant in the next ten seconds.
  3. Proactive Information Discovery: Our platform does not wait for a user to explicitly ask for information. Instead, it uses its underlying predictive model to identify information that is most likely to be relevant at every point in time. It then proactively finds and retrieves this information – from across the web or from a user’s social graph – and delivers this information to the user, in some cases before they even request it.
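The three capabilities above can be caricatured in a few lines: keep a sliding window of recent utterances, rank topic words by recency-weighted frequency, and surface the top topic before anyone asks. (The utterances, window size, and weighting are invented; this is not Expect Labs’ engine.)

```python
# Toy anticipatory model: the most recent utterances weigh most heavily
# when predicting which topic to prefetch information about next.
from collections import deque, Counter

class ConversationModel:
    def __init__(self, window=5):
        self.window = deque(maxlen=window)  # only the recent past is kept

    def hear(self, utterance):
        self.window.append(utterance.lower().split())

    def predict_topic(self):
        """Recency-weighted word counts: newer utterances count for more."""
        scores = Counter()
        for age, words in enumerate(self.window, start=1):
            for w in words:
                if len(w) > 3:           # crude stop-word filter
                    scores[w] += age     # later utterances get higher weight
        return scores.most_common(1)[0][0]

model = ConversationModel()
for line in ["we should plan the trip", "flights to tokyo are cheap",
             "tokyo hotels book up fast", "check tokyo weather in april"]:
    model.hear(line)
topic = model.predict_topic()
```

The predicted topic is what a proactive system would then go fetch results for, mirroring the “last ten minutes predicts the next ten seconds” framing.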

 

Ref: Expect Labs
Ref: Smart Assistant Listens to You Talk, Fetches Info Automatically – MIT Technology Review