Category Archives: T – politic

Using Twitter’s Sentiment Analysis for Predicting Stock Market

There are two central drivers of stock prices: fundamentals (sales, revenues, profits, etc.) and how investors feel about those fundamentals (sentiment). Sentiment tends to drive short-term pricing erratically, while the longer cycles move on fundamentals. If you talk to a buy-and-hold investor, like Warren Buffett, he will tell you short-term investing (day trading, for example) is a fool’s game: there is no predicting sentiment.

But Derwent Capital Management (DCM) thinks that may have been true once, in ancient times before information technology enabled social networks. Now there is a wealth of hard data on real-time sentiment. All one must do is set up an algorithm to mine it, process it, put it on a scale (in this case from 0 to 100) and sell it to retail investors.

And that’s exactly what they’ve done.

Wondering what your favorite stock (or currency pair or commodity) is about to do? You need merely check the DCM trading platform’s Twitter indicator.
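DCM has not published how its indicator works, so here is only a rough sketch of what a 0–100 Twitter sentiment gauge could look like. The word lists, tickers and scoring rule below are all invented for illustration; a real system would use far richer language models and data feeds.

```python
# Minimal lexicon-based sentiment indicator (illustrative only).
# Scores a batch of tweets and rescales the mean polarity to a 0-100 gauge.

POSITIVE = {"gain", "bullish", "up", "beat", "strong"}
NEGATIVE = {"loss", "bearish", "down", "miss", "weak"}

def tweet_polarity(text: str) -> int:
    """Return +1, -1, or 0 for a single tweet."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

def sentiment_indicator(tweets: list[str]) -> float:
    """Average polarity mapped from [-1, 1] onto a 0-100 scale."""
    if not tweets:
        return 50.0  # neutral reading when there is no data
    mean = sum(tweet_polarity(t) for t in tweets) / len(tweets)
    return (mean + 1) * 50

tweets = ["$AAPL beat estimates, bullish", "weak guidance, bearish on $AAPL"]
print(sentiment_indicator(tweets))  # 50.0: one positive and one negative tweet
```

One positive and one negative tweet cancel out to a neutral 50; an all-bullish feed would read 100.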

Ref: Can Twitter Tell You When to Buy and When to Sell? – The SingularityHub

Algorithms and Bias

Could software agents/bots have bias?

This question is addressed by Nick Diakopoulos in his article ‘Understanding bias in computational news media‘. Even though the article focuses on algorithms related to news (e.g. Google News), it is interesting to ask this question for any kind of algorithm. Could algorithms have their own politics?

Even robots have biases.

Any decision process, whether human or algorithm, about what to include, exclude, or emphasize — processes of which Google News has many — has the potential to introduce bias. What’s interesting in terms of algorithms though is that the decision criteria available to the algorithm may appear innocuous while at the same time resulting in output that is perceived as biased.

Algorithms may lack the semantics for understanding higher-order concepts like stereotypes or racism — but if, for instance, the simple and measurable criteria they use to exclude information from visibility somehow do correlate with race divides, they might appear to have a racial bias. […] In a story about the Israeli-Palestinian conflict, say, is it possible their algorithm might disproportionately select sentences that serve to emphasize one side over the other?
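The mechanism Diakopoulos describes can be made concrete with a toy simulation. The data and the "engagement" threshold below are entirely hypothetical; the point is only that a selection rule which never mentions a sensitive attribute can still filter along it when an innocuous-looking criterion happens to correlate with that attribute.

```python
from collections import Counter

# Hypothetical data: each article has a neutral-looking feature (outlet
# "engagement") and a sensitive attribute (the community it covers).
articles = [
    {"community": "A", "engagement": 90},
    {"community": "A", "engagement": 75},
    {"community": "B", "engagement": 40},
    {"community": "B", "engagement": 35},
    {"community": "A", "engagement": 80},
    {"community": "B", "engagement": 55},
]

# The selection rule never looks at the community -- it only applies an
# apparently innocuous engagement threshold.
visible = [a for a in articles if a["engagement"] >= 60]

print(Counter(a["community"] for a in visible))  # Counter({'A': 3})
```

Because engagement correlates with community in this (invented) data, only community A articles survive the filter, so the output looks biased even though the criterion itself seems neutral.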

For example, could we say that the RiceMaker algorithm – which automates the vocabulary game on FreeRice to generate rice donations – has a ‘left-wing’ political orientation?

Ref: Understanding bias in computational news media – Nieman Journalism Lab
Ref: RiceMaker – via #algopop

A New Approach to Decision-Making

How to make sense of Philadelphia’s City Council district map?

Even with the best of intentions, districting problems can be difficult to solve because they are so complex, says Kimbrough, who specializes in computational intelligence. The key to finding the best solution, he suggests, is to start with not one but many good solutions, and let decision makers tweak plans from there.

The team created a genetic algorithm that mimics the evolution and natural selection of the various districts, proposing endless variations from just a few good starting plans.

The team then selected 116 of the best variations. Human decision-makers now have concrete material to choose among the many options generated by an algorithm.
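The Wharton team's actual model is far more elaborate, but the evolutionary loop described above can be sketched in miniature. The ward populations, district count and fitness function (pure population balance) below are invented for illustration.

```python
import random

random.seed(0)

WARD_POPULATIONS = [12, 9, 15, 7, 11, 14, 8, 10]  # hypothetical wards
NUM_DISTRICTS = 2

def fitness(assignment):
    """Negative population imbalance between districts: higher is better."""
    totals = [0] * NUM_DISTRICTS
    for ward, district in enumerate(assignment):
        totals[district] += WARD_POPULATIONS[ward]
    return -(max(totals) - min(totals))

def mutate(assignment):
    """Reassign one random ward to a random district."""
    child = assignment[:]
    child[random.randrange(len(child))] = random.randrange(NUM_DISTRICTS)
    return child

# Start from a few reasonable seed plans and evolve a large pool of variations.
seeds = [[i % NUM_DISTRICTS for i in range(len(WARD_POPULATIONS))]]
pool = seeds[:]
for _ in range(200):
    parent = max(random.sample(pool, min(3, len(pool))), key=fitness)
    pool.append(mutate(parent))

# Hand the best handful of distinct plans to human decision-makers.
best = sorted(pool, key=fitness, reverse=True)[:5]
for plan in best:
    print(plan, "imbalance:", -fitness(plan))
```

Instead of returning a single "optimal" map, the loop deliberately surfaces several good plans, leaving the judgment calls about neighborhoods and ward splits to humans, as Murphy describes below.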

This is an interesting example where humans and algorithms are working together to solve a problem.

“In the end, there are a lot of human judgments that go on here,” notes Murphy. “What really is that neighborhood? Can you split the wards?…. Generating one solution is not a good idea because there are all these side issues that you can’t represent mathematically. This always happens, whether in political districting or in commercial applications.”

Ref: A New Approach to Decision Making: When 116 Solutions Are Better Than One – Knowledge Wharton

Data Cruncher Who Helped Obama Win

The analytics team used four streams of polling data to build a detailed picture of voters in key states. In the past month, said one official, the analytics team had polling data from about 29,000 people in Ohio alone — a whopping sample that composed nearly half of 1% of all voters there — allowing for deep dives into exactly where each demographic and regional group was trending at any given moment. This was a huge advantage: when polls started to slip after the first debate, they could check to see which voters were changing sides and which were not.

It was this database that helped steady campaign aides in October’s choppy waters, assuring them that most of the Ohioans in motion were not Obama backers but likely Romney supporters whom Romney had lost because of his September blunders. “We were much calmer than others,” said one of the officials. The polling and voter-contact data were processed and reprocessed nightly to account for every imaginable scenario. “We ran the election 66,000 times every night,” said a senior official, describing the computer simulations the campaign ran to figure out Obama’s odds of winning each swing state. “And every morning we got the spit-out — here are your chances of winning these states. And that is how we allocated resources.”
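"Running the election 66,000 times every night" describes a Monte Carlo simulation over swing states. The campaign's model was obviously far richer, but the basic idea can be sketched as follows; the states, win probabilities and electoral-vote threshold here are purely illustrative.

```python
import random

random.seed(42)

# Hypothetical per-state win probabilities from internal polling, paired
# with each state's electoral votes. (Numbers are illustrative only.)
STATES = {"Ohio": (0.60, 18), "Florida": (0.50, 29), "Virginia": (0.55, 13)}
VOTES_NEEDED = 35  # out of the 60 swing-state votes modeled here

def simulate_once():
    """Simulate one election night across the modeled states."""
    won = 0
    for p_win, electoral_votes in STATES.values():
        if random.random() < p_win:
            won += electoral_votes
    return won >= VOTES_NEEDED

def win_probability(trials=66_000):
    """Fraction of simulated elections that clear the threshold."""
    return sum(simulate_once() for _ in range(trials)) / trials

print(f"estimated chance of winning: {win_probability():.1%}")
```

Each morning's "spit-out" is just this estimate per state and overall, recomputed as the nightly polling data updates the per-state probabilities.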

Team Romney, meanwhile, devised a clever app that finds which friends are most likely to be influential on Election Day, given their geography and history of Facebook political activity.
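The app's scoring method was not disclosed; as a purely hypothetical sketch, a ranking that combines the two signals named above (geography and political activity) might look like this. The fields, weights and cap are invented.

```python
# Hypothetical ranking of friends by likely Election Day influence.
SWING_STATES = {"OH", "FL", "VA", "CO"}

friends = [
    {"name": "Ana",  "state": "OH", "political_posts": 14},
    {"name": "Ben",  "state": "NY", "political_posts": 30},
    {"name": "Cara", "state": "FL", "political_posts": 2},
]

def influence_score(friend):
    # Living in a swing state outweighs raw posting volume, which is capped
    # so that one hyperactive poster cannot dominate the ranking.
    geo = 10 if friend["state"] in SWING_STATES else 0
    return geo + min(friend["political_posts"], 10)

ranked = sorted(friends, key=influence_score, reverse=True)
print([f["name"] for f in ranked])  # ['Ana', 'Cara', 'Ben']
```

Under these invented weights, a moderately active friend in Ohio outranks a prolific poster in a safe state.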

Ref: Inside the Secret World of the Data Cruncher Who Helped Obama Win – TIME
Ref: Wrath of the Math: Obama Wins Nerdiest Election Ever – Wired
Ref: Romney’s New Facebook App Knows Which Friends Are Most Influential – TechCrunch
Ref: Google, Facebook And Twitter Want You To Use Their Election 2012 Web Tools – TPM