
Rapyuta: The RoboEarth Cloud Engine

 

European scientists have turned on the first part of a web-based database designed to help robots cope with the complexities of the world around them.

Called Rapyuta, the online “brain” describes objects robots have encountered and can also carry out complicated computation on behalf of a robot.

Rapyuta’s creators hope it will make robots cheaper, as they will not need to carry all their processing power on-board.
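Rapyuta itself exposes secured computing environments in the cloud that robots talk to over the network; the sketch below only illustrates the offloading idea in miniature. The feature tuples, the `cloud_object_lookup` function, and the tiny object database are all hypothetical, not part of the real RoboEarth API.

```python
# Hypothetical sketch of cloud offloading, the idea behind Rapyuta:
# a robot with limited on-board compute delegates object recognition
# to a remote shared knowledge base instead of running it locally.
# Networking is mocked out; the real engine communicates over the web.

def cloud_object_lookup(features):
    """Stand-in for a query against a RoboEarth-style shared database."""
    database = {
        ("red", "cylinder"): "soda can",
        ("white", "box"): "fridge",
    }
    return database.get(tuple(features), "unknown object")

class Robot:
    """On-board logic stays lightweight: perceive features, ask the cloud."""
    def perceive(self, features):
        return cloud_object_lookup(features)

print(Robot().perceive(["red", "cylinder"]))  # soda can
```

Because every robot queries the same database, knowledge gained by one robot (a new object description) becomes available to all of them.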

 

Ref: Web Database for Robots Comes Online – DarkGovernment
Ref: Rapyuta

Decision Support System Optimizer by IBM

New computer technology that can predict traffic patterns could help ease congestion on busy roads, tech giant IBM announced November 14.

The company is wrapping up research on its new “predictive traffic management technology,” developed and tested in Lyon, France.

The technology – IBM calls it a Decision Support System Optimizer (DSSO) – uses real-time traffic data to predict the best way to keep cars moving when a gridlock-sparking incident happens, reports Mashable.

In fact, DSSO can even help cities “anticipate and avoid many traffic jams before they happen, and lessen their impact on citizens,” Lyon mayor Gérard Collomb said in a statement.

While modern transportation centres often have incident response plans and techniques they can employ to prevent bottlenecks, they’re unable to factor past and future traffic patterns into their actions, IBM points out.

Not only do the DSSO’s algorithms use historical traffic data to predict future patterns, they also “learn” from the implementation of successful traffic management plans.
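IBM has not published DSSO’s internals, but the core idea the article describes, blending historical traffic patterns with real-time readings to anticipate congestion, can be sketched in a few lines. Everything here (class name, segment IDs, the blending weight `alpha`) is a hypothetical illustration, not IBM’s algorithm.

```python
# Hypothetical sketch: estimate near-future congestion on a road segment
# by blending the historical average occupancy for this time slot with the
# latest real-time reading. The "learning" is just accumulating history.
from collections import defaultdict

class CongestionPredictor:
    def __init__(self, alpha=0.6, threshold=0.8):
        self.alpha = alpha          # weight given to the real-time signal
        self.threshold = threshold  # occupancy ratio that counts as a jam
        self.history = defaultdict(list)  # (segment, slot) -> past readings

    def record(self, segment, time_slot, occupancy):
        """Store an observed occupancy (0.0-1.0) so future predictions improve."""
        self.history[(segment, time_slot)].append(occupancy)

    def predict(self, segment, time_slot, current_occupancy):
        """Blend the historical average with the live reading."""
        past = self.history.get((segment, time_slot))
        hist_avg = sum(past) / len(past) if past else current_occupancy
        return self.alpha * current_occupancy + (1 - self.alpha) * hist_avg

    def jam_likely(self, segment, time_slot, current_occupancy):
        return self.predict(segment, time_slot, current_occupancy) >= self.threshold

p = CongestionPredictor()
p.record("A6-north", "08:00", 0.90)
p.record("A6-north", "08:00", 0.85)
print(p.jam_likely("A6-north", "08:00", 0.80))  # True: history pushes the estimate up
```

The point of the blend is the one the article makes: a live reading of 0.80 alone sits below the jam threshold, but the 08:00 history for that segment pulls the estimate above it, so the system can act before the jam fully forms.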

 

Ref: IBM Software Predicts Traffic Jams, Stops Them – Sympatico

RecordedFuture

 

We continually scan tens of thousands of high-quality, online news publications, blogs, public niche sources, trade publications, government web sites, financial databases and more.
From these open web sites, we identify references to entities and events. These are organized in time by extracting publication date and any temporal expressions in the text. Each reference is linked to the original source and measured for online momentum and tone of language: positive or negative sentiment.
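The quoted pipeline (find references, extract temporal expressions, score tone) can be illustrated with a deliberately crude sketch. The keyword lists, the ISO-date-only regex, and the `analyze` function are all assumptions for illustration; Recorded Future’s real NLP is far richer.

```python
# Hypothetical sketch of reference extraction: pull explicit dates
# (temporal expressions) from a news snippet and assign a crude
# positive/negative tone from keyword counts.
import re

POSITIVE = {"gain", "growth", "success", "agreement", "record"}
NEGATIVE = {"crisis", "loss", "attack", "collapse", "protest"}

def analyze(text):
    # Temporal expressions: only ISO dates here; real systems also
    # resolve phrases like "next Tuesday" against the publication date.
    dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    tone = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"dates": dates, "tone": tone}

print(analyze("On 2013-02-14 the summit ended in agreement and record growth."))
# {'dates': ['2013-02-14'], 'tone': 'positive'}
```

Organizing every extracted reference by its dates, rather than by when it was published, is what lets such a system lay events out on a timeline that extends into the future.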

 

Ref: RecordedFuture

Ayasdi

 

Their new product is called the Iris Insight Discovery platform. It’s a type of machine learning that uses hundreds of algorithms and topological data analysis to mine huge datasets before presenting the results in a visually accessible way. Using algebraic topology, the system automatically hunts down data points close in nature and maps these out to reveal a network of patterns for a researcher to decipher — any closely related nodes of information will be connected and clustered together, like how a social network arranges its data according to relationship connections.
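The clustering idea described above, connect data points that are close in nature and read clusters off the resulting network, can be sketched with a simple distance graph. This is only the intuition: the `cluster` function, the `eps` radius, and the toy points are assumptions, and Ayasdi’s Iris uses far more sophisticated topological (mapper-style) machinery.

```python
# Hypothetical sketch: link points whose pairwise distance is <= eps,
# then report clusters as connected components (via union-find).
from math import dist

def cluster(points, eps):
    """Group points into connected components of the eps-proximity graph."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)  # merge the two components

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(points[i])
    return list(groups.values())

pts = [(0, 0), (0.5, 0), (10, 10), (10.4, 10)]
print(cluster(pts, eps=1.0))  # two clusters: one near the origin, one near (10, 10)
```

The payoff the article describes is visual: once closely related nodes are connected, the researcher sees the shape of the dataset, much as a social network’s graph reveals communities, without having to ask a question of the data first.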

 

 

Ref: Data-Visualization Firm’s New Software Autonomously Finds Abstract Connections – Wired
Ref: Ayasdi

MindMeld – Anticipatory Computing

 

We call this platform our ‘Anticipatory Computing Engine’, and it has three unique capabilities designed to facilitate conversational interactions:

  1. Real-Time, Multi-Party Conversation Analysis: Our platform is designed to analyze and understand multiple concurrent streams of conversational dialogue in real-time. It continuously analyzes audio signals and attempts to understand their underlying meaning. Based on this understanding, it not only attempts to identify key concepts and topics related to your conversation, but it also uses language structure and analysis to infer what types of information you may find most useful.
  2. Continuous, Predictive Modeling: Our platform observes conversations over time and generates a model to represent the meaning of each conversation. This model changes from second-to-second as the conversation evolves. This model is then extrapolated to predict the topics, concepts and related information that may be relevant in the future. In essence, this platform analyzes and understands the past ten minutes of a conversation in order to predict what may be relevant in the next ten seconds.
  3. Proactive Information Discovery: Our platform does not wait for a user to explicitly ask for information. Instead, it uses its underlying predictive model to identify information that is most likely to be relevant at every point in time. It then proactively finds and retrieves this information – from across the web or from a user’s social graph – and delivers this information to the user, in some cases before they even request it.
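The loop the three capabilities describe, model the recent conversation, weight recent mentions more heavily, then proactively surface what is likely relevant next, can be sketched as below. The topic keyword sets, the `decay` factor, and the `AnticipatoryModel` class are all hypothetical; MindMeld’s actual engine is not public.

```python
# Hypothetical sketch of anticipatory computing: keep a rolling window of
# transcript words, score candidate topics with recency-weighted keyword
# counts, and return the topic most likely to matter in the next moments.
from collections import deque

TOPICS = {
    "travel": {"flight", "hotel", "airport", "trip"},
    "food": {"restaurant", "dinner", "menu", "reservation"},
}

class AnticipatoryModel:
    def __init__(self, window=50, decay=0.9):
        self.window = deque(maxlen=window)  # most recent transcript words
        self.decay = decay                  # older words count for less

    def hear(self, utterance):
        """Append a new utterance to the rolling transcript window."""
        self.window.extend(utterance.lower().split())

    def predict_topic(self):
        """Score topics by recency-weighted keyword hits; newest word has age 0."""
        scores = {topic: 0.0 for topic in TOPICS}
        for age, word in enumerate(reversed(self.window)):
            weight = self.decay ** age
            for topic, keywords in TOPICS.items():
                if word.strip(".,?!") in keywords:
                    scores[topic] += weight
        return max(scores, key=scores.get)

m = AnticipatoryModel()
m.hear("we should book a flight and find a hotel near the airport")
print(m.predict_topic())  # travel
```

A real system would then use the predicted topic to fire off searches against the web or the user’s social graph before anyone asks, which is the “proactive discovery” step in point 3.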

 

Ref: Expect Labs
Ref: Smart Assistant Listens to You Talk, Fetches Info Automatically – MIT Technology Review

How Artificial Intelligence Sees Us

 

Right now, there is a neural network of 1,000 computers at Google’s X lab that has taught itself to recognize humans and cats on the internet. But the network has also learned to recognize some weirder things, too. What can this machine’s unprecedented new capability teach us about what future artificial intelligences might actually be like?

 

This is, to me, the most interesting part of the research. What are the patterns in human existence that jump out to non-human intelligences? Certainly 10 million videos from YouTube do not comprise the whole of human existence, but it is a pretty good start. They reveal a lot of things about us we might not have realized, like a propensity to orient tools at 30 degrees. Why does this matter, you ask? It doesn’t matter to you, because you’re human. But it matters to XNet.

What else will matter to XNet? Will it really discern a meaningful difference between cats and humans? What about the difference between a tool and a human body? This kind of question is a major concern for University of Oxford philosopher Nick Bostrom, who has written about the need to program AIs so that they don’t display a “lethal indifference” to humanity. In other words, he’s not as worried about a Skynet scenario where the AIs want to crush humans — he’s worried that AIs won’t recognize humans as being any more interesting than, say, a spatula. This becomes a problem if, as MIT roboticist Cynthia Breazeal has speculated, human-equivalent machine minds won’t emerge until we put them into robot bodies. What if XNet exists in a thousand robots, and they all decide for some weird reason that humans should stand completely still at 30-degree angles? That’s some lethal indifference right there.

 

Ref: How artificial intelligences will see us – io9
Ref: Building High-level Features Using Large Scale Unsupervised Learning – Google Research