Predicting the Future with Data from the Past


It’s not the mathematics. Turchin says his methods aren’t very complex. He’s using common statistical techniques like spectrum analysis — “I used much more sophisticated statistical methods in ecology,” he says. And it’s not “big data” tools. The data sets he’s using aren’t all that big. He can analyze them using ordinary statistical software. But he couldn’t have built these models even a few decades ago because historians and archivists have only recently started digitizing newspapers and public records from throughout history and putting them online. That gives cliodynamics the opportunity to quantify what has happened in the past — and make predictions based on that data.


What Turchin and his colleagues have found is a pattern of social instability. It applies to all agrarian states for which records are available, including Ancient Rome, Dynastic China, Medieval England, France, Russia, and, yes, the United States. Basically, the data show 100-year waves of instability, and superimposed on each wave — which Turchin calls the “Secular Cycle” — there’s typically an additional 50-year cycle of widespread political violence. The 50-year cycles aren’t universal — they don’t appear in China, for instance. But they do appear in the United States.
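The spectrum analysis Turchin mentions can be illustrated with a toy example. The series below is entirely synthetic (the amplitudes, noise level, and time span are invented for illustration, not Turchin's data): a 100-year wave with a superimposed 50-year cycle, whose periods a simple FFT periodogram then recovers.

```python
import numpy as np

# Synthetic "instability index": a 100-year secular wave plus a
# superimposed 50-year cycle, with noise (illustrative only).
years = np.arange(0, 1000)          # 1,000 years of annual data
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * years / 100)
          + 0.5 * np.sin(2 * np.pi * years / 50)
          + 0.3 * rng.standard_normal(years.size))

# Periodogram: power at each frequency component.
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)          # cycles per year

# Report the two strongest periodicities (skipping the DC bin).
peaks = np.argsort(power[1:])[::-1][:2] + 1
for k in sorted(peaks):
    print(f"period of about {1 / freqs[k]:.0f} years")
```

With real historical series the peaks are far less clean, but the principle (find dominant periodicities in a time series) is the same ordinary statistics the article describes.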


Turchin takes pains to emphasize that the cycles are not the result of iron-clad rules of history, but of feedback loops — just like in ecology. “In a predator-prey cycle, such as mice and weasels or hares and lynx, the reason why populations go through periodic booms and busts has nothing to do with any external clocks,” he writes. “As mice become abundant, weasels breed like crazy and multiply. Then they eat down most of the mice and starve to death themselves, at which point the few surviving mice begin breeding like crazy and the cycle repeats.”
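The mice-and-weasels feedback loop Turchin describes is the classic Lotka-Volterra predator-prey model. A minimal simulation (parameters invented for illustration, not fitted to any real population) shows the boom-bust cycle emerging with no external clock:

```python
# Minimal Lotka-Volterra predator-prey simulation using Euler steps.
# All rate constants are illustrative.

def simulate(steps=20000, dt=0.001):
    mice, weasels = 10.0, 5.0      # initial populations
    history = []
    for _ in range(steps):
        # Mice breed; weasels eat mice; weasels starve without mice.
        d_mice = 1.0 * mice - 0.1 * mice * weasels
        d_weasels = 0.075 * mice * weasels - 1.5 * weasels
        mice += d_mice * dt
        weasels += d_weasels * dt
        history.append((mice, weasels))
    return history

history = simulate()
peak_mice = max(m for m, _ in history)
low_mice = min(m for m, _ in history)
print(f"mice oscillate between ~{low_mice:.1f} and ~{peak_mice:.1f}")
```

The oscillation comes entirely from the coupling between the two populations, which is exactly the point Turchin makes about his cycles: feedback, not an iron-clad schedule.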


Ref: Mathematicians Predict the Future With Data From the Past – Wired

Hallucinating Humans for Robotic Scene Understanding


Roboticist Ashutosh Saxena and his colleagues at Cornell University’s Personal Robotics Lab reasoned that people are likely the most important factor for robots to keep in mind in the places where they work, since those environments are typically designed around human use. As such, when people are not actually there for reference, hallucinating their presence could help provide key context for the machines.


Ref: Robots can learn by imagining the existence of people – io9

UK Debating Killer Robots


Representatives from both sides of the House of Commons in the United Kingdom agree that fully autonomous weapons raise numerous concerns warranting further deliberation, including at the international level. However, Alistair Burt, the Parliamentary Under Secretary of State at the Foreign and Commonwealth Office, emphasized that the government does not support the call for a moratorium on these future weapons, which would select and attack targets without further human intervention and were described as “lethal autonomous robotics” in the parliamentary adjournment debate held late in the evening of 17 June 2013.

Burt said the statement that ‘robots may never be able to meet the requirements of international humanitarian law’ is “absolutely correct; they will not. We cannot develop systems that would breach international humanitarian law, which is why we are not engaged in the development of such systems and why we believe that the existing systems of international law should prevent their development.” He emphasized that as a matter of policy, “Her Majesty’s Government are clear that the operation of our weapons will always be under human control as an absolute guarantee of human oversight and authority and of accountability for weapons usage.”


Ref: United Kingdom debating killer robots – Campaign to Stop Killer Robots

New Google Maps

Last February, in an interview with the technology blog TechCrunch, a senior Google executive expressed a rather philosophical—even postmodernist—view on the future of maps. “If you look at a map and if I look at a map, should it always be the same for you and me? I’m not sure about that, because I go to different places than you do,” said Daniel Graf, director of Google Maps for mobile.


There’s something profoundly conservative about Google’s logic. As long as advertising is the mainstay of its business, the company is not really interested in systematically introducing radical novelty into our lives. To succeed with advertisers, it needs to convince them that its view of us, its customers, is accurate and that it can generate predictions about where we are likely to go (or, for that matter, what we are likely to click). The best way to do that is to actually turn us into highly predictable creatures by artificially limiting our choices. Another way is to nudge us to go to places frequented by other people like us—like our Google Plus friends. In short, Google prefers a world where we consistently go to three restaurants to a world where our choices are impossible to predict.


Judging by the changes it seeks to make to maps, Google’s foray into the public space more broadly could have drastic implications. After all, it’s not just maps: Google’s self-driving cars and smart glasses will profoundly affect how we experience the world outside.


The problem with Google’s vision is that it doesn’t acknowledge the vital role that disorder, chaos, and novelty play in shaping the urban experience. Back in 1970, cultural critic Richard Sennett wrote a wonderful little book—The Uses of Disorder—that all Google engineers should read. In it, Sennett made a strong case for “dense, disorderly, overwhelming cities,” where strangers from very different socio-economic backgrounds still rub shoulders. Sennett’s ideal city is not just an agglomeration of ghettos and gated communities whose residents never talk to one another; rather, it’s the mutual entanglement between the two—and the occasional mess that such entanglements introduce into our daily life—that makes it an interesting place to live in and allows its inhabitants to turn into mature and complex human beings.

Google’s urbanism, on the other hand, is that of someone who is trying to get to a shopping mall in their self-driving car. It’s profoundly utilitarian, even selfish in character, with little to no concern for how public space is experienced. In Google’s world, public space is just something that stands between your house and the well-reviewed restaurant that you are dying to get to. Since no one formally reviews public space or mentions it in their emails, it might as well disappear from Google’s highly personalized maps. And if the promotional videos for Google Glass are anything to judge by, we might not even notice it’s gone: For all we know, we might be walking through an urban desert, but Google Glass will still make it look exciting, masking the blighted reality.

The Programmable World – Google Being


“You are with my Google Being. I’m not physically here, but I am present. Unified logins let us get to know our audience in ways we never could before. They gave us their locations so that we might better tell them if it was raining outside. They told us where they lived and where they wanted to go so that we could deliver a more immersive map that better anticipated what they wanted to do–it let us very literally tell people what they should do today. As people began to see how very useful Google Now was, they began to give us even more information. They told us to dig through their e-mail for their boarding passes–Imagine if you had to find it on your own!–they finally gave us permission to track and store their search and web history so that we could give them better and better Cards. And then there is the imaging. They gave us tens of thousands of pictures of themselves so that we could pick the best ones–yes we appealed to their vanity to do this: We’ll make you look better and assure you present a smiling, wrinkle-free face to the world–but it allowed us to also stitch together three-dimensional representations. Hangout chats let us know who everybody’s friends were, and what they had to say to them. Verbal searches gave us our users’ voices. These were intermediary steps. But it let us know where people were at all times, what they thought, what they said, and of course how they looked. Sure, Google Now could tell you what to do. But Google Being will literally do it for you.

“My Google Being anticipates everything I would think, everything I would want to say or do or feel,” Larry explained. “Everywhere I would go. Years of research have gone into this. It is in every way the same as me. So much so that my physical form is no longer necessary. It was just getting in the way, so we removed it. Keep in mind that for now at least, Google Being is just a developer product.”

Not only is this a snarky critique of Page’s recent comments, it also pairs nicely with the Programmable World piece.

What’s the goal of the Programmable World anyway? Is it that all of us in the developed world (because, of course, whole swaths of the human population will take no part in this vision) get to sleepwalk through our lives, freed from as many decisions and actions as possible? Better yet, is it the perpetual passive documentation of an automated life which is algorithmically predicted and performed for me by some future fusion of Google Now and the Programmable World?


Ref: The Programmable Island of Google Being – The Frailest Thing
Ref: Welcome to Google Island – Wired

Why We Need an Algorithm Ethic

The way the company [Facebook] handles its customer data seems highly dubious, but given its size we should come round to the idea that this type of data-driven, highly personalised portal for information and communication is not likely to disappear.

And why should it? It isn’t only the advertising industry that’s inspired by the opportunities, but also the users. After all, not one of Facebook’s 800 million customers was forced to open an account and use it for a daily average of 20 minutes. It is on an equally voluntary basis that a user posts the location of their favourite cafe on Foursquare to tell the whole world where they are at any given time, or uploads jogging routes to the internet to inform the world of every metre run. People love these services and feed the algorithms and databases with great enthusiasm because they want to share their data with the world.


Relevance is the reason why you see more and more people on the train with the paper in their lap while they hold their mobile in front of it and flick through their Twitter stream. Relevance is the reason why more hotel bookings are now made through recommendation platforms than all travel agents put together. It’s the reason why readers will prefer personalised news websites to traditional media.


Transparency is one of the most important principles when it comes to throwing light on the chaos. Algorithms have to be made transparent – in how they are implemented as well as how they work.


Ref: Why we need an algorithm ethic – The Guardian

Amplified Intelligence


The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.

The first step will be to create a direct neural link to information. Think of it as a “telepathic Google.”

The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualization and manipulation capabilities. Imagine being able to visualize a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of sensory cortex, like tactile cortex and auditory cortex.

The third step involves the genuine augmentation of pre-frontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-MacGyvers, people who perform apparently impossible intellectual feats. For instance, mind controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone age human — but the possibility is real.

Ref: Humans With Amplified Intelligence Could Be More Powerful Than AI – io9

RoboRoach: Control a Living Insect from your Smartphone!


Control the movements of a live cockroach from your own mobile device! This is the world’s first commercially available cyborg!

When you send the command from your mobile phone, the backpack sends pulses to the antenna, which causes the neurons to fire, which causes the roach to think there is a wall on one side. The result? The roach turns! Microstimulation is the same neurotechnology that is used to treat Parkinson’s Disease and is also used in Cochlear Implants.
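The control chain described above (phone command, antenna pulse, perceived wall, turn) can be caricatured in a few lines. Everything here is hypothetical, since the actual backpack firmware and its interface are not being described; the point is only the logic of the trick:

```python
# Toy model of the RoboRoach idea (class and method names invented):
# stimulating one antenna makes the roach "feel" a wall on that side,
# so it steers away in the opposite direction.

class ToyRoboRoach:
    def command(self, side):
        """Phone sends 'left' or 'right'; the backpack pulses that antenna."""
        if side not in ("left", "right"):
            raise ValueError("side must be 'left' or 'right'")
        perceived_wall = side                     # antenna neurons fire
        turn = "right" if perceived_wall == "left" else "left"
        return turn                               # roach turns away

roach = ToyRoboRoach()
print(roach.command("left"))    # stimulating the left antenna -> "right"
```

The real system adds Bluetooth, pulse frequency, and habituation (roaches learn to ignore the stimulus), but the steering logic is this simple.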


Ref: The RoboRoach: Control a living insect from your smartphone! – Kickstarter

Predictive Apps


Apps that proactively help people with their lives represent a significant departure from earlier approaches to software.

A new type of mobile app is departing from a long-standing practice in computing. Typically, computers have just dumbly waited for their human operators to ask for help. But now applications based on machine learning software can speak up with timely information even without being directly asked for it. They might automatically pull up a boarding pass for your flight just as you arrive at the airport, or tell you that current traffic conditions require you to leave for your next meeting within 10 minutes.
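The behavior described above amounts to context-triggered rules layered on top of the learned models. A heavily simplified sketch, with the context fields, thresholds, and card texts all invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical context snapshot a predictive app might assemble.
@dataclass
class Context:
    location: str
    minutes_to_next_meeting: int
    travel_minutes: int          # current traffic estimate
    has_boarding_pass: bool

def suggestions(ctx):
    """Fire proactive 'cards' when context conditions match."""
    cards = []
    if ctx.location == "airport" and ctx.has_boarding_pass:
        cards.append("Show boarding pass")
    # Leave-now card: traffic eats into the time before the meeting.
    slack = ctx.minutes_to_next_meeting - ctx.travel_minutes
    if slack <= 10:
        cards.append(f"Leave within {max(slack, 0)} minutes")
    return cards

ctx = Context("airport", minutes_to_next_meeting=45,
              travel_minutes=38, has_boarding_pass=True)
print(suggestions(ctx))   # ['Show boarding pass', 'Leave within 7 minutes']
```

A real system replaces the hand-written conditions with models mined from email, location history, and calendars, but the user-facing shape is the same: unprompted, context-triggered cards.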


These apps benefit from improved data mining techniques, but they’re also succeeding partly because of how they are presented to users. They are not cast as artificial butlers, a staple of science fiction that Apple tried to mimic with the voice-operated app Siri in 2010. Instead, apps like Google Now are intentionally made without personality and don’t pretend to be people.


But still, it represents a milestone in computing, she adds: “Google Now is kind of a sucky product, but I use it anyway. It’s important because it’s the first time Google has taken all they know about us to make a product that makes our lives better.”