Category Archives: W – future

Predicting the Future with Data from the Past

 

It’s not the mathematics. Turchin says his methods aren’t very complex. He’s using common statistical techniques like spectrum analysis — “I used much more sophisticated statistical methods in ecology,” he says. And it’s not “big data” tools. The data sets he’s using aren’t all that big. He can analyze them using ordinary statistical software. But he couldn’t have built these models even a few decades ago because historians and archivists have only recently started digitizing newspapers and public records from throughout history and putting them online. That gives cliodynamics the opportunity to quantify what has happened in the past — and make predictions based on that data.

[…]

What Turchin and his colleagues have found is a pattern of social instability. It applies to all agrarian states for which records are available, including Ancient Rome, Dynastic China, Medieval England, France, Russia, and, yes, the United States. Basically, the data shows 100 year waves of instability, and superimposed on each wave — which Turchin calls the “Secular Cycle” — there’s typically an additional 50-year cycle of widespread political violence. The 50-year cycles aren’t universal — they don’t appear in China, for instance. But they do appear in the United States.

[…]

Turchin takes pains to emphasize that the cycles are not the result of iron-clad rules of history, but of feedback loops — just like in ecology. “In a predator-prey cycle, such as mice and weasels or hares and lynx, the reason why populations go through periodic booms and busts has nothing to do with any external clocks,” he writes. “As mice become abundant, weasels breed like crazy and multiply. Then they eat down most of the mice and starve to death themselves, at which point the few surviving mice begin breeding like crazy and the cycle repeats.”

 

Ref: Mathematicians Predict the Future With Data From the Past – Wired

The Programmable World – Google Being

 

“You are with my Google Being. I’m not physically here, but I am present. Unified logins let us get to know our audience in ways we never could before. They gave us their locations so that we might better tell them if it was raining outside. They told us where they lived and where they wanted to go so that we could deliver a more immersive map that better anticipated what they wanted to do–it let us very literally tell people what they should do today. As people began to see how very useful Google Now was, they began to give us even more information. They told us to dig through their e-mail for their boarding passes–Imagine if you had to find it on your own!–they finally gave us permission to track and store their search and web history so that we could give them better and better Cards. And then there is the imaging. They gave us tens of thousands of pictures of themselves so that we could pick the best ones–yes we appealed to their vanity to do this: We’ll make you look better and assure you present a smiling, wrinkle-free face to the world–but it allowed us to also stitch together three-dimensional representations. Hangout chats let us know who everybody’s friends were, and what they had to say to them. Verbal searches gave us our users’ voices. These were intermediary steps. But it let us know where people were at all times, what they thought, what they said, and of course how they looked. Sure, Google Now could tell you what to do. But Google Being will literally do it for you.

“My Google Being anticipates everything I would think, everything I would want to say or do or feel,” Larry explained. “Everywhere I would go. Years of research have gone into this. It is in every way the same as me. So much so that my physical form is no longer necessary. It was just getting in the way, so we removed it. Keep in mind that for now at least, Google Being is just a developer product.”

Not only is this a snarky critique of Page’s recent comments, it also pairs nicely with the Programmable World piece.

What’s the goal of the Programmable World anyway?  Is it that all of us in the developed world (because, of course, whole swaths of the human population will take no part in this vision) get to sleepwalk through our lives, freed from as many decisions and actions as possible? Better yet, is it the perpetual passive documentation of an automated life which is algorithmically predicted and preformed for me by some future fusion of Google Now and the Programmable World.

 

Ref: The Programmable Island of Google Being – The Frailest Thing
Ref: Welcome to Google Island – Wired

Amplified Intelligence

 

The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.

 The first step will be to create a direct neural link to information. Think of it as a “telepathic Google.”

The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualization and manipulation capabilities. Imagine being able to imagine a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of sensory cortex, like tactile cortex and auditory cortex.

The third step involves the genuine augmentation of pre-frontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-McGyvers, people who perform apparently impossible intellectual feats. For instance, mind controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone age human — but the possibility is real.



Ref: Humans With Amplified Intelligence Could Be More Powerful Than AI – io9

Zoë

 

There is a lot of work on virtual heads, or avatars, at the moment – you can even use Microsoft’s Xbox Kinect system to create a virtual you to put in a game. But the team behind Zoe believe they have gone a step further by giving Zoe a range of human emotions expressed in her face and voice.

[…]

Professor Cipolla says Zoe is “the interface of the future”, part of a trend towards abandoning the keyboard and mouse and finding new ways of relating to computers.

[…]

Dr Bjorn Stenger, once one of Professor Cipolla’s doctoral students and now employed at the Toshiba lab, sees a number of uses: “Sending messages to your friends with your face on it,” he suggests. Virtual actors or game characters are another possibility – and then there is the prospect of virtual carers or call centre employees.

Losing Humanity: The Case against Killer Robots

On November 21, 2012, the US Department of Defense issued its first public policy on autonomy in weapons systems. Directive Number 3000.09 (the Directive) lays out guidelines for the development and use of autonomous and semi-autonomous weapon systems by the Department of Defense. The Directive also represents the first policy announcement by any country on fully autonomous weapons, which do not yet exist but would be designed to select and engage targets without human intervention.

The Directive does not put in place such a preemptive ban. For a period of up to ten years, however, it allows the Department of Defense to develop or use only fully autonomous systems that deliver non-lethal force, unless department officials waive the policy at a high level. Importantly, the Directive also recognizes some of the dangers to civilians of fully autonomous weapons and the need for prohibitions or controls, including the basic requirement that a human being be “in the loop” when decisions are made to use lethal force. The Directive is in effect a moratorium on fully autonomous weapons with the possibility for certain waivers. It also establishes guidelines for other types of autonomous and semi-autonomous systems.

While a positive step, the Directive does not resolve the moral, legal, and practical problems posed by the potential development of fully autonomous systems. As noted, it is initially valid for a period of only five to ten years, and may be overridden by high level Pentagon officials. It establishes testing requirements that may be unfeasible, fails to address all technological concerns, and uses ambiguous terms. It also appears to allow for transfer of fully autonomous systems to other nations and does not apply to other parts of the US government, such as the Central Intelligence Agency (CIA). Finally, it lays out a policy of voluntary self-restraint that may not be sustainable if other countries begin to deploy fully autonomous weapons systems, and the United States feels pressure to follow suit.

 

Ref: Review of the 2012 US Policy on Autonomy in Weapons Systems – Human Rights Watch
Ref: Say no to killer robots – The Engineer
Ref: Losing Humanity: The Case against Killer Robots – Human Rights Watch

Kurzweil on the Computers That Will Live in our Brains

“I think we’re going to ultimately move beyond these little devices that are like looking at the world through a keyhole,” Futurist Ray Kurzweil, the director of engineering at Google, says. “You’ll be online all the time. Google Glass is a solid first step.”

[…]

“Ultimately these devices will be the size of blood cells, we’ll be able to send them inside our brain through the capillaries, and basically connect up brain to the cloud,” Kurzweil says. “But that’s a mid-2030’s scenario.”

In Kurzweil’s vision, these advances don’t simply bring computers closer to our biological systems. Machines become more like us. “Your personality, your skills are contained in information in your neocortex, and it is information,” Kurzweil says. “These technologies will be a million times more powerful in 20 years and we will be able to manipulate the information inside your brain.”

He has a particular message for those who fear increasing sophisicated artificial intelligence.

“When computers can achieve these things it’s not for the purpose of displacing us it’s really to make ourselves smarter,” Kurzweil says. “And smarter in the sense of being more loving… Really enhancing the things that we value about humans.”

Today, he [Ray Kurzweil] envisions a “cybernetic friend” that listens in on your phone conversations, reads your e-mail, and tracks your every move—if you let it, of course—so it can tell you things you want to know even before you ask. This isn’t his immediate goal at Google, but it matches that of Google cofounder Sergey Brin, who said in the company’s early days that he wanted to build the equivalent of the sentient computer HAL in 2001: A Space Odyssey—except one that wouldn’t kill people.

For now, Kurzweil aims to help computers understand and even speak in natural language. “My mandate is to give computers enough understanding of natural language to do useful things—do a better job of search, do a better job of answering questions,” he says. Essentially, he hopes to create a more flexible version of IBM’s Watson, which he admires for its ability to understand Jeopardy!queries as quirky as “a long, tiresome speech delivered by a frothy pie topping.” (Watson’s correct answer: “What is a meringue harangue?”)

 

Ref: Google’s Ray Kurzweil on the computers that will live in our brains – MarketPlace
Ref: Deep Learning – MIT Technology Review

Google Wants to Build the Star Trek Computer

 

So I went to Google to interview some of the people who are working on its search engine. And what I heard floored me. “The Star Trek computer is not just a metaphor that we use to explain to others what we’re building,” Singhal told me. “It is the ideal that we’re aiming to build—the ideal version done realistically.” He added that the search team does refer to Star Trek internally when they’re discussing how to improve the search engine. “It comes up often,” Singhal said. “For instance, we might say, ‘Captain Kirk never pulled out a keyboard to ask a question.’ So in that way it becomes one of the design principles—we see that because the Star Trek computer actively relies on speech, if we want to do that we need to work to push the barrier of speech recognition and machine understanding.”

 […]

What does it mean that Google really is trying to build the Star Trek computer? I take it as a cue to stop thinking about Google as a “search engine.” That term conjures a staid image: a small box on a page in which you type keywords. A search engine has several key problems. First, most of the time it doesn’t give you an answer—it gives you links to an answer. Second, it doesn’t understand natural language; when you search, you’ve got to adopt the search engine’s curious, keyword-laden patois. Third, and perhaps most importantly, a search engine needs for you to ask it questions—it doesn’t pipe in with information when you need it, without your having to ask.

The Star Trek computer worked completely differently. It understood language and was conversational, it gave you answers instead of references to answers, and it anticipated your needs. “It was the perfect search engine,” Singhal said. “You could ask it a question and it would tell you exactly the right answer, one right answer—and sometimes it would tell you things you needed to know in advance, before you could ask it.”

How Will Driverless Cars Affects our Cities

 

Google is the most conspicuous developer of autonomous vehicles, but it is hardly alone in pursuing this venture. Most automakers are competing to introduce their own driverless cars to the public, and are doing so piecemeal, system by system. The components of the upcoming driverless car are being introduced into current models as ever more elaborate mechanisms to aid the driver, such as self-parking features and automated collision avoidance systems. Recently, a group of researchers at Oxford University developed a self-driving system which can be installed in existing manually driven vehicles, and whose cost is hoped to fall as low as 150 dollars within a matter of years.

Driverless cars will make it less “costly” for people to travel a given geographic distance, partly because they will be free to engage in other activities while travelling, but primarily because of reductions in travel time. Unlike human drivers, autonomous vehicles will follow optimal routes given real-time traffic conditions without fail. More crucially, as soon as suitable roads such as freeways (or lanes thereof) are declared off limits to manual driving, driverless cars will travel – safely – at much higher speeds than we do today. Gains in efficiency will follow from coordinated traffic management protocols, too. Once vehicles communicate with each other traffic through intersections and merges will flow much more smoothly than permitted by today’s traffic signals, stop signs and merging lanes, leading to substantial gains in travel time.

 

Ref: How Will Driverless Cars Affects our Cities – Meeting of the Minds

Human Extinction or a Future Among the Stars?

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.

[…]

It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal — something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance. Maybe it would surround itself with impenetrable defences, or maybe it would confine humans — in prisons of undreamt of efficiency.

[…]

‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn’t think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.’

 

Ref: Omens. When we peer into the fog of the deep future what do we see – human extinction or a future among the stars? – Aeon Magazine