Category Archives: W – future

How the Pentagon’s Skynet Would Automate War

Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years. By then, it believes, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.

Last week, US defense secretary Chuck Hagel announced the ‘Defense Innovation Initiative’—a sweeping plan to identify and develop cutting edge technology breakthroughs “over the next three to five years and beyond” to maintain global US “military-technological superiority.” Areas to be covered by the DoD programme include robotics, autonomous systems, miniaturization, Big Data and advanced manufacturing, including 3D printing.

[…]

A key area emphasized by the Wells and Kadtke study is improving the US intelligence community’s ability to automatically analyze vast data sets without the need for human involvement.

Pointing out that “sensitive personal information” can now be easily mined from online sources and social media, they call for policies on “Personally Identifiable Information (PII) to determine the Department’s ability to make use of information from social media in domestic contingencies”—in other words, to determine under what conditions the Pentagon can use private information on American citizens obtained via data-mining of Facebook, Twitter, LinkedIn, Flickr and so on.

Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through “monitoring of individuals and populations using sensors, wearable devices, and IoT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.” The Pentagon can build capacity for this “in partnership with large private sector providers, where the most innovative solutions are currently developing.”

[…]

Within this context of Big Data and cloud robotics, Kadtke and Wells enthuse that as unmanned robotic systems become more intelligent, the cheap manufacture of “armies of Kill Bots that can autonomously wage war” will soon be a reality. Robots could also become embedded in civilian life to perform “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

[…]

Perhaps the most disturbing of the NDU study’s insights is the prospect that within the next decade, artificial intelligence (AI) research could spawn “strong AI”—or at least a form of “weak AI” that approximates some features of the former.

Strong AI should be able to simulate a wide range of human cognition and include traits like consciousness, sentience, sapience, or self-awareness. Many now believe, Kadtke and Wells observe, that “strong AI may be achieved sometime in the 2020s.”

[…]

Nearly half the people on the US government’s terrorism watch list of “known or suspected terrorists” have “no recognized terrorist group affiliation,” and more than half the victims of CIA drone-strikes over a single year were “assessed” as “Afghan, Pakistani and unknown extremists”—among others who were merely “suspected, associated with, or who probably” belonged to unidentified militant groups. Multiple studies show that a substantial number of drone strike victims are civilians—and a secret Obama administration memo released this summer under Freedom of Information reveals that the drone programme authorizes the killing of civilians as inevitable collateral damage.

Indeed, flawed assumptions in the Pentagon’s classification systems for threat assessment mean that even “nonviolent political activists” might be conflated with potential ‘extremists’, who “support political violence” and thus pose a threat to US interests.

 

Ref: How the Pentagon’s Skynet Would Automate War – Motherboard

When Will We Let Go and Let Google Drive Us?

According to Templeton, regulators and policymakers are proving more open to the idea than expected—a number of US states have okayed early driverless cars for public experimentation, along with Singapore, India, Israel, and Japan—but earning the general public’s trust may be a more difficult battle to win.

However many fewer accidents driverless cars cause, we may still irrationally choose human drivers over them until the accident rate falls below some threshold. That is, we may hold robots to a much higher standard than we hold humans.

This higher standard comes at a price. “People don’t want to be killed by robots,” Templeton said. “They want to be killed by drunks.”

It’s an interesting point—assuming the accident rate is nonzero (and it will be), how many accidents are we willing to tolerate in driverless cars, and is that number significantly lower than the number we’re willing to tolerate with human drivers?

Let’s say robot cars are shown to reduce accidents by 20%. They could potentially prevent some 240,000 accidents (using Templeton’s global number). That’s a big deal. And yet if (fully) employed, they would still cause nearly a million accidents a year. Who would trust them? And at what point does that trust kick in? How close to zero accidents does it have to get?
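
As a quick back-of-the-envelope check: the roughly 1.2 million baseline isn’t stated above, but it is what the figures imply, since 240,000 is 20% of 1.2 million.

```python
# Back-of-the-envelope check of the figures above. The ~1.2 million annual
# baseline is inferred from the article's own numbers (240,000 is 20% of it).
baseline_accidents = 1_200_000   # implied global annual accident figure
reduction = 0.20                 # assumed improvement from robot cars

prevented = baseline_accidents * reduction
remaining = baseline_accidents - prevented
print(f"Prevented per year: {prevented:,.0f}")   # 240,000
print(f"Remaining per year: {remaining:,.0f}")   # 960,000, i.e. 'nearly a million'
```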

And it may turn out that the root of the problem lies not with the technology but us.

Ref: Summit Europe: When Will We Let Go and Let Google Drive Us? – SingularityHub

Artificial General Intelligence (AGI)

But how important is self-awareness, really, in creating an artificial mind on par with ours? According to quantum computing pioneer and Oxford physicist David Deutsch, not very.

In an excellent article in Aeon, Deutsch explores why artificial general intelligence (AGI) must be possible, but hasn’t yet been achieved. He calls it AGI to emphasize that he’s talking about a mind like ours, that can think and feel and reason about anything, as opposed to a complex computer program that’s very good at one or a few human-like tasks.

Simply put, his argument for why AGI is possible is this: Since our brains are made of matter, it must be possible, in principle at least, to recreate the functionality of our brains using another type of matter — specifically circuits.

As for Skynet’s self-awareness, Deutsch writes:

That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.
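
To make “self-awareness in the behavioural sense” concrete, here is a deliberately toy sketch of the kind of mirror-test behaviour Deutsch says could be programmed straightforwardly. The agent, the commands, and the simulated “mirror” are all invented for illustration; nothing below comes from his article.

```python
# Toy behavioural mirror test: the agent issues motor commands, watches a
# simulated "mirror" showing two bodies, and infers which body is itself by
# checking whose observed motion tracks its own commands.
import random

def mirror_reading(own_command, other_agent_command):
    """Simulated mirror: reports the observed movement of two bodies."""
    return {"body_A": own_command, "body_B": other_agent_command}

def infer_self(trials=20):
    matches = {"body_A": 0, "body_B": 0}
    for _ in range(trials):
        my_move = random.choice(["raise_arm", "lower_arm"])
        other_move = random.choice(["raise_arm", "lower_arm"])
        observed = mirror_reading(my_move, other_move)
        for body, movement in observed.items():
            if movement == my_move:
                matches[body] += 1
    # The body whose motion consistently tracks my commands is, behaviourally, "me".
    return max(matches, key=matches.get)

print("The agent concludes it is:", infer_self())
```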

[…]

If we really want to create artificial intelligence, we have to understand what it is we’re trying to create. Deutsch persuasively argues that, as long as we’re focused on self-awareness, we miss out on understanding how our brains actually work, stunting our ability to create artificially intelligent machines.

What matters, Deutsch argues, is “the ability to create new explanations,” to generate theories about the world and all its particulars. In contrast with this, the idea that self-awareness — let alone real intelligence — will spontaneously emerge from a complex computer network is not just science fiction. It’s pure fantasy.

 

Ref: Terminator is Wrong about AI Self-Awareness – BusinessInsider

IEEE Computer Society Report on the Future

At the foundation of the report is our understanding that by 2022, we will be well into a phase where intelligence becomes seamless and ubiquitous to those who can afford and use state-of-the-art information technology. At the heart of the “seamless intelligence” revolution is seamless networking, where the transition from one network device to another is transparent and uninterrupted. To achieve seamlessness and realize logical end-to-end connectivity, we’ll need communications to run independently on top of any form of physical networking, regardless of device or location.

[…]

In the future that we envision, multicore will allow us to recharge our smartphones just once a month. The Internet of Things will let us dress in clothes that monitor all our activities. Nanotechnology will enable lives to be saved by digestible cameras and machines made from particles one fifty-thousandth the size of a human hair. And amid the exponential growth of large data repositories will be increasing concerns about balancing convenience and privacy.

The potential for quantum computing is staggering since it’s constrained only by the laws of physics. Universal memory replacements for DRAM will cause a tectonic shift in architectures and software. 3D printing will create a revolution in fabrication, with many opportunities to produce designs that would have been prohibitively expensive.

We predict that machine learning will play an increasingly important role in our lives, whether by ranking search results, recommending products, or building better models of the environment. And medical robotics will lead to many lifesaving innovations, from autonomous delivery of hospital supplies to telemedicine and advanced prostheses.

 

Ref: Which Technologies Will Dominate in 2022? – Wired
Ref: IEEE CS 2022

 

Can a Robot Learn Right from Wrong?

There is no right answer to the trolley hypothetical — and even if there were, many roboticists believe it would be impractical to predict every scenario and program what the robot should do.

“It’s almost impossible to devise a complex system of ‘if, then, else’ rules that cover all possible situations,” says Matthias Scheutz, a computer science professor at Tufts University. “That’s why this is such a hard problem. You cannot just list all the circumstances and all the actions.”
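
A toy sketch of the brittleness Scheutz is pointing at: an explicit rule table has to enumerate every circumstance in advance, and anything it doesn’t anticipate falls through to a default. The scenario names below are invented for illustration.

```python
# Explicit if/then/else rules for a rescue robot: every situation must be
# anticipated by the designers, and unlisted cases have no answer at all.
def rule_based_decision(situation):
    if situation == "injured_soldier_on_route":
        return "stop_and_help"
    elif situation == "route_blocked":
        return "find_alternate_route"
    elif situation == "supplies_damaged":
        return "return_to_base"
    else:
        # Any circumstance the designers did not enumerate ends up here.
        return "no_rule_available"

print(rule_based_decision("injured_soldier_on_route"))     # stop_and_help
print(rule_based_decision("injured_civilian_and_soldier")) # no_rule_available
```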

Instead, Scheutz is trying to design robot brains that can reason through a moral decision the way a human would. His team, which recently received a $7.5 million grant from the Office of Naval Research (ONR), is planning an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot.

At the end of the five-year project, the scientists must present a demonstration of a robot making a moral decision. One example would be a robot medic that has been ordered to deliver emergency supplies to a hospital in order to save lives. On the way, it meets a soldier who has been badly injured. Should the robot abort the mission and help the soldier?

For Scheutz’s project, the decision the robot makes matters less than the fact that it can make a moral decision and give a coherent reason why — weighing relevant factors, coming to a decision, and explaining that decision after the fact. “The robots we are seeing out there are getting more and more complex, more and more sophisticated, and more and more autonomous,” he says. “It’s very important for us to get started on it. We definitely don’t want a future society where these robots are not sensitive to these moral conflicts.”
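
A minimal sketch, and not Scheutz’s actual system, of the behaviour the demonstration calls for: weigh morally relevant factors, pick an action, and produce a coherent explanation afterwards. The factors and weights here are invented assumptions.

```python
# Weigh factors, choose, and explain: the shape of the ONR demonstration,
# reduced to a simple additive scoring scheme.
def decide(options):
    scored = []
    for name, factors in options.items():
        # Simple additive weighting of morally relevant considerations.
        score = sum(weight * value for weight, value in factors)
        scored.append((score, name))
    score, choice = max(scored)
    explanation = (
        f"Chose '{choice}' because its weighted considerations "
        f"summed to {score:.2f}, higher than the alternatives."
    )
    return choice, explanation

options = {
    # (weight, value) pairs for: lives at risk, mission urgency, reversibility
    "continue_to_hospital": [(0.6, 0.9), (0.3, 1.0), (0.1, 0.5)],
    "stop_to_help_soldier": [(0.6, 0.7), (0.3, 0.4), (0.1, 0.9)],
}
choice, why = decide(options)
print(choice)
print(why)
```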

[…]

For the ONR grant, Arkin and his team proposed a new approach. Instead of using a rule-based system like the ethical governor or a “folk psychology” approach like Scheutz’s, Arkin’s team wants to study moral development in infants. Those lessons would be integrated into the Soar architecture, a popular cognitive system for robots that employs both problem-solving and overarching goals.

 

Ref: Can a Robot Learn Right from Wrong? – TheVerge

The Trick That Makes Google’s Self-Driving Cars Work

The key to Google’s success has been that these cars aren’t forced to process an entire scene from scratch. Instead, Google’s teams travel and map each road that the car will travel ahead of time. And these are not any old maps. They are not even the rich, road-logic-filled maps of consumer-grade Google Maps.

[…]

Google has created a virtual world out of the streets their engineers have driven. They pre-load the data for the route into the car’s memory before it sets off, so that as it drives, the software knows what to expect.

“Rather than having to figure out what the world looks like and what it means from scratch every time we turn on the software, we tell it what the world is expected to look like when it is empty,” Chatham continued. “And then the job of the software is to figure out how the world is different from that expectation. This makes the problem a lot simpler.”
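
In highly simplified form, the approach Chatham describes amounts to diffing what the sensors see now against a pre-loaded map of the empty world. The grid cells and objects below are invented for illustration.

```python
# The car carries a prior map of the empty route; at runtime its job reduces
# to explaining only the differences between observation and expectation.
prior_map = {          # what the world looks like when empty, pre-loaded
    (0, 0): "lane", (0, 1): "lane",
    (1, 0): "curb", (1, 1): "traffic_light",
}

live_scan = {          # what the sensors report right now
    (0, 0): "lane", (0, 1): "pedestrian",   # something new is here
    (1, 0): "curb", (1, 1): "traffic_light",
}

differences = {
    cell: observed
    for cell, observed in live_scan.items()
    if prior_map.get(cell) != observed
}
print(differences)   # {(0, 1): 'pedestrian'} -> only the novelty needs handling
```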

[…]

All this makes sense within the broader context of Google’s strategy. Google wants to make the physical world legible to robots, just as it had to make the web legible to robots (or spiders, as they were once known) so that they could find what people wanted in the pre-Google Internet of yore.

 

Ref: The Trick That Makes Google’s Self-Driving Cars Work – TheAtlantic

Robot Cars With Adjustable Ethics Settings

So why not let the user select the car’s “ethics setting”? The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible.
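
As a purely hypothetical illustration, an “ethics setting” could amount to nothing more than a user-selected cost function that the car minimizes when a crash is unavoidable. The policy names, outcome numbers, and maneuvers below are invented, not a real vendor API.

```python
# Each ethics setting is a different cost function over predicted crash outcomes.
from dataclasses import dataclass

@dataclass
class CrashOutcome:
    owner_harm: float      # expected harm to the car's owner (0-1)
    others_harm: float     # expected harm to everyone else (0-1)
    liability: float       # expected legal exposure for the owner (0-1)

ETHICS_SETTINGS = {
    "protect_owner":   lambda o: o.owner_harm,
    "minimize_harm":   lambda o: o.owner_harm + o.others_harm,
    "limit_liability": lambda o: o.liability,
}

def choose_maneuver(setting, outcomes):
    cost = ETHICS_SETTINGS[setting]
    return min(outcomes, key=lambda item: cost(item[1]))[0]

outcomes = [
    ("swerve_left",    CrashOutcome(owner_harm=0.8, others_harm=0.1, liability=0.2)),
    ("brake_straight", CrashOutcome(owner_harm=0.3, others_harm=0.7, liability=0.7)),
]
print(choose_maneuver("protect_owner", outcomes))   # brake_straight
print(choose_maneuver("minimize_harm", outcomes))   # swerve_left
```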

Plus, with an adjustable ethics dial set by the customer, the manufacturer presumably can’t be blamed for hard judgment calls, especially in no-win scenarios, right? In one survey, 44 percent of the respondents preferred to have a personalized ethics setting, while only 12 percent thought the manufacturer should predetermine the ethical standard. So why not give customers what they want?

[…]

So, an ethics setting is not a quick workaround to the difficult moral dilemma presented by robotic cars. Other possible solutions to consider include limiting manufacturer liability by law, similar to legal protections for vaccine makers, since immunizations are essential for a healthy society, too. Or if industry is unwilling or unable to develop ethics standards, regulatory agencies could step in to do the job—but industry should want to try first.

With robot cars, we’re trying to design for random events that previously had no design, and that takes us into surreal territory. Like Alice’s wonderland, we don’t know which way is up or down, right or wrong. But our technologies are powerful: they give us increasing omniscience and control to bring order to the chaos. When we introduce control to what used to be only instinctive or random—when we put God in the machine—we create new responsibility for ourselves to get it right.

 

Ref: Here’s a Terrible Idea: Robot Cars With Adjustable Ethics Settings – Wired

Cycorp

IBM’s Watson and Apple’s Siri stirred up a hunger and awareness throughout the U.S. for something like a Star Trek computer that really worked — an artificially intelligent system that could receive instructions in plain, spoken language, make the appropriate inferences, and carry out its instructions without needing to have millions and millions of subroutines hard-coded into it.

As we’ve established, that stuff is very hard. But Cycorp’s goal is to codify general human knowledge and common sense so that computers might make use of it.

Cycorp charged itself with figuring out the tens of millions of pieces of data we rely on as humans — the knowledge that helps us understand the world — and with representing them in a formal way that machines can use to reason. The company’s been working continuously since 1984, and next month marks its 30th anniversary.
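
In spirit, though vastly simplified, the approach looks something like the sketch below: knowledge is written down as formal assertions, and a reasoner chains them together to reach conclusions that were never stated explicitly. Cyc’s own language is far richer (modal logic, contexts, quantifiers); the facts and the single transitivity rule here are invented stand-ins.

```python
# Toy knowledge base plus naive forward chaining over a transitivity rule.
facts = {("is_a", "Fido", "Dog"), ("is_a", "Dog", "Mammal")}

rules = [
    # If X is_a Y and Y is_a Z, then X is_a Z.
    lambda fs: {
        ("is_a", x, z)
        for (_, x, y1) in fs for (_, y2, z) in fs
        if y1 == y2
    },
]

# Keep applying the rules until nothing new is derived.
changed = True
while changed:
    new = set().union(*(rule(facts) for rule in rules)) - facts
    changed = bool(new)
    facts |= new

print(("is_a", "Fido", "Mammal") in facts)   # True, though never asserted directly
```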

“Many of the people are still here from 30 years ago — Mary Shepherd and I started [Cycorp] in August of 1984 and we’re both still working on it,” Lenat said. “It’s the most important project one could work on, which is why this is what we’re doing. It will amplify human intelligence.”

It’s only a slight stretch to say Cycorp is building a brain out of software, and they’re doing it from scratch.

“Any time you look at any kind of real life piece of text or utterance that one human wrote or said to another human, it’s filled with analogies, modal logic, belief, expectation, fear, nested modals, lots of variables and quantifiers,” Lenat said. “Everyone else is looking for a free-lunch way to finesse that. Shallow chatbots show a veneer of intelligence or statistical learning from large amounts of data. Amazon and Netflix recommend books and movies very well without understanding in any way what they’re doing or why someone might like something.

“It’s the difference between someone who understands what they’re doing and someone going through the motions of performing something.”

 

Ref: The Most Ambitious Artificial Intelligence Project In The World Has Been Operating In Near Secrecy For 30 Years – BusinessInsider

MACHINES TEACH HUMANS HOW TO FEEL USING NEUROFEEDBACK

 

Yet, some people, often as the result of traumatic experiences or neglect, don’t experience these fundamental social feelings normally. Could a machine teach them these quintessentially human responses? A thought-provoking Brazilian study recently published in PLoS One suggests it could.

Researchers at the D’Or Institute for Research and Education outside Rio de Janeiro, Brazil, performed functional MRI scans on healthy young adults while asking them to focus on past experiences that epitomized feelings of non-sexual affection or pride of accomplishment. They set up a basic form of artificial intelligence to categorize, in real time, the fMRI readings as affection, pride or neither. They then showed the experiment group a graphic form of biofeedback to tell them whether their brain results were fully manifesting that feeling; the control group saw meaningless graphics.

The results demonstrated that the machine-learning algorithms were able to detect complex emotions that stem from neurons in various parts of the cortex and sub-cortex, and the participants were able to hone their feelings based on the feedback, learning on command to light up all of those brain regions.
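
Schematically, the closed loop the study describes looks something like the sketch below: classify each incoming fMRI reading as affection, pride, or neither, and turn the classifier’s confidence into visual feedback. The classifier, features, and feedback scale are placeholders rather than the study’s actual pipeline.

```python
# Train a simple classifier on labelled activation patterns, then score each
# new scan in "real time" and report how strongly the target emotion shows up.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)

# Pretend training data: voxel activation patterns labelled by emotion.
X_train = rng.normal(size=(90, 20))
y_train = np.repeat(["affection", "pride", "neither"], 30)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def feedback_for(scan):
    """Return the predicted label and a 0-1 'how fully manifested' score."""
    probs = clf.predict_proba([scan])[0]
    best = int(np.argmax(probs))
    return clf.classes_[best], float(probs[best])

# One simulated volume arriving from the scanner:
label, strength = feedback_for(rng.normal(size=20))
print(f"Show the participant: {label} at {strength:.0%} intensity")
```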

[…]

Here we must pause to note that the researchers themselves saw the likeness between the experiment’s artificial intelligence system and the “empathy box” in “Blade Runner” and the Philip K. Dick story on which it’s based. Yes, the system could potentially be used to subject a person’s inner feelings to interrogation by intrusive government bodies, which is really about as creepy as it gets. It could, to cite that other dystopian science fiction blockbuster, “Minority Report,” identify criminal tendencies and condemn people even before they commit crimes.

 

Ref: MACHINES TEACH HUMANS HOW TO FEEL USING NEUROFEEDBACK – SingularityHub

Algorithm Hunts Rare Genetic Disorders from Facial Features in Photos

 

Even before birth, concerned parents often fret over the possibility that their children may have underlying medical issues. Chief among these worries are rare genetic conditions that can drastically shape the course and reduce the quality of their lives. While progress is being made in genetic testing, diagnosis of many conditions occurs only after symptoms manifest, usually to the shock of the family.

A new algorithm, however, is attempting to identify specific syndromes much sooner by screening photos for characteristic facial features associated with specific genetic conditions, such as Down’s syndrome, Progeria, and Fragile X syndrome.
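
In rough outline, though not the Oxford team’s actual pipeline, the approach can be pictured as extracting a numeric feature vector from a face photo and comparing it against per-syndrome reference vectors. The feature extractor, reference profiles, and numbers below are all stand-ins.

```python
# Rank candidate syndromes by how close a photo's feature vector sits to each
# syndrome's (hypothetical) average feature vector.
import numpy as np

def extract_face_features(photo):
    """Placeholder: a real system would locate facial landmarks (eye spacing,
    nose shape, jaw contour, ...) and turn them into numbers."""
    return np.asarray(photo, dtype=float)

# Hypothetical per-syndrome average feature vectors learned from labelled photos.
syndrome_profiles = {
    "Down's syndrome": np.array([0.8, 0.2, 0.5]),
    "Progeria":        np.array([0.1, 0.9, 0.4]),
    "Fragile X":       np.array([0.4, 0.3, 0.9]),
}

def rank_syndromes(photo):
    features = extract_face_features(photo)
    distances = {
        name: float(np.linalg.norm(features - profile))
        for name, profile in syndrome_profiles.items()
    }
    return sorted(distances.items(), key=lambda item: item[1])

# A made-up "photo" already reduced to three features:
for name, distance in rank_syndromes([0.75, 0.25, 0.55]):
    print(f"{name}: distance {distance:.2f}")
```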

[…]

Nellåker added, “A doctor should in future, anywhere in the world, be able to take a smartphone picture of a patient and run the computer analysis to quickly find out which genetic disorder the person might have.”

 

Ref: Algorithm Hunts Rare Genetic Disorders from Facial Features in Photos – SingularityHub