How the Pentagon’s Skynet Would Automate War

Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years. By then, it believes, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.

Last week, US defense secretary Chuck Hagel announced the ‘Defense Innovation Initiative’—a sweeping plan to identify and develop cutting-edge technology breakthroughs “over the next three to five years and beyond” to maintain global US “military-technological superiority.” Areas to be covered by the DoD programme include robotics, autonomous systems, miniaturization, Big Data and advanced manufacturing, including 3D printing.

[…]

A key area emphasized by the Wells and Kadtke study is improving the US intelligence community’s ability to automatically analyze vast data sets without the need for human involvement.

Pointing out that “sensitive personal information” can now be easily mined from online sources and social media, they call for policies on “Personally Identifiable Information (PII) to determine the Department’s ability to make use of information from social media in domestic contingencies”—in other words, to determine under what conditions the Pentagon can use private information on American citizens obtained via data-mining of Facebook, Twitter, LinkedIn, Flickr and so on.

Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through “monitoring of individuals and populations using sensors, wearable devices, and IoT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.” The Pentagon can build capacity for this “in partnership with large private sector providers, where the most innovative solutions are currently developing.”

[…]

Within this context of Big Data and cloud robotics, Kadtke and Wells enthuse that as unmanned robotic systems become more intelligent, the cheap manufacture of “armies of Kill Bots that can autonomously wage war” will soon be a reality. Robots could also become embedded in civilian life to perform “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

[…]

Perhaps the most disturbing dimension among the NDU study’s insights is the prospect that within the next decade, artificial intelligence (AI) research could spawn “strong AI”—or at least a form of “weak AI” that approximates some features of the former.

Strong AI would be able to simulate a wide range of human cognition, and would include traits like consciousness, sentience, sapience, or self-awareness. Many now believe, Kadtke and Wells observe, that “strong AI may be achieved sometime in the 2020s.”

[…]

Nearly half the people on the US government’s terrorism watch list of “known or suspected terrorists” have “no recognized terrorist group affiliation,” and more than half the victims of CIA drone strikes over a single year were “assessed” as “Afghan, Pakistani and unknown extremists”—among others who were merely “suspected, associated with, or who probably” belonged to unidentified militant groups. Multiple studies show that a substantial number of drone strike victims are civilians—and a secret Obama administration memo released this summer under Freedom of Information reveals that the drone programme authorizes the killing of civilians as inevitable collateral damage.

Indeed, flawed assumptions in the Pentagon’s classification systems for threat assessment mean that even “nonviolent political activists” might be conflated with potential ‘extremists’, who “support political violence” and thus pose a threat to US interests.


Ref: How the Pentagon’s Skynet Would Automate War – Motherboard