Category Archives: T – war

Hackers Can Disable a Sniper Rifle—Or Change Its Target

At the Black Hat hacker conference in two weeks, security researchers Runa Sandvik and Michael Auger plan to present the results of a year of work hacking a pair of $13,000 TrackingPoint self-aiming rifles. The married hacker couple have developed a set of techniques that could allow an attacker to compromise the rifle via its Wi-Fi connection and exploit vulnerabilities in its software. Their tricks can change variables in the scope’s calculations that make the rifle inexplicably miss its target, permanently disable the scope’s computer, or even prevent the gun from firing.

Ref: Hackers Can Disable a Sniper Rifle—Or Change Its Target – Wired

Variable World: Bay Model Tour & Salon

What’s also tricky here is the scale, finding a way to accurately grasp the forces at work within the model, and outside it of course. It’s easy to prototype and model something at 1:1, but this one is 1:1000, at least in the horizontal plane. So when you think about granularity within computer models that simulate climate, especially in the Bay Area, there are microclimates everywhere that are often collapsed or simplistically ignored. How do you create a realistic model that represents, and can project, what’s actually happening? It’s challenging to integrate that into the computer model. The digital model only has a certain resolution; you can throw situations at it and sometimes get the expected results that verify your projections, and maybe the knock-on effects seem right, but how it works in nature is often surprisingly divergent from the model. The state-of-the-art climate model is really just the one that best conforms to a limited set of validation data. And based on these models we make determinations of risk for insurance purposes; they influence policy making and ultimately come back to us as decision-making tools. I am fascinated by the scale of decision making we base on models and the latent potential for contingency.
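The point about resolution collapsing microclimates can be made concrete with a toy sketch (entirely illustrative, not related to any actual Bay Model or climate-model code): averaging a fine-grained temperature field into coarser model cells erases exactly the local extremes that matter on the ground.

```python
def coarsen(temps, factor):
    """Average fine-grid cell values into coarser blocks.

    Local extremes (e.g. a fog pocket) disappear into the block average,
    which is the kind of detail loss coarse climate models exhibit.
    """
    return [sum(temps[i:i + factor]) / factor
            for i in range(0, len(temps), factor)]

# A 1-D strip of hypothetical microclimates: a 14 °C fog pocket
# sitting between warmer zones.
fine = [22, 22, 14, 14, 23, 23]     # degrees C at high resolution
coarse = coarsen(fine, 3)           # model grid 3x coarser
# The fog pocket's 14 °C readings vanish into ~19-20 °C averages.
```

Any projection made from the coarse values would miss the fog pocket entirely, which is the gap between model and nature the excerpt describes.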

It also seems that variability is an important part of how authority gets built up in the model. I’m remembering the moment when we were standing there and the guide said, “This is a perfect world, it doesn’t change.” She emphasized that a few times.

Ref: Variable World: Bay Model Tour & Salon – AVANT

NSA’s Skynet

As The Intercept reports today, the NSA does have a program called Skynet. But unlike the autonomous, self-aware computerized defense system in Terminator that goes rogue and launches a nuclear attack that destroys most of humanity, this one is a surveillance program that uses phone metadata to track the location and call activities of suspected terrorists. A journalist for Al Jazeera reportedly became one of its targets after he was placed on a terrorist watch list.


Ahmad Muaffaq Zaidan, bureau chief for Al Jazeera’s Islamabad office, got tracked by Skynet after he was identified by US intelligence as a possible Al Qaeda member and assigned a watch list number. A Syrian national, Zaidan has scored a number of exclusive interviews with senior Al Qaeda leaders, including Osama bin Laden himself.

Skynet uses phone location and call metadata from bulk phone call records to detect suspicious patterns in the physical movements of suspects and their communication habits, according to a 2012 government presentation The Intercept obtained from Edward Snowden.

The presentation indicates that Skynet looks for terrorist connections based on questions such as “who has traveled from Peshawar to Faisalabad or Lahore (and back) in the past month? Who does the traveler call when he arrives?” It also looks for suspicious behaviors such as someone who engages in “excessive SIM or handset swapping” or receives “incoming calls only.”

The goal is to identify people who move around in a pattern similar to Al Qaeda couriers who are used to pass communication and intelligence between the group’s senior leaders. The program tracked Zaidan because his movements and interactions with Al Qaeda and Taliban leaders matched a suspicious pattern—which is, it turns out, very similar to the pattern of journalists meeting with sources.
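The selectors quoted from the presentation ("excessive SIM or handset swapping," "incoming calls only") amount to simple rules over call metadata. A minimal sketch of that kind of rule-based flagging, with entirely invented record fields and thresholds (nothing here is from the actual program), shows why a journalist's call pattern can trip the same flags as a courier's:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CallRecord:
    subscriber: str   # hypothetical subscriber ID
    direction: str    # "in" or "out"
    sim_id: str       # SIM seen on the handset for this call
    city: str         # cell-tower city at call time

def flag_subscriber(records, sim_swap_threshold=3):
    """Flag the metadata patterns the excerpt describes.

    Returns a list of triggered flags: distinct-SIM count at or above
    the (assumed) threshold, and an incoming-calls-only profile.
    """
    sims = {r.sim_id for r in records}
    directions = Counter(r.direction for r in records)
    flags = []
    if len(sims) >= sim_swap_threshold:
        flags.append("excessive SIM swapping")
    if directions["in"] > 0 and directions["out"] == 0:
        flags.append("incoming calls only")
    return flags
```

The false-positive problem is structural: a reporter who protects sources by rotating SIMs and letting contacts call in produces exactly the records this rule set flags.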

Ref: So, the NSA Has an Actual Skynet Program – Wired

The Cathedral of Computation

Here’s an exercise: The next time you hear someone talking about algorithms, replace the term with “God” and ask yourself if the meaning changes. Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion.

[…] Each generation, we reset a belief that we’ve reached the end of this chain of metaphors, even though history always proves us wrong precisely because there’s always another technology or trend offering a fresh metaphor. Indeed, an exceptionalism that favors the present is one of the ways that science has become theology.


The same could be said for data, the material algorithms operate upon. Data has become just as theologized as algorithms, especially “big data,” whose name is meant to elevate information to the level of celestial infinity. Today, conventional wisdom would suggest that mystical, ubiquitous sensors are collecting data by the terabyteful without our knowledge or intervention. Even if this is true to an extent, examples like Netflix’s altgenres show that data is created, not simply aggregated, and often by means of laborious, manual processes rather than anonymous vacuum-devices.

If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work.

Unfortunately, most computing systems don’t want to admit that they are burlesques. They want to be innovators, disruptors, world-changers, and such zeal requires sectarian blindness. The exception is games, which willingly admit that they are caricatures—and which suffer the consequences of this admission in the court of public opinion. Games know that they are faking it, which makes them less susceptible to theologization. SimCity isn’t an urban planning tool, it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like.

Ref: The Cathedral of Computation – TheAtlantic

2014: A year of progress (Stop Killer Robots)

Spurred on by the campaign’s non-governmental organizations (NGOs) as well as by think tanks and academics, 2014 saw notable diplomatic progress and increased awareness in capitals around the world of the challenges posed by autonomous warfare, but there were few signals that national policy is any closer to being developed. Only two nations have stated policy on autonomous weapons systems: a 2012 US Department of Defense directive permits the development and use of fully autonomous systems that deliver only non-lethal force, while the UK Ministry of Defence has stated that it has “no plans to replace skilled military personnel with fully autonomous systems.”

Five nations—Cuba, Ecuador, Egypt, Pakistan, and the Holy See—have expressed support for the objective of a preemptive ban on fully autonomous weapons, but have yet to execute that commitment in law or policy. A number of nations have indicated support for the principle of human control over the selection of targets and use of force, indicating they see a need to draw the line at some point.


The year opened with a resolution by the European Parliament on 27 February on the use of armed drones that included a call to “ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.” Sponsored by the Greens/European Free Alliance group of Members of the European Parliament with cross-party support, the resolution was adopted by a vote of 534–49.

The first informal CCW meeting of experts held at the United Nations (UN) in Geneva on 13-16 May attracted “record attendance” with the participation of 86 states, UN agencies, the ICRC, and the Campaign to Stop Killer Robots. The campaign’s delegation contributed actively throughout the meeting, making statements in plenary, issuing briefing papers and reports, hosting four consecutive side events, and briefing media throughout. The chair and vice-chair of the International Committee for Robot Arms Control (ICRAC) gave expert presentations at the meeting, which ICRAC had urged be convened since 2009.

The 2014 experts meeting reviewed technical, legal, ethical, and operational questions relating to the emerging technology of lethal autonomous weapons systems, but did not take any decisions. Ambassador Jean-Hugues Simon-Michel of France provided a report of the meeting in his capacity as chair that summarized the main areas of interest and recommended further talks in 2015.


The report notes how experts and delegations described the potential for autonomous weapons systems to be “game changers” in military affairs, but observed there appeared to be little military interest in deploying fully autonomous weapons systems because of the need to retain human control and concerns over operational risks including vulnerability to cyber attacks, lack of predictability, difficulties of adapting to a complex environment, and challenges of interoperability. Delegates also considered proliferation and the potential impact of autonomous weapons on international peace and security.

Delegates considered the impact of the development of autonomous weapons systems on human dignity, highlighting the devolution of life and death decisions to a machine as a key ethical concern. Some asked if a machine could acquire capacities of moral reasoning and human judgment, which is the basis for respect of international humanitarian law principles, and challenged the capacity of a machine to respond to a moral dilemma.

There was acknowledgment that international humanitarian and human rights law applies to all new weapons but views were divided as to whether the weapons would be illegal under existing law or permitted in certain circumstances. The imperative of maintaining meaningful human control over targeting and attack decisions emerged as the primary point of common ground at the meeting.


Campaign representatives participated in discussions on autonomous weapons in 2014 convened by the Geneva Academy of International Humanitarian Law and Human Rights, which issued a briefing paper in November on legal dimensions of the issue, as well as at the Washington, DC-based Center for New American Security, which began a project on “ethical autonomy” in 2014. Campaigners spoke at numerous academic events this year, including at Oxford University, University of California-Santa Barbara, and University of Pennsylvania Law School. They also presented at events convened by think tanks often in cooperation with government, such as the EU Non-Proliferation Consortium in Brussels and the UN-South Korea non-proliferation forum on Jeju Island. The campaign features in a Stockholm International Peace Research Institute (SIPRI) chapter on the “governance of autonomous weapons” included for the first time in the 2014 Yearbook edition.

Ref: 2014: A year of progress – Stop Killer Robots

How the Pentagon’s Skynet Would Automate War

Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years. By then, they believe, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.

Last week, US defense secretary Chuck Hagel announced the ‘Defense Innovation Initiative’—a sweeping plan to identify and develop cutting edge technology breakthroughs “over the next three to five years and beyond” to maintain global US “military-technological superiority.” Areas to be covered by the DoD programme include robotics, autonomous systems, miniaturization, Big Data and advanced manufacturing, including 3D printing.


A key area emphasized by the Wells and Kadtke study is improving the US intelligence community’s ability to automatically analyze vast data sets without the need for human involvement.

Pointing out that “sensitive personal information” can now be easily mined from online sources and social media, they call for policies on “Personally Identifiable Information (PII) to determine the Department’s ability to make use of information from social media in domestic contingencies”—in other words, to determine under what conditions the Pentagon can use private information on American citizens obtained via data-mining of Facebook, Twitter, LinkedIn, Flickr and so on.

Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through “monitoring of individuals and populations using sensors, wearable devices, and IoT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.” The Pentagon can build capacity for this “in partnership with large private sector providers, where the most innovative solutions are currently developing.”


Within this context of Big Data and cloud robotics, Kadtke and Wells enthuse that as unmanned robotic systems become more intelligent, the cheap manufacture of “armies of Kill Bots that can autonomously wage war” will soon be a reality. Robots could also become embedded in civilian life to perform “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”


Perhaps the most disturbing dimension among the NDU study’s insights is the prospect that within the next decade, artificial intelligence (AI) research could spawn “strong AI”—or at least a form of “weak AI” that approximates some features of the former.

Strong AI should be able to simulate a wide range of human cognition, and include traits like consciousness, sentience, sapience, or self-awareness. Many now believe, Kadtke and Wells observe, that “strong AI may be achieved sometime in the 2020s.”


Nearly half the people on the US government’s terrorism watch list of “known or suspected terrorists” have “no recognized terrorist group affiliation,” and more than half the victims of CIA drone-strikes over a single year were “assessed” as “Afghan, Pakistani and unknown extremists”—among others who were merely “suspected, associated with, or who probably” belonged to unidentified militant groups. Multiple studies show that a substantive number of drone strike victims are civilians—and a secret Obama administration memo released this summer under Freedom of Information reveals that the drone programme authorizes the killing of civilians as inevitable collateral damage.

Indeed, flawed assumptions in the Pentagon’s classification systems for threat assessment mean that even “nonviolent political activists” might be conflated with potential ‘extremists’, who “support political violence” and thus pose a threat to US interests.


Ref: How the Pentagon’s Skynet Would Automate War – Motherboard