Category Archives: T – war

Lie Detector at the U.S. Border

 

Since September 11, 2001, federal agencies have spent millions of dollars on research designed to detect deceptive behavior in travelers passing through US airports and border crossings in the hope of catching terrorists. Security personnel have been trained—and technology has been devised—to identify, as an air transport trade association representative once put it, “bad people and not just bad objects.” Yet for all this investment and the decades of research that preceded it, researchers continue to struggle with a profound scientific question: How can you tell if someone is lying?

That problem is so complex that no one, including the engineers and psychologists developing machines to do it, can be certain if any technology will work. “It fits with our notion of justice, somehow, that liars can’t really get away with it,” says Maria Hartwig, a social psychologist at John Jay College of Criminal Justice who cowrote a recent report on deceit detection at airports and border crossings. The problem is, as Hartwig explains it, that all the science says people are really good at lying, and it’s incredibly hard to tell when we’re doing it.


Ref: Deception Is Futile When Big Brother’s Lie Detector Turns Its Eyes on You – Wired

Algorithm Learns How to Revive Lost Languages

 

Like living things, languages evolve. Words mutate, sounds shift, and new tongues arise from old.

Charting this landscape is usually done through manual research. But now a computer has been taught to reconstruct lost languages using the sounds uttered by those who speak their modern successors.

The system was able to suggest how ancestor languages might have sounded and also identify which sounds were most likely to change. When the team compared the results with work done by human specialists, they found that over 85 per cent of suggestions were within a single character of the actual words.
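The claim that suggestions fell "within a single character" of the experts' reconstructions is, in effect, an edit-distance test. A minimal sketch in Python (the word forms below are invented for illustration, not taken from the paper):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    or substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))  # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # delete ca
                            curr[j - 1] + 1,           # insert cb
                            prev[j - 1] + (ca != cb))) # substitute
        prev = curr
    return prev[-1]

# "Within a single character" means an edit distance of at most 1.
reconstructed = "pater"   # machine's suggestion (illustrative)
attested = "patér"        # hand-reconstructed form (illustrative)
assert levenshtein(reconstructed, attested) <= 1
```

Scoring a whole word list this way gives the proportion of suggestions within one edit of the reference forms, which is the 85 per cent figure reported above.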

 

Ref: Algorithm learns how to revive lost languages – NewScientist
Ref: Automated reconstruction of ancient languages using probabilistic models of sound change – PNAS

U.S. Cities Relying on Precog Software to Predict Murder

 

New crime-prediction software used in Maryland and Pennsylvania, and soon to be rolled out in the nation’s capital too, promises to reduce the homicide rate by predicting which prison parolees are likely to commit murder, so that they can be placed under more stringent supervision.

The software aims to replace the judgments parole officers already make based on a parolee’s criminal record and is currently being used in Baltimore and Philadelphia.

Richard Berk, a criminologist at the University of Pennsylvania who developed the algorithm, claims it will reduce the murder rate and other crimes and could help courts set bail amounts as well as sentencing in the future.

“When a person goes on probation or parole they are supervised by an officer. The question that officer has to answer is ‘what level of supervision do you provide?’” Berk told ABC News. The software simply replaces that kind of ad hoc decision-making that officers already do, he says.

To create the software, researchers assembled a dataset of more than 60,000 crimes, including homicides, then wrote an algorithm to identify which offenders were most likely to commit murder when paroled or put on probation. Berk claims the software can identify eight future murderers out of 100 parolees, instead of the one in 100 found by conventional screening.

The software parses about two dozen variables, including criminal record and geographic location. The type of crime and the age at which it was committed, however, turned out to be two of the most predictive variables.
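Berk's model is a machine-learned one over roughly two dozen variables, but the outsized role of crime type and age can be illustrated with a toy rule of thumb (the categories and thresholds here are invented for illustration, not Berk's):

```python
def risk_flag(crime_type: str, age_at_crime: int) -> str:
    """Flag a parolee for a supervision level using only the two
    variables the article calls most predictive. Illustrative only."""
    violent = crime_type in {"homicide", "armed robbery", "assault"}
    if violent and age_at_crime < 21:
        return "high"       # early violent offending: closest supervision
    if violent or age_at_crime < 18:
        return "elevated"
    return "standard"

assert risk_flag("armed robbery", 19) == "high"
assert risk_flag("burglary", 35) == "standard"
```

A real system would learn such thresholds from the 60,000-crime dataset rather than hard-coding them.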

Shawn Bushway, a professor of criminal justice at the State University of New York at Albany, told ABC that advocates for inmates’ rights might view the use of an algorithm to increase supervision of a parolee as a form of harassment, especially when the software produces the inevitable false positives. He said it could result in “punishing people who, most likely, will not commit a crime in the future.”

 

Ref: U.S. Cities Relying on Precog Software to Predict Murder – Wired

Machines à Gouverner

In 1948 a Dominican friar, Père Dubarle, wrote a review of Norbert Wiener’s book Cybernetics. In it, he introduced a very interesting phrase, “machines à gouverner”. Père Dubarle warns us against the risks of placing blind faith in new sciences (machines and computers, in this case), because human processes can’t be predicted with “cold mathematics”.

One of the most fascinating prospects thus opened is that of the rational conduct of human affairs, and in particular of those which interest communities and seem to present a certain statistical regularity, such as the human phenomena of the development of opinion. Can’t one imagine a machine to collect this or that type of information, as for example information on production and the market; and then to determine as a function of the average psychology of human beings, and of the quantities which it is possible to measure in a determined instance, what the most probable development of the situation might be? Can’t one even conceive a State apparatus covering all systems of political decisions, either under a regime of many states distributed over the earth, or under the apparently much more simple regime of a human government of this planet? At present nothing prevents our thinking of this. We may dream of the time when the machine à gouverner may come to supply – whether for good or evil – the present obvious inadequacy of the brain when the latter is concerned with the customary machinery of politics.

At all events, human realities do not admit a sharp and certain determination, as numerical data of computation do. They only admit the determination of their probable values. A machine to treat these processes, and the problems which they put, must therefore undertake the sort of probabilistic, rather than deterministic thought, such as is exhibited for example in modern computing machines. This makes its task more complicated, but does not render it impossible. The prediction machine which determines the efficacy of anti-aircraft fire is an example of this. Theoretically, time prediction is not impossible; neither is the determination of the most favorable decision, at least within certain limits. The possibility of playing machines such as the chess-playing machine is considered to establish this. For the human processes which constitute the object of government may be assimilated to games in the sense in which von Neumann has studied them mathematically. Even though these games have an incomplete set of rules, there are other games with a very large number of players, where the data are extremely complex. The machines à gouverner will define the State as the best-informed player at each particular level; and the State is the only supreme co-ordinator of all partial decisions. These are enormous privileges; if they are acquired scientifically, they will permit the State under all circumstances to beat every player of a human game other than itself by offering this dilemma: either immediate ruin, or planned co-operation. This will be the consequences of the game itself without outside violence. The lovers of the best of worlds have something indeed to dream of!

Despite all this, and perhaps fortunately, the machine à gouverner is not ready for a very near tomorrow. For outside of the very serious problems which the volume of information to be collected and to be treated rapidly still put, the problems of the stability of prediction remain beyond what we can seriously dream of controlling. For human processes are assimilable to games with incompletely defined rules, and above all, with the rules themselves functions of the time. The variation of the rules depends both on the effective detail of the situations engendered by the game itself, and on the system of psychological reactions of the players in the face of the results obtained at each instant.

It may even be more rapid than these. A very good example of this seems to be given by what happened to the Gallup Poll in the 1948 election. All this not only tends to complicate the degree of the factors which influence prediction, but perhaps to make radically sterile the mechanical manipulation of human situations. As far as one can judge, only two conditions here can guarantee stabilization in the mathematical sense of the term. These are, on the one hand, a sufficient ignorance on the part of the mass of the players exploited by a skilled player, who moreover may plan a method of paralyzing the consciousness of the masses; or, on the other, sufficient good-will to allow one, for the sake of the stability of the game, to refer his decisions to one or a few players of the game who have arbitrary privileges. This is a hard lesson of cold mathematics, but it throws a certain light on the adventure of our century: hesitation between an indefinite turbulence of human affairs and the rise of a prodigious Leviathan. In comparison with this, Hobbes’ Leviathan was nothing but a pleasant joke. We are running the risk nowadays of a great World State, where deliberate and conscious primitive injustice may be the only possible condition for the statistical happiness of the masses: a world worse than hell for every clear mind. Perhaps it would not be a bad idea for the teams at present creating cybernetics to add to their cadre of technicians, who have come from all horizons of science, some serious anthropologists, and perhaps a philosopher who has some curiosity as to world matters.

 

Ref: L’avènement de l’informatique et de la cybernétique. Chronique d’une rupture annoncée – Revue Futuribles

Software Predicts Criminal Behavior


New crime prediction software being rolled out in the nation’s capital should reduce not only the murder rate, but the rate of many other crimes as well.
Developed by Richard Berk, a professor at the University of Pennsylvania, the software is already used in Baltimore and Philadelphia to predict which individuals on probation or parole are most likely to murder and to be murdered.
In his latest version, the one being implemented in D.C., Berk goes even further, identifying the individuals most likely to commit crimes other than murder.
If the software proves successful, it could influence sentencing recommendations and bail amounts.

[…]

Beginning several years ago, the researchers assembled a dataset of more than 60,000 various crimes, including homicides. Using an algorithm they developed, they found a subset of people much more likely to commit homicide when paroled or probated. Instead of finding one murderer in 100, the UPenn researchers could identify eight future murderers out of 100.

 

Ref: Software Predicts Criminal Behavior – ABC News (via DarkGovernment)

Death by Algorithm: Which Terrorist Should Disappear First?


A team at West Point has created an algorithm, called GREEDY_FRAGILE, that maps the degree of connection between members of a terrorist network. The aim of the algorithm is to show who should be killed in order to fragment and weaken the network.
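The cited West Point paper gives the real formulation; the general idea of greedy fragmentation, repeatedly removing whichever node's loss most shrinks the largest connected cell, can be sketched as follows (toy network and function names are assumptions, not from the paper):

```python
def largest_component(nodes, edges):
    """Size of the biggest connected component, via union-find."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    sizes = {}
    for n in nodes:
        r = find(n)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) if sizes else 0

def greedy_fragment(nodes, edges, k):
    """Greedily pick k nodes to remove, each time choosing the one
    whose removal most shrinks the largest remaining component."""
    nodes, removed = set(nodes), []
    for _ in range(k):
        best = min(nodes, key=lambda v: largest_component(
            nodes - {v}, [e for e in edges if v not in e]))
        removed.append(best)
        nodes.discard(best)
        edges = [e for e in edges if best not in e]
    return removed

# Toy cell structure: node "b" bridges two smaller cells.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("b", "e"), ("e", "f")]
print(greedy_fragment({"a", "b", "c", "d", "e", "f"}, edges, 1))  # → ['b']
```

The actual objective GREEDY_FRAGILE optimises differs (see the cited paper); this sketch only shows the greedy remove-and-re-evaluate pattern such algorithms share.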

But could human lives be treated as just another mathematical problem?

The problem is that a math model, like a metaphor, is a simplification. This type of modeling came out of the sciences, where the behavior of particles in a fluid, for example, is predictable according to the laws of physics.

In so many Big Data applications, a math model attaches a crisp number to human behavior, interests and preferences. The peril of that approach, as in finance, was the subject of a recent book by Emanuel Derman, a former quant at Goldman Sachs and now a professor at Columbia University. Its title is “Models. Behaving. Badly.”

 

Ref: Death by Algorithm: West Point Code Shows Which Terrorists Should Disappear First – Wired
Ref: Shaping Operations to Attack Robust Terror Networks – United States Military Academy
Ref: Sure, Big Data Is Great. But So Is Intuition. – The New York Times


Game of Drones

And although Mr Karem was not involved in the decision to arm the Predator, he has no objection to the use of drones as weapons platforms. “At least people are now working on how to kill the minimum number of people on the other side,” he says. “The missiles on the Predator are way too capable. Weapons for UAVs are going to get smaller and smaller to avoid collateral damage.” – Abraham Karem, “The Dronefather”

 

Ref: The Dronefather – The Economist

Automated Blackhawk

 

Importantly, the RASCAL was operating on the fly. “No prior knowledge of the terrain was used,” Matthew Whalley, the Army’s Autonomous Rotorcraft Project lead, told Dailytech.

The RASCAL is just the latest for a military that is serious about removing its soldiers from harm’s way and letting robots do the dirty work. Already 30 percent of all US military aircraft are drones. And the Navy’s X-47B robotic fighter is well on course to become the first autonomous air vehicle to take off and land on an aircraft carrier. Just days ago it completed its first catapult takeoff (from the ground).

 

Ref: Automated Blackhawk Helicopter Completes First Flight Test – SingularityHub

A Human Will Always Decide When a Robot Kills You

Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.

Here’s what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage.

 

Ref: Pentagon: A Human Will Always Decide When a Robot Kills You – Wired
Ref: Autonomy in Weapon Systems – Department of Defense (via Cryptome)