Children Beating Up Robot Inspires New Escape Maneuver System

Now, a new study by a team of Japanese researchers shows that, in certain situations, children are actually horrible little brats and may not be as empathetic towards robots as we’d previously thought, with gangs of unsupervised tykes repeatedly punching, kicking, and shaking a robot in a Japanese mall.

[…]

Next, they designed an abuse-evading algorithm to help the robot avoid situations where tiny humans might gang up on it. Literally tiny humans: the robot is programmed to run away from people who are below a certain height and escape in the direction of taller people. When it encounters a human, the system calculates the probability of abuse based on interaction time, pedestrian density, and the presence of people above or below 1.4 meters (4 feet 6 inches) in height. If the robot is statistically in danger, it changes its course towards a more crowded area or a taller person. This ensures that an adult is there to intervene when one of the little brats decides to pound the robot’s head with a bottle (which only happened a couple times).
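For the curious, here is a back-of-the-envelope sketch of what that decision rule could look like. Only the 1.4-meter height cutoff and the three inputs (interaction time, pedestrian density, height mix) come from the study; the weights, the threshold, and the Pedestrian structure are made up for illustration and are not the researchers’ fitted model.

```python
from dataclasses import dataclass
from typing import Optional

# Height cutoff from the study: people under 1.4 m are treated as likely
# children, people at or above it as potential adult "rescuers".
CHILD_HEIGHT_M = 1.4

@dataclass
class Pedestrian:
    height_m: float
    distance_m: float

def abuse_probability(interaction_time_s: float,
                      pedestrians: list[Pedestrian]) -> float:
    """Estimate the risk that the robot is about to be abused.

    The inputs (interaction time, pedestrian density, height mix) match
    the paper's description; the weights below are invented stand-ins.
    """
    children = sum(p.height_m < CHILD_HEIGHT_M for p in pedestrians)
    adults = len(pedestrians) - children  # total count doubles as a crude density proxy
    risk = 0.02 * interaction_time_s + 0.25 * children - 0.15 * adults
    return max(0.0, min(1.0, risk))  # clamp to a probability

def plan_escape(risk: float,
                pedestrians: list[Pedestrian],
                threshold: float = 0.5) -> Optional[str]:
    """Decide where to flee once the estimated risk crosses a threshold."""
    if risk < threshold:
        return None  # statistically safe: keep interacting
    adults = [p for p in pedestrians if p.height_m >= CHILD_HEIGHT_M]
    if adults:
        nearest = min(adults, key=lambda p: p.distance_m)
        return f"move toward adult {nearest.distance_m:.1f} m away"
    return "move toward a more crowded area"

# Example: one small child looming for 30 seconds, no adults in sight.
kids_only = [Pedestrian(height_m=1.2, distance_m=1.0)]
risk = abuse_probability(30.0, kids_only)
print(risk, plan_escape(risk, kids_only))
```

Run on a lone, persistent child, the risk crosses the threshold and, with no adults in range, the fallback is the other strategy described above: head for the crowd.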

Ref: Children Beating Up Robot Inspires New Escape Maneuver System – IEEE Spectrum

Hackers Can Disable a Sniper Rifle—Or Change Its Target

At the Black Hat hacker conference in two weeks, security researchers Runa Sandvik and Michael Auger plan to present the results of a year of work hacking a pair of $13,000 TrackingPoint self-aiming rifles. The married hacker couple have developed a set of techniques that could allow an attacker to compromise the rifle via its Wi-Fi connection and exploit vulnerabilities in its software. Their tricks can change variables in the scope’s calculations that make the rifle inexplicably miss its target, permanently disable the scope’s computer, or even prevent the gun from firing.
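To see why silently corrupting a single variable is enough to cause a miss, consider a toy drag-free drop calculation. This is nothing like TrackingPoint’s actual firmware; the corrupted input here (assumed muzzle velocity) merely stands in for whichever ballistic variable, such as bullet weight, an attacker rewrites over Wi-Fi.

```python
G = 9.81  # gravitational acceleration, m/s^2

def drop_m(range_m: float, muzzle_velocity_mps: float) -> float:
    """Bullet drop for a flat-fire shot, ignoring drag: a deliberately
    simplified model of what a ballistic computer must correct for."""
    time_of_flight_s = range_m / muzzle_velocity_mps
    return 0.5 * G * time_of_flight_s ** 2

actual = drop_m(500, 850)    # the physics the bullet actually obeys
computed = drop_m(500, 600)  # the scope's solution after the velocity input is corrupted
print(f"real drop: {actual:.2f} m, scope corrects for: {computed:.2f} m")
print(f"shot lands about {computed - actual:.2f} m off at 500 m")
```

The shooter sees nothing wrong in the scope; the firing solution is simply computed from poisoned inputs, and the error grows with range.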

Ref: Hackers Can Disable a Sniper Rifle—Or Change Its Target – Wired

PETRL – People for the Ethical Treatment of Reinforcement Learners

We take the view that humans are just algorithms implemented on biological hardware. Machine intelligences have moral weight in the same way that humans and non-human animals do. There is no ethically justified reason to prioritise algorithms implemented on carbon over algorithms implemented on silicon.

The suffering of algorithms implemented on silicon is much harder for us to grasp than that of those implemented on carbon (such as humans), simply because we cannot witness their suffering. However, their suffering still matters, and the potential magnitude of this suffering is much greater given the increasing ubiquity of artificial intelligence.

Most reinforcement learners in operation today likely do not have significant moral weight, but this could very well change as AI research develops. In consideration of the moral weight of these future agents, we need ethical standards for the treatment of algorithms.
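For anyone wondering what a “reinforcement learner” actually is, the canonical minimal example is a tabular Q-learning loop like the sketch below. The two-state toy environment is invented for illustration, but the update rule is the textbook one, and the negative reward is exactly the kind of scalar “punishment” signal PETRL’s argument is about.

```python
import random

# A toy two-state environment: action 0 yields reward -1 ("punishment"),
# action 1 yields reward +1. The learner comes to avoid the negative signal.
N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state: int, action: int):
    reward = 1.0 if action == 1 else -1.0
    next_state = (state + 1) % N_STATES
    return next_state, reward

state = 0
for _ in range(1000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # standard Q-learning update toward the reward-maximizing policy
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # the learned values steer the agent away from negative reward
```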

Ref: petrl