As many who study technology and border issues know, drones have become the weapon of choice for crossing borders and carrying out undeclared war. These drones, and the technology they employ, are playing an increasing role in world politics, and in particular in the military-industrial complexes of the United States and, increasingly, the wider world.
As lobbyists work to fund more military robots, and as we stand on the cusp of autonomous drones that can algorithmically decide whether a person is an “enemy combatant” or not, this work critiques businesses such as iRobot (producer of military robots as well as the domestic Roomba vacuum cleaner) along with drone manufacturer General Atomics. The work questions and challenges the act of continuous war and its effect on populations, especially in targeted regions such as Pakistan, Somalia and Yemen, where the Bureau of Investigative Journalism (http://www.thebureauinvestigates.com/), based in the United Kingdom, reports that over a nine-year period, out of 372 flights, 400 civilians were confirmed dead, 94 of them children.
This work questions the notion of borders in a world where a few countries or businesses can lobby governments to purchase and use new technologies that fundamentally challenge national autonomy and borders. The work is itself an autonomous robot: it runs on intelligence programmed by the artist, who has hijacked the digital programming and logic of the Roomba vacuum cleaner, which shares many algorithmic similarities with military robots. It conflates the land of other countries with the terrain of your living room (home), and seeks to help others understand the relationships between domestic consumer goods and the military-industrial complexes, which increasingly manipulate, control and create foreign policy through military robotics and autonomous killing machines.
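The Roomba's actual firmware is proprietary, so the "algorithmic similarities" the artist exploits can only be gestured at. As an illustration, here is a minimal sketch of the random-bounce coverage behavior commonly attributed to early Roomba models: drive straight until an obstacle is hit, then turn to a new random heading. The function name, grid world, and parameters are all illustrative assumptions, not the robot's real code.

```python
import random

def random_bounce_coverage(width, height, steps, seed=0):
    """Simulate a bump-and-turn robot on a grid: drive straight until a
    wall is hit, then pick a new random heading. Returns the fraction of
    the floor visited, a rough measure of coverage."""
    rng = random.Random(seed)
    headings = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    x, y = width // 2, height // 2                 # start in the middle
    dx, dy = rng.choice(headings)
    visited = {(x, y)}
    for _ in range(steps):
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            x, y = nx, ny                          # clear ahead: keep driving
            visited.add((x, y))
        else:
            dx, dy = rng.choice(headings)          # "bump": turn and retry
    return len(visited) / (width * height)
```

With a fixed seed the trajectory is deterministic, so coverage grows monotonically with the step budget; the same blind wander-and-bump loop, pointed at different terrain, is the conflation the piece stages.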
Ref: Drone Eat Drone: American Scream, Ken Rinaldo – AntiAtlas
I keep going back to the way Jaron Lanier puts it in You Are Not a Gadget: “Life is turned into a database…based on [a] philosophical mistake, which is the belief that computers can presently represent human thought or human relationships. These are things computers cannot currently do.” I hesitate to sum up such a deeply personal and important fact into a data point in a profile field. Zadie Smith was similarly inspired by Lanier’s words, and described how personhood as represented online is somehow lacking: “When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears.”
I have no illusions about what Facebook has figured out about me from my activity, pictures, likes, and posts. Friends have speculated about how algorithms might effectively predict hook-ups or dating patterns based on bursts of “Facebook stalking” activity (you know you are guilty of clicking through hundreds of tagged pictures of your latest crush). David Kirkpatrick uncovered that Facebook “could determine with about 33 percent accuracy who a user was going to be in a relationship with a week from now.” And based on extensive networks of gay friends, MIT’s Gaydar project claims to be able to out those who refrain from listing their sexual orientation on the network. When I first turned on Timeline, I discovered Facebook had correctly singled out that becoming friends with Nick was a significant event of 2007 (that’s when we met and first started dating, and, appropriately enough, part of why he joined Facebook).
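The article does not detail how Gaydar works, but the underlying homophily idea, inferring an attribute a user withholds from the attributes their friends do disclose, can be sketched in a few lines. The function name and the simple majority-vote rule below are illustrative assumptions; the actual MIT project used more sophisticated statistical methods.

```python
from collections import Counter

def infer_attribute(friends_attrs):
    """Homophily-based inference: predict a user's hidden attribute as the
    most common value among friends who disclose theirs (None = withheld).
    Returns None if no friend discloses anything."""
    disclosed = [a for a in friends_attrs if a is not None]
    if not disclosed:
        return None
    return Counter(disclosed).most_common(1)[0][0]
```

That a profile field left blank can still be guessed from one's social graph is exactly the reduction Lanier and Smith describe: the network fills in the "data point" whether or not you offer it.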
Ref: I Didn’t Tell Facebook I’m Engaged, So Why Is It Asking About My Fiancé? – TheAtlantic
In an apparent move to feed its smart-hardware ambitions, Google has bought an artificial intelligence startup, DeepMind, for somewhere in the ballpark of $500 million. Considering all of the data Google sifts through, and the fact that it might be getting into robotics, it’s not completely absurd that they’d want some software to give a robotic helping hand. (Facebook apparently wanted the company, too, and they’ve already made moves to wrangle their own sprawling web of information.) But the other part of this story is a little stranger: the deal reportedly came under the condition that Google create an “ethics board” for the project.
Google has set up an ethics board to oversee its work in artificial intelligence. The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans. One of its founders warned that artificial intelligence is the ‘number 1 risk for this century,’ and believes it could play a part in human extinction.
The ethics board, revealed by the website The Information, is intended to ensure the projects are not abused.
‘Google has agreed to establish an ethics board to ensure the artificial intelligence technology isn’t abused, according to two people familiar with the deal,’ said The Information, which revealed the news. The DeepMind-Google ethics board is set to create a series of rules and restrictions over the use of the technology.
Ref: Google Buys AI Startup, Hires Ethics Board To Oversee It – PopSci
Ref: Google sets up artificial intelligence ethics board to curb the rise of the robots – DailyMail
Last week, a Christian college in Matthews, North Carolina unveiled something unprecedented: a humanoid robot whose sole mission is to explore the ethical and theological ramifications of robotics.
“When the time comes for including or incorporating humanoid robots into society, the prospect of a knee-jerk kind of reaction from the religious community is fairly likely, unless there’s some dialogue that starts happening, and we start examining the issue more closely,” says Kevin Staley, an associate professor of theology at SES. Staley pushed for the purchase of the bot, and plans to use it for courses at the college, as well as in presentations around the country. The specific reaction Staley is worried about is a more extreme version of the standard, secular creep factor associated with many robots.
That’s oversimplifying Staley’s plans for his NAO, though not by much. Despite his desire to steer both religious and secular communities away from an assumption of evil among humanoid bots, his current stance is one of extreme caution. “I think it would be a mistake to just, carte-blanche, say it’s like any other tech, and adopt it and deal with the consequences as they happen,” says Staley. The theological danger, he believes, is in substituting robots for people in social and emotional interactions—a more spiritual variation on concerns about offloading eldercare to robots, or developing machines that can act as friends or even lovers. “Ultimately, the end and purpose of human beings is to be in a restored, full and intended, right relationship with God,” says Staley. Engaging too closely with bots might be worse than simply wasting time and energy on an unfeeling machine. He believes it could weaken humanity’s connection with one another, and, by association, God.
Ref: Apocalypse NAO: Are Robots Threatening Your Immortal Soul? – PopSci
Ref: Seminary buys robot to study the ethics of technology – RNS