Now The Military Is Going To Build Robots That Have Morals

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One. “For example, Google’s self-driving cars are legal and in use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”

“Even if such systems aren’t armed, they may still be forced to make moral decisions,” Bello said. For instance, in a disaster scenario, a robot may be forced to choose whom to evacuate or treat first, a situation that calls for some form of ethical or moral reasoning. “While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can’t take the idea of in-theater robots completely off the table,” Bello said.
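To make the triage scenario concrete, here is a minimal sketch of the kind of scoring rule such a robot might apply. Every criterion, weight, and value below is a hypothetical stand-in invented for illustration; nothing here reflects what ONR or the funded researchers have actually proposed.

```python
from dataclasses import dataclass

@dataclass
class Casualty:
    injury_severity: float   # 0.0 (unhurt) to 1.0 (critical); hypothetical scale
    survival_chance: float   # estimated probability of surviving if treated now
    rescue_minutes: float    # time the robot would need to reach and move them

def triage_priority(c: Casualty) -> float:
    """Toy rule: prioritize the badly hurt who are likely to survive treatment,
    discounted by how long they take to reach. A real system would need far
    richer, and far more contested, moral criteria."""
    return c.injury_severity * c.survival_chance / (1.0 + c.rescue_minutes)

casualties = [
    Casualty(injury_severity=0.9, survival_chance=0.4, rescue_minutes=2.0),
    Casualty(injury_severity=0.6, survival_chance=0.9, rescue_minutes=1.0),
]

# Evacuate in descending priority order.
for c in sorted(casualties, key=triage_priority, reverse=True):
    print(round(triage_priority(c), 3), c)
```

Even this toy rule smuggles in value judgments: is it right to discount victims who are harder to reach? Making such judgments explicit and defensible is precisely what the grant is meant to explore.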

Some members of the artificial intelligence, or AI, research and machine ethics communities were quick to applaud the grant. “With drones, missile defenses, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions,” AI researcher Steven Omohundro told Defense One. “Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define ‘the rules of war’ and this technology is likely to increase the stakes for that.”

 

Ref: Now The Military Is Going To Build Robots That Have Morals – Defense One

A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors

A Hong Kong VC fund has just appointed an algorithm to its board.

Deep Knowledge Ventures, a firm that focuses on age-related disease drugs and regenerative medicine projects, says the program, called VITAL, can make investment recommendations about life sciences firms by poring over large amounts of data.

Just like the other members of the board, the algorithm gets to vote on whether the firm invests in a specific company. The program will be the sixth member of DKV’s board.
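Mechanically, counting an algorithm’s vote is trivial; the substance lies in how its recommendation is produced. Here is a minimal sketch of the voting arithmetic, assuming a simple-majority board; the deal attributes and the stand-in recommendation logic are invented for illustration, since VITAL’s actual model has not been published.

```python
def vital_recommendation(deal: dict) -> bool:
    """Stand-in for VITAL's analysis. The real program reportedly scores
    life-sciences deals by poring over large datasets; this placeholder
    just checks two invented signals."""
    return deal["clinical_stage"] >= 2 and deal["cash_runway_months"] >= 18

def board_decides(human_votes: list, deal: dict) -> bool:
    # The algorithm's vote is tallied exactly like any other member's.
    votes = human_votes + [vital_recommendation(deal)]
    return sum(votes) > len(votes) / 2   # simple majority carries

deal = {"clinical_stage": 2, "cash_runway_months": 24}
print(board_decides([True, True, False, False, True], deal))  # 4 of 6 in favor -> True
```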

 

Ref: A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors – Business Insider

In Hiring, Algorithms Beat Instinct

You know your company inside out. You know the requirements of the position you need to fill. And now that HR has finished its interviews and simulations, you know the applicants, too—maybe even better than their friends do. Your wise and experienced brain is ready to synthesize the data and choose the best candidate for the job.

Instead, you should step back from the process. If you simply crunch the applicants’ data and apply the resulting analysis to the job criteria, you’ll probably end up with a better hire.

Humans are very good at specifying what’s needed for a position and eliciting information from candidates—but they’re very bad at weighing the results. Our analysis of 17 studies of applicant evaluations shows that a simple equation outperforms human decisions by at least 25%. The effect holds in any situation with a large number of candidates, regardless of whether the job is on the front line, in middle management, or (yes) in the C-suite.
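The “simple equation” in this literature is typically just a linear composite: score each applicant on a fixed set of predictors and combine the scores with fixed weights. A minimal sketch follows; the predictors, weights, and names are illustrative assumptions, not the ones used in the 17 studies.

```python
# Mechanical hiring rule: combine standardized predictor scores with fixed
# weights instead of holistic judgment. All values below are invented.
WEIGHTS = {"work_sample": 0.5, "cognitive_test": 0.3, "structured_interview": 0.2}

CANDIDATES = {
    "alice": {"work_sample": 0.8, "cognitive_test": 0.6, "structured_interview": 0.7},
    "bob":   {"work_sample": 0.5, "cognitive_test": 0.9, "structured_interview": 0.6},
}

def composite(scores: dict) -> float:
    return sum(WEIGHTS[k] * v for k, v in scores.items())

best = max(CANDIDATES, key=lambda name: composite(CANDIDATES[name]))
print(best, round(composite(CANDIDATES[best]), 2))  # alice 0.72
```

A well-known result in this literature is that even unit weights (every predictor weighted equally) tend to beat expert intuition; the gain comes from applying the same rule consistently to every candidate, not from model sophistication.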

 

Ref: In Hiring, Algorithms Beat Instinct – Harvard Business Review

Ethics of Sex Robots

Things get murkier when we start thinking about sentience and the ability to feel and experience emotions.

The robot’s form is what remains disconcerting, at least to me. Unlike a bloodless small squid stuffed into a plastic holder, this sex object actually resembles a whole human being, and that fake human can move independently. Worse still are the ideas raised by popular science fiction regarding sentience, but for now such concerns about artificial intelligence are far off (or perhaps impossible).

The idea that we can program something to “always consent” or “never refuse” discomforts me further. But we must wonder: how is it different to turning on an iPad? How is it different to the letter I type appearing on screen as I push these keys? Do we say the iPad or the software is programmed to consent to my button pushing, swiping, and clicking? No: we just assume a causal connection of “push button – get result”.

That’s the nature of tools. We don’t wonder about a hammer’s feelings as it drives a nail, so why should we worry about a robot’s? Just because the robot has a human form doesn’t make it any less of a tool. It simply has no capacity for feelings.

 

Ref: Robots and sex: creepy or cool? – The Guardian

Human-Level Performance in Face Verification Is Surpassed by Algorithm

Face verification remains a challenging problem under very complex conditions with large variations such as pose, illumination, expression, and occlusions. The problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on the Discriminative Gaussian Process Latent Variable Model, named GaussianFace, to enrich the diversity of training data. In comparison to existing methods, our model exploits additional data from multiple source domains to improve the generalization performance of face verification in an unknown target domain. Importantly, our model can adapt automatically to complex data distributions, and can therefore capture the complex face variations inherent in multiple sources. Extensive experiments demonstrate the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, our algorithm achieves an accuracy of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, human-level performance in face verification on LFW (97.53%) is surpassed.
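For context, LFW verification is scored as binary pair classification: given pairs of face images labeled same/different person, a method predicts “same” when the similarity of the two face representations clears a threshold, and accuracy is the fraction of pairs classified correctly. Below is a minimal sketch of that protocol with random vectors standing in for real face embeddings; it illustrates the evaluation only, not GaussianFace itself.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verification_accuracy(pairs, labels, threshold: float) -> float:
    """pairs: (embedding_a, embedding_b) tuples; labels: 1 = same person.
    Predict 'same' when cosine similarity exceeds the threshold."""
    preds = [cosine_similarity(a, b) > threshold for a, b in pairs]
    return float(np.mean([p == bool(y) for p, y in zip(preds, labels)]))

# Toy stand-ins for embeddings a face model would produce.
rng = np.random.default_rng(0)
base = rng.normal(size=128)
pairs = [(base, base + rng.normal(scale=0.1, size=128)),  # same person
         (base, rng.normal(size=128))]                     # different people
print(verification_accuracy(pairs, labels=[1, 0], threshold=0.5))  # 1.0
```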

 

Ref: Surpassing Human-Level Face Verification Performance on LFW with GaussianFace – arXiv (Cornell University Library)

U.S. Views of Technology and the Future

Overall, most Americans anticipate that the technological developments of the coming half-century will have a net positive impact on society. Some 59% are optimistic that coming technological and scientific changes will make life in the future better, while 30% think these changes will lead to a future in which people are worse off than they are today.

But at the same time that many expect science to produce great breakthroughs in the coming decades, there are widespread concerns about some controversial technological developments that might occur on a shorter time horizon:

  • 66% think it would be a change for the worse if prospective parents could alter the DNA of their children to produce smarter, healthier, or more athletic offspring.
  • 65% think it would be a change for the worse if lifelike robots become the primary caregivers for the elderly and people in poor health.
  • 63% think it would be a change for the worse if personal and commercial drones are given permission to fly through most U.S. airspace.
  • 53% of Americans think it would be a change for the worse if most people wear implants or other devices that constantly show them information about the world around them. Women are especially wary of a future in which these devices are widespread.

Many Americans are also inclined to let others take the first step when it comes to trying out some potential new technologies that might emerge relatively soon. The public is evenly divided on whether they would like to ride in a driverless car: 48% would be interested, while 50% would not. But significant majorities say that they are not interested in getting a brain implant to improve their memory or mental capacity (26% would, 72% would not) or in eating meat that was grown in a lab (just 20% would like to do this).

[…]

The legal and regulatory framework for operating non-military drones is currently the subject of much debate, but the public is largely unenthusiastic: 63% of Americans think it would be a change for the worse if “personal and commercial drones are given permission to fly through most U.S. airspace,” while 22% think it would be a change for the better. Men and younger adults are a bit more excited about this prospect than are women and older adults. Some 27% of men (vs. 18% of women) and 30% of 18-29 year olds (vs. 16% of those 65 and older) think this would be a change for the better. But even among these groups, substantial majorities (60% of men and 61% of 18-29 year olds) think it would be a bad thing if commercial and personal drones become much more prevalent in future years.

Countries such as Japan are already experimenting with the use of robot caregivers to help care for a rapidly aging population, but Americans are generally wary. Some 65% think it would be a change for the worse if robots become the primary caregivers to the elderly and people in poor health. Interestingly, opinions on this question are nearly identical across the entire age spectrum: young, middle-aged, and older Americans are equally united in the assertion that widespread use of robot caregivers would generally be a negative development.

 

Ref: U.S. Views of Technology and the Future – Pew Research Center