Category Archives: W – future

Unfair Advantages of Emotional Computing

Pepper, SoftBank’s humanoid companion robot, is intended to babysit your kids and work the registers at retail stores. What’s really remarkable is that Pepper is designed to understand and respond to human emotion.

Heck, understanding human emotion is tough enough for most HUMANS.

There is a new field of “affective computing” coming your way that will give entrepreneurs and marketers a real unfair advantage. That’s what this note to you is about… It’s really very powerful, and something I’m thinking a lot about.

Recent advances in the field of emotion tracking are about to give businesses an enormous unfair advantage.

Take Beyond Verbal, a start-up in Tel Aviv, for example. They’ve developed software that can detect 400 different variations of human “moods.” They are now integrating this software into call centers, where it helps sales assistants understand and react to a customer’s emotions in real time.

Beyond that, the software can also pinpoint, and even influence, how consumers make decisions.
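
To make that call-center scenario concrete, here’s a minimal Python sketch of what such an integration might look like. Everything in it – the MoodAnalyzer interface, its labels, the coaching rules – is an assumption for illustration; Beyond Verbal’s actual API isn’t described in the article.

    # Hypothetical sketch of a real-time mood-coaching loop for a call
    # center. The MoodAnalyzer interface and its labels are illustrative
    # assumptions, not Beyond Verbal's actual API.
    from dataclasses import dataclass

    @dataclass
    class MoodReading:
        label: str         # e.g. "frustrated", one of ~400 mood variations
        confidence: float  # 0.0 to 1.0

    class MoodAnalyzer:
        """Stand-in for a vocal-emotion-analysis service (assumed interface)."""
        def analyze(self, audio_chunk: bytes) -> MoodReading:
            # A real service would extract vocal features (pitch, tempo,
            # intonation) and classify them; this placeholder stays neutral.
            return MoodReading(label="neutral", confidence=0.5)

    def coach_assistant(analyzer: MoodAnalyzer, audio_stream):
        """Print a live suggestion to the sales assistant for each audio chunk."""
        for chunk in audio_stream:
            mood = analyzer.analyze(chunk)
            if mood.label == "frustrated" and mood.confidence > 0.7:
                print("Suggestion: slow down and acknowledge the complaint")
            elif mood.label == "enthusiastic" and mood.confidence > 0.7:
                print("Suggestion: good moment to propose an upgrade")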

 

Ref: Unfair Advantages of Emotional Computing – SingularityHub

Now The Military Is Going To Build Robots That Have Morals

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One. “For example, Google’s self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”

“Even if such systems aren’t armed, they may still be forced to make moral decisions,” Bello said. For instance, in a disaster scenario, a robot may be forced to choose whom to evacuate or treat first, a situation that calls for some sense of ethical or moral reasoning. “While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can’t take the idea of in-theater robots completely off the table,” Bello said.
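
It’s worth seeing how small, and how contestable, such a “moral” decision procedure can be. Here’s a minimal Python sketch of a triage-style evacuation ranking; the scoring rule (injury severity weighted by survival odds) is my own illustrative assumption, not the ONR program’s model.

    # A toy triage rule for a search-and-rescue robot: rank casualties by
    # injury severity weighted by their odds of survival if treated now.
    # The rule itself is an assumption; reasonable people would dispute it,
    # which is exactly the point of the research.
    from dataclasses import dataclass

    @dataclass
    class Casualty:
        name: str
        severity: int           # 1 (minor) .. 5 (critical)
        survival_chance: float  # estimated probability if treated now

    def evacuation_order(casualties):
        return sorted(casualties,
                      key=lambda c: c.severity * c.survival_chance,
                      reverse=True)

    queue = evacuation_order([
        Casualty("A", severity=5, survival_chance=0.2),
        Casualty("B", severity=3, survival_chance=0.9),
        Casualty("C", severity=4, survival_chance=0.7),
    ])
    print([c.name for c in queue])  # ['C', 'B', 'A'] under this rule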

Some members of the artificial intelligence, or AI, research and machine ethics communities were quick to applaud the grant. “With drones, missile defenses, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions,” AI researcher Steven Omohundro told Defense One. “Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define ‘the rules of war’ and this technology is likely to increase the stakes for that.”

 

Ref: Now The Military Is Going To Build Robots That Have Morals – DefenseOne

Ethics of Sex Robots

We come to confusing areas when we start thinking about sentience and the ability to feel and experience emotions.

The robot’s form is what remains disconcerting, at least to me. Unlike a bloodless small squid stuffed into a plastic holder, this sex object actually resembles a whole human, one with independent movement. Worse still are the ideas raised by popular science fiction regarding sentience – but for now, such concerns for artificial intelligence are far off (or perhaps impossible).

The idea that we can program something to “always consent” or “never refuse” is further discomforting to me. But we must wonder: how is it different to turning on an iPad? How is it different to the letter I type appearing on screen as I push these keys? Do we say the iPad or software is programmed to consent to my button pushing, swiping, clicking? No: We just assume a causal connection of “push button – get result”.

That’s the nature of tools. We don’t worry about the hammer’s feelings as it drives a nail, so why should we worry about a robot’s? Just because the robot has a human form doesn’t make it any less of a tool. It simply has no capacity for feelings.

 

Ref: Robots and sex: creepy or cool? – TheGuardian

U.S. Views of Technology and the Future

Overall, most Americans anticipate that the technological developments of the coming half-century will have a net positive impact on society. Some 59% are optimistic that coming technological and scientific changes will make life in the future better, while 30% think these changes will lead to a future in which people are worse off than they are today.

But at the same time that many expect science to produce great breakthroughs in the coming decades, there are widespread concerns about some controversial technological developments that might occur on a shorter time horizon:

  • 66% think it would be a change for the worse if prospective parents could alter the DNA of their children to produce smarter, healthier, or more athletic offspring.
  • 65% think it would be a change for the worse if lifelike robots become the primary caregivers for the elderly and people in poor health.
  • 63% think it would be a change for the worse if personal and commercial drones are given permission to fly through most U.S. airspace.
  • 53% of Americans think it would be a change for the worse if most people wear implants or other devices that constantly show them information about the world around them. Women are especially wary of a future in which these devices are widespread.

Many Americans are also inclined to let others take the first step when it comes to trying out some potential new technologies that might emerge relatively soon. The public is evenly divided on whether or not they would like to ride in a driverless car: 48% would be interested, while 50% would not. But significant majorities say that they are not interested in getting a brain implant to improve their memory or mental capacity (26% would, 72% would not) or in eating meat that was grown in a lab (just 20% would like to do this).

[…]

The legal and regulatory framework for operating non-military drones is currently the subject of much debate, but the public is largely unenthusiastic: 63% of Americans think it would be a change for the worse if “personal and commercial drones are given permission to fly through most U.S. airspace,” while 22% think it would be a change for the better. Men and younger adults are a bit more excited about this prospect than are women and older adults. Some 27% of men (vs. 18% of women), and 30% of 18-29 year olds (vs. 16% of those 65 and older) think this would be a change for the better. But even among these groups, substantial majorities (60% of men and 61% of 18-29 year olds) think it would be a bad thing if commercial and personal drones become much more prevalent in future years.

Countries such as Japan are already experimenting with the use of robot caregivers to help care for a rapidly aging population, but Americans are generally wary. Some 65% think it would be a change for the worse if robots become the primary caregivers to the elderly and people in poor health. Interestingly, opinions on this question are nearly identical across the entire age spectrum: young, middle aged, and older Americans are equally united in the assertion that widespread use of robot caregivers would generally be a negative development.

 

Ref: U.S. Views of Technology and the Future – Pew Research Center

Role of Killer Robots

According to a 2013 report by U.N. special rapporteur Christof Heyns, South Korea operates “surveillance and security guard robots” in the demilitarized zone that buffers it from North Korea. Although the Samsung machines have an automatic mode, soldiers control them remotely.

The U.S. and Germany possess robots that automatically target and destroy incoming mortar fire. They can also likely locate the source of the mortar fire, according to Noel Sharkey, a University of Sheffield roboticist who is active in the “Stop Killer Robots” campaign.

And of course there are drones. While many get their orders directly from a human operator, unmanned aircraft operated by Israel, the U.K. and the U.S. are capable of tracking and firing on aircraft and missiles. On some of its Navy cruisers, the U.S. also operates Phalanx, a stationary system that can track and engage anti-ship missiles and aircraft.

The Army is testing a gun-mounted ground vehicle, MAARS, that can fire on targets autonomously. One tiny drone, the Raven, is primarily a surveillance vehicle, but among its capabilities is “target acquisition.”

No one knows for sure what other technologies may be in development.

“Transparency when it comes to any kind of weapons system is generally very low, so it’s hard to know what governments really possess,” Michael Spies, a political affairs officer in the U.N.’s Office for Disarmament Affairs, told Singularity Hub.

At least publicly, the world’s military powers seem now to agree that robots should not be permitted to kill autonomously. That is among the criteria laid out in a November 2012 U.S. military directive that guides the development of autonomous weapons. The European Parliament recently established a non-binding ban for member states on using or developing robots that can kill without human participation.

Yet, even robots not specifically designed to make kill decisions could do so if they malfunctioned, or if their user experience made it easier to accept than reject automated targeting.

“The technology’s not fit for purpose as it stands, but as a computer scientist there are other things that bother me. I mean, how reliable is a computer system?” Sharkey, of Stop Killer Robots, said.

Sharkey noted that warrior robots would do battle with other warrior robots equipped with algorithms designed by an enemy army.

“If you have two competing algorithms and you don’t know the contents of the other person’s algorithm, you don’t know the outcome. Anything could happen,” he said.

For instance, when two booksellers recently competed on Amazon, each unaware the other was a bot, the interaction of their two pricing algorithms drove a book’s price into the millions of dollars. Competing robot armies could destroy cities as their algorithms exponentially escalated, Sharkey said.
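
The dynamics of that Amazon incident are easy to reproduce. In the widely reported case, one bookseller priced at roughly 0.9983 times its rival’s price and the other at roughly 1.2706 times; here’s a short Python simulation (starting price is my assumption) showing how that feedback loop grows geometrically.

    # Two repricing bots, each keyed off the other's price. The multipliers
    # match the widely reported Amazon book incident; the starting price is
    # an assumption for illustration.
    price_a = price_b = 20.00
    for day in range(60):
        price_a = 0.9983 * price_b  # undercut the competitor slightly
        price_b = 1.2706 * price_a  # track the competitor, add a margin
    print(f"after 60 rounds: A=${price_a:,.2f}, B=${price_b:,.2f}")
    # Each round multiplies both prices by ~1.268, so they grow
    # geometrically; the real book peaked near $23.7 million.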

An even likelier outcome is that human enemies would target weaknesses in the robots’ algorithms to produce undesirable outcomes. For instance, say a machine designed to destroy incoming mortar fire, such as the U.S.’s C-RAM or Germany’s MANTIS, is also tasked with destroying the launcher. A terrorist group could place a launcher in a crowded urban area, where its neutralization would cause civilian casualties.

 

Ref: Controversy Brews over Role of ‘Killer Robots’ in Theater of War – SingularityHub

Ethical Autonomous Vehicles

 

Many car manufacturers project that by 2025 most cars will operate on driverless systems. While it is reasonable to think that our roads will be safer as autonomous vehicles replace traditional cars, the unpredictability of real-life situations involving the complexities of moral and ethical reasoning complicates this assumption.

How can such systems be designed to accommodate the complexity of ethical and moral reasoning? Just like choosing the color of a car, ethics could become a commodified feature in autonomous vehicles, one you can buy, change, and repurchase, depending on personal taste.

Three distinct algorithms have been created – each adhering to a specific ethical principle/behaviour set-up – and embedded into virtual driverless cars operating in a simulated environment, where they are confronted with ethical dilemmas.
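
The project’s three ethical set-ups aren’t spelled out in this excerpt, so the policies below – utilitarian, passenger-protective, legalistic – are illustrative stand-ins. The Python sketch shows the design idea: the same simulated car can swap “ethics modules” the way you’d swap any other component.

    # Three interchangeable "ethics modules" scoring candidate maneuvers in
    # a simulated dilemma. The policies and feature names are illustrative
    # assumptions, not the project's actual algorithms.
    from typing import Callable, Dict, List

    Maneuver = Dict[str, float]

    def utilitarian(m: Maneuver) -> float:
        return -m["total_harm"]                             # minimize overall harm

    def passenger_protective(m: Maneuver) -> float:
        return -10 * m["passenger_harm"] - m["total_harm"]  # occupants first

    def legalistic(m: Maneuver) -> float:
        return 5 * m["legality"] - m["total_harm"]          # staying lawful dominates

    def decide(policy: Callable[[Maneuver], float], options: List[Maneuver]) -> Maneuver:
        return max(options, key=policy)

    swerve = {"total_harm": 1.0, "passenger_harm": 1.0, "legality": 0.0}
    brake  = {"total_harm": 3.0, "passenger_harm": 0.0, "legality": 1.0}
    print(decide(utilitarian, [swerve, brake]))           # picks swerve
    print(decide(passenger_protective, [swerve, brake]))  # picks brake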

 

Ref: Ethical Autonomous Vehicles

Eterni.me

That’s the premise driving a new startup called Eterni.me, which emerged this week out of MIT’s Entrepreneurship Development Program. Its goal, according to the startup’s website, is to emulate your personality by tapping into your digital paper trail – chat logs, emails, and the like. Once that information is provided, an algorithm splices together all those you-isms to build an artificial intelligence based on your personality, which “can interact with and offer information and advice to your family and friends after you pass away.”

Eterni.me’s creators pitch it as Skype from the past – an animated avatar from the dearly departed. A kind of digital immortality.
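
As a thought experiment, the “splice together your you-isms” step can be prototyped crudely in Python: mine the paper trail for how a person actually replied to things, then answer new questions by retrieving the most similar past reply. Eterni.me’s real pipeline isn’t public, so this retrieval approach is purely an assumption.

    # A crude avatar: answer a new question with the person's reply to the
    # most similar past prompt, mined from their chat logs. Retrieval is an
    # assumed stand-in for whatever Eterni.me actually does.
    import difflib

    class Avatar:
        def __init__(self, chat_log):
            # chat_log: list of (prompt, reply) pairs from emails and chats
            self.memory = chat_log
            self.prompts = [p for p, _ in chat_log]

        def respond(self, question: str) -> str:
            match = difflib.get_close_matches(question, self.prompts,
                                              n=1, cutoff=0.0)[0]
            return dict(self.memory)[match]

    me = Avatar([("how are you", "oh you know, surviving"),
                 ("any advice", "back up your files, always")])
    print(me.respond("got any advice?"))  # -> "back up your files, always"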

 

Ref: Eterni.me Wants to Let You Skype Your Family After You’re Dead – FastCompany

Automated Generation of Suggestions for Personalized Reactions

Google has recently patented the Automated Generation of Suggestions for Personalized Reactions in a Social Network, a technology meant to help users keep up with social-network etiquette by finding messages and social events worth responding to (such as birthdays) and auto-generating response suggestions that match your customary social behaviour, machine-learned over long-term use.

There is no requirement for the user to set reminders or be proactive. The system, without user input, automatically analyzes information to which the user has access and generates suggestions for personalized reactions to messages. The suggestion analyzer cooperates with a decision tree to learn the user’s behavior and, over time, automatically adjusts the suggested messages it generates.
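
A toy Python reconstruction (using scikit-learn’s stock decision tree) makes the mechanism clearer. The features and reply templates below are invented for illustration; the patent doesn’t publish its feature set, only the decision-tree idea.

    # Learn which kind of reply the user historically sends for a given kind
    # of post, then propose that reply for new posts. Features and templates
    # are invented; only the decision-tree mechanism follows the patent.
    from sklearn.tree import DecisionTreeClassifier

    # Features per past post: [is_birthday, from_close_friend, has_photo]
    X = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1], [0, 1, 0]]
    # What the user actually did: index into the reply templates below
    y = [0, 1, 2, 3, 3]

    templates = ["Happy birthday!!",      # effusive, for close friends
                 "Happy birthday.",       # polite, for acquaintances
                 "Great photo, love it!",
                 ""]                      # empty = suggest no reply

    tree = DecisionTreeClassifier().fit(X, y)

    new_post = [[1, 1, 0]]  # a close friend's birthday
    print(templates[tree.predict(new_post)[0]])  # -> "Happy birthday!!"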

 

Ref: algopop

Proceed with Caution toward the Self-Driving Car

Impressive and touching as this demonstration is, it is also deceptive. Google’s cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn’t purely to reassure pedestrians and other motorists. No system can yet match a human driver’s ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.

But while autonomy requires constant supervision, it can also discourage it. Back in his office, Reimer showed me a chart that illustrates the relationship between a driver’s performance and the number of things he or she is doing. Unsurprisingly, at one end of the chart, performance drops dramatically as distraction increases. At the other end, however, where there is too little to keep the driver engaged, performance drops as well. Someone who is daydreaming while the car drives itself will be unprepared to take control when necessary.

Reimer also worries that relying too much on autonomy could cause drivers’ skills to atrophy. A parallel can be found in airplanes, where increasing reliance on autopilot technology over the past few decades has been blamed for reducing pilots’ manual flying abilities. A 2011 draft report commissioned by the Federal Aviation Administration suggested that overreliance on automation may have contributed to several recent crashes involving pilot error. Reimer thinks the same could happen to drivers. “Highly automated driving will reduce the actual physical miles driven, and a driver who loses half the miles driven is not going to be the same driver afterward,” he says. “By and large we’re forgetting about an important problem: how do you connect the human brain to this technology?”

Norman argues that autonomy also needs to be more attuned to how the driver is feeling. “As machines start to take over more and more, they need to be socialized; they need to improve the way they communicate and interact,” he writes. Reimer and colleagues at MIT have shown how this might be achieved, with a system that estimates a driver’s mental workload and attentiveness by using sensors on the dashboard to measure heart rate, skin conductance, and eye movement. This setup would inform a kind of adaptive automation: the car would make more or less use of its autonomous features depending on the driver’s level of distraction or engagement.
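
Here’s a minimal Python sketch of that adaptive-automation loop: fuse the dashboard sensor readings into a single workload score, then scale the car’s autonomy up or down. The weights, scales, and thresholds are my assumptions, not MIT’s published model; note the low-workload branch, which matches Reimer’s point that too little engagement is as dangerous as too much.

    # Fuse physiological signals into a rough 0-1 workload score, then pick
    # an automation level. Weights, scales, and thresholds are assumptions.
    def workload_estimate(heart_rate, skin_conductance, gaze_off_road_ratio):
        hr = min(max((heart_rate - 60) / 60, 0), 1)  # map 60-120 bpm to 0-1
        sc = min(max(skin_conductance / 20, 0), 1)   # microsiemens, rough scale
        return 0.4 * hr + 0.3 * sc + 0.3 * gaze_off_road_ratio

    def automation_level(workload):
        if workload > 0.7:
            return "full assist"    # driver overloaded: car takes on more
        if workload < 0.2:
            return "prompt driver"  # driver disengaged: hand tasks back
        return "shared control"

    print(automation_level(workload_estimate(95, 12, 0.5)))  # shared control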

 

Ref: Proceed with Caution toward the Self-Driving Car – MIT Technology Review