
Self-driving cars: safer, but what of their morals

It’s relatively easy to write computer code that directs the car how to respond to a sudden dilemma. The hard part is deciding what that response should be.

“The problem is, who’s determining what we want?” asks Jeffrey Miller, a University of Southern California professor who develops driverless vehicle software. “You’re not going to have 100 percent buy-in that says, ‘Hit the guy on the right.’”

Companies that are testing driverless cars are not focusing on these moral questions.

The company most aggressively developing self-driving cars isn’t a carmaker at all. Google has invested heavily in the technology, driving hundreds of thousands of miles on roads and highways in tricked-out Priuses and Lexus SUVs. Leaders at the Silicon Valley giant have said they want to get the technology to the public by 2017.

For now, Google is focused on mastering the most common driving scenarios, programming the cars to drive defensively in hopes of avoiding the rare instances when an accident is truly unavoidable.

“People are philosophizing about it, but the question about real-world capability and real-world events that can affect us, we really haven’t studied that issue,” said Ron Medford, the director of safety for Google’s self-driving car project.

[…]

Technological advances will only add to the complexity. Especially when in-car sensors become so acute they can, for example, differentiate between a motorcyclist wearing a helmet and a companion riding without one. If a collision is inevitable, should the car hit the person with a helmet because the injury risk might be less? But that would penalize the person who took extra precautions.

Lin said he has discussed the ethics of driverless cars with Google as well as automakers including Tesla, Nissan and BMW. As far as he knows, only BMW has formed an internal group to study the issue.

Uwe Higgen, head of BMW’s group technology office in Silicon Valley, said the automaker has brought together specialists in technology, ethics, social impact, and the law to discuss a range of issues related to cars that do ever more of the driving instead of people.

“This is a constant process going forward,” Higgen said.


Ref: Self-driving cars: safer, but what of their morals – Huffington Post

How Facebook Knows You Better Than Your Friends Do

This week, researchers from the University of Cambridge and Stanford University released a study indicating that Facebook may be better at judging people’s personalities than their closest friends, their spouses, and in some cases, even themselves. The study compared people’s Facebook “Likes” to their own answers in a personality questionnaire, as well as the answers provided by their friends and family, and found that Facebook outperformed any human, no matter their relation to the subjects.

[…]

The researchers began with a 100-item personality questionnaire that went viral after David Stillwell, a psychometrics professor at Cambridge, posted it on Facebook back in 2007. Respondents answered questions that were meant to root out five key personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. Based on that survey, the researchers scored each respondent in all five traits.

Then, the researchers created an algorithm and fed it with every respondent’s personality scores, as well as their “Likes,” to which subjects voluntarily gave researchers access. The researchers only included “Likes” that respondents shared with at least 20 other respondents. That enabled the model to connect certain “Likes” to certain personality traits. If, for instance, several people who liked Snooki on Facebook also scored high in the extroverted category, the system would learn that Snooki lovers are more outgoing. The more “Likes” the system saw, the better its judgment became.
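The mechanism described above can be sketched in a few lines. This is a deliberately simplified stand-in (a per-Like average rather than the regression model the study actually used), with entirely hypothetical users and scores:

```python
# Sketch of Like-based personality prediction (hypothetical data,
# simplified from the study's approach): each Like is scored by the
# average extraversion of the users who share it, and a new user's
# score is the mean over the Likes they have clicked.
from statistics import mean

# Questionnaire-derived extraversion scores (0-5) for known users.
extraversion = {"ana": 4.5, "ben": 4.0, "cara": 1.5, "dan": 2.0}

# Which users clicked which Likes.
likes = {
    "snooki": ["ana", "ben"],         # liked mostly by extraverts
    "chess_weekly": ["cara", "dan"],  # liked mostly by introverts
}

# Score each Like by the mean trait score of its fans.
like_score = {page: mean(extraversion[u] for u in fans)
              for page, fans in likes.items()}

def predict(user_likes):
    """Predict a user's extraversion from the Likes they share."""
    return mean(like_score[p] for p in user_likes)

print(like_score["snooki"])  # 4.25
print(predict(["snooki", "chess_weekly"]))  # 3.0
```

With more Likes and more respondents, the averages sharpen, which is why the study's accuracy grew with the number of Likes observed.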

In the end, the researchers found that with information on just ten Facebook “Likes,” the algorithm was more accurate than the average person’s colleague. With 150 “Likes,” it could outsmart people’s families, and with 300 “Likes,” it could best a person’s spouse.

[…]

While the researchers admit the results were surprising, they say there’s good reason for it. For starters, computers don’t forget. While our judgment of people may change based on our most recent — or most dramatic — interactions with them, computers give a person’s entire history equal weight. Computers also don’t have experiences or opinions of their own. They’re not limited by their own cultural references, and they don’t find certain personality traits, likes, or interests good or bad. “Computers don’t understand that certain personalities are more socially desirable,” Kosinski says. “Computers don’t like any of us.”


Ref: How Facebook Knows You Better Than Your Friends Do – Wired

When Will We Let Go and Let Google Drive Us?

According to Templeton, regulators and policymakers are proving more open to the idea than expected—a number of US states have okayed early driverless cars for public experimentation, along with Singapore, India, Israel, and Japan—but earning the general public’s trust may be a more difficult battle to win.

No matter how many fewer accidents occur due to driverless cars, there may well be a threshold past which we still irrationally choose human drivers over them. That is, we may hold robots to a much higher standard than humans.

This higher standard comes at a price. “People don’t want to be killed by robots,” Templeton said. “They want to be killed by drunks.”

It’s an interesting point—assuming the accident rate is nonzero (and it will be), how many accidents are we willing to tolerate in driverless cars, and is that number significantly lower than the number we’re willing to tolerate with human drivers?

Let’s say robot cars are shown to reduce accidents by 20%. They could potentially prevent some 240,000 accidents (using Templeton’s global number). That’s a big deal. And yet if (fully) employed, they would still cause nearly a million accidents a year. Who would trust them? And at what point does that trust kick in? How close to zero accidents does it have to get?

And it may turn out that the root of the problem lies not with the technology but us.

Ref: Summit Europe: When Will We Let Go and Let Google Drive Us? – SingularityHub

CyberSyn – The origin of the Big Data Nation


That was a challenge: the Chilean government was running low on cash and supplies; the United States, dismayed by Allende’s nationalization campaign, was doing its best to cut Chile off. And so a certain amount of improvisation was necessary. Four screens could show hundreds of pictures and figures at the touch of a button, delivering historical and statistical information about production—the Datafeed—but the screen displays had to be drawn (and redrawn) by hand, a job performed by four young female graphic designers. […] In addition to the Datafeed, there was a screen that simulated the future state of the Chilean economy under various conditions. Before you set prices, established production quotas, or shifted petroleum allocations, you could see how your decision would play out.

One wall was reserved for Project Cyberfolk, an ambitious effort to track the real-time happiness of the entire Chilean nation in response to decisions made in the op room. Beer built a device that would enable the country’s citizens, from their living rooms, to move a pointer on a voltmeter-like dial that indicated moods ranging from extreme unhappiness to complete bliss. The plan was to connect these devices to a network—it would ride on the existing TV networks—so that the total national happiness at any moment in time could be determined. The algedonic meter, as the device was called (from the Greek algos, “pain,” and hedone, “pleasure”), would measure only raw pleasure-or-pain reactions to show whether government policies were working.

[…]

“The on-line control computer ought to be sensorily coupled to events in real time,” Beer argued in a 1964 lecture that presaged the arrival of smart, net-connected devices—the so-called Internet of Things. Given early notice, the workers could probably solve most of their own problems. Everyone would gain from computers: workers would enjoy more autonomy while managers would find the time for long-term planning. For Allende, this was good socialism. For Beer, this was good cybernetics.

[…]

Suppose that the state planners wanted the plant to expand its cooking capacity by twenty per cent. The modelling would determine whether the target was plausible. Say the existing boiler was used at ninety per cent of capacity, and increasing the amount of canned fruit would mean exceeding that capacity by fifty per cent. With these figures, you could generate a statistical profile for the boiler you’d need. Unrealistic production goals, overused resources, and unwise investment decisions could be dealt with quickly. “It is perfectly possible . . . to capture data at source in real time, and to process them instantly,” Beer later noted. “But we do not have the machinery for such instant data capture, nor do we have the sophisticated computer programs that would know what to do with such a plethora of information if we had it.”
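The feasibility check Beer describes can be sketched as a one-line calculation: given the boiler's current utilization and a production target, does the target fit within existing capacity? The figures below are illustrative, not Cybersyn's actual model:

```python
# Sketch of the capacity-planning check described above, with
# illustrative numbers (not Cybersyn's actual model).
def required_utilization(current_utilization, growth):
    """Utilization needed to hit the target, as a fraction of capacity."""
    return current_utilization * (1 + growth)

current = 0.90        # boiler currently at 90% of capacity
target_growth = 0.20  # expand output by 20%

needed = required_utilization(current, target_growth)
print(f"{needed:.0%}")  # 108%
print("new boiler needed" if needed > 1 else "target is plausible")
```

Here a 20% expansion pushes the boiler past 100% of capacity, so the model would flag the target as implausible without new equipment, exactly the kind of quick verdict the passage describes.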

Today, sensor-equipped boilers and tin cans report their data automatically, and in real time. And, just as Beer thought, data about our past behaviors can yield useful predictions. Amazon recently obtained a patent for “anticipatory shipping”—a technology for shipping products before orders have even been placed. Walmart has long known that sales of strawberry Pop-Tarts tend to skyrocket before hurricanes; in the spirit of computer-aided homeostasis, the company knows that it’s better to restock its shelves than to ask why.

[…]

Flowers suggests that real-time data analysis is allowing city agencies to operate in a cybernetic manner. Consider the allocation of building inspectors in a city like New York. If the city authorities know which buildings have caught fire in the past and if they have a deep profile for each such building—if, for example, they know that such buildings usually feature illegal conversions, and their owners are behind on paying property taxes or have a history of mortgage foreclosures—they can predict which buildings are likely to catch fire in the future and decide where inspectors should go first.
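The allocation idea reduces to scoring each building on known risk factors and sending inspectors to the highest scores first. A minimal sketch, with hypothetical weights and data (not New York City's actual model):

```python
# Minimal risk-scoring sketch for inspector allocation.
# Weights and building data are hypothetical.
WEIGHTS = {
    "illegal_conversion": 3.0,
    "tax_arrears": 2.0,
    "foreclosure_history": 1.5,
}

buildings = [
    {"id": "A", "illegal_conversion": True,  "tax_arrears": True,  "foreclosure_history": False},
    {"id": "B", "illegal_conversion": False, "tax_arrears": False, "foreclosure_history": True},
    {"id": "C", "illegal_conversion": True,  "tax_arrears": True,  "foreclosure_history": True},
]

def risk(building):
    """Sum the weights of the risk factors present in a building."""
    return sum(w for factor, w in WEIGHTS.items() if building[factor])

# Inspect the riskiest buildings first.
queue = sorted(buildings, key=risk, reverse=True)
print([b["id"] for b in queue])  # ['C', 'A', 'B']
```

In practice the weights would be learned from historical fire data rather than set by hand, but the cybernetic loop is the same: observed outcomes feed back into where attention goes next.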

[…]

The aim is to replace rigid rules issued by out-of-touch politicians with fluid and personalized feedback loops generated by gadget-wielding customers. Reputation becomes the new regulation: why pass laws banning taxi-drivers from dumping sandwich wrappers on the back seat if the market can quickly punish such behavior with a one-star rating? It’s a far cry from Beer’s socialist utopia, but it relies on the same cybernetic principle: collect as much relevant data from as many sources as possible, analyze them in real time, and make an optimal decision based on the current circumstances rather than on some idealized projection.

[…]

It’s suggestive that Nest—the much admired smart thermostat, which senses whether you’re home and lets you adjust temperatures remotely—now belongs to Google, not Apple. Created by engineers who once worked on the iPod, it has a slick design, but most of its functionality (like its ability to learn and adjust to your favorite temperature by observing your behavior) comes from analyzing data, Google’s bread and butter. The proliferation of sensors with Internet connectivity provides a homeostatic solution to countless predicaments. Google Now, the popular smartphone app, can perpetually monitor us and (like Big Mother, rather than like Big Brother) nudge us to do the right thing—exercise, say, or take the umbrella.

Companies like Uber, meanwhile, insure that the market reaches a homeostatic equilibrium by monitoring supply and demand for transportation. Google recently acquired the manufacturer of a high-tech spoon—the rare gadget that is both smart and useful—to compensate for the purpose tremors that captivated Norbert Wiener. (There is also a smart fork that vibrates when you are eating too fast; “smart” is no guarantee against “dumb.”) The ubiquity of sensors in our cities can shift behavior: a new smart parking system in Madrid charges different rates depending on the year and the make of the car, punishing drivers of old, pollution-prone models. Helsinki’s transportation board has released an Uber-like app, which, instead of dispatching an individual car, coördinates multiple requests for nearby destinations, pools passengers, and allows them to share a much cheaper ride on a minibus.

[…]

For all its utopianism and scientism, its algedonic meters and hand-drawn graphs, Project Cybersyn got some aspects of its politics right: it started with the needs of the citizens and went from there. The problem with today’s digital utopianism is that it typically starts with a PowerPoint slide in a venture capitalist’s pitch deck. As citizens in an era of Datafeed, we still haven’t figured out how to manage our way to happiness. But there’s a lot of money to be made in selling us the dials.


Ref: The Planning Machine – The New Yorker


Robot Cars With Adjustable Ethics Settings

So why not let the user select the car’s “ethics setting”? The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible.
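The "ethics setting" idea is concrete enough to sketch. Everything below is hypothetical (settings, harm estimates, and decision logic are illustrative, not any manufacturer's design):

```python
# Hypothetical sketch of the adjustable "ethics dial" described
# above. Settings, options, and scoring are illustrative only.
from enum import Enum

class EthicsSetting(Enum):
    PROTECT_OWNER = "value the owner's life over all others"
    MINIMIZE_HARM = "value all lives equally, minimize total harm"
    MINIMIZE_LIABILITY = "minimize the owner's legal exposure"

def choose_action(setting, options):
    """Pick among unavoidable-crash options, each a dict of
    estimated harms (lower is better on the chosen axis)."""
    if setting is EthicsSetting.PROTECT_OWNER:
        return min(options, key=lambda o: o["owner_harm"])
    if setting is EthicsSetting.MINIMIZE_HARM:
        return min(options, key=lambda o: o["total_harm"])
    return min(options, key=lambda o: o["liability"])

options = [
    {"name": "swerve", "owner_harm": 2, "total_harm": 3, "liability": 5},
    {"name": "brake",  "owner_harm": 4, "total_harm": 2, "liability": 1},
]
print(choose_action(EthicsSetting.PROTECT_OWNER, options)["name"])  # swerve
print(choose_action(EthicsSetting.MINIMIZE_HARM, options)["name"])  # brake
```

Note how the same crash yields different "right answers" depending on the dial, which is exactly why the article goes on to argue the setting is no escape from the moral problem.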

Plus, with an adjustable ethics dial set by the customer, the manufacturer presumably can’t be blamed for hard judgment calls, especially in no-win scenarios, right? In one survey, 44 percent of the respondents preferred to have a personalized ethics setting, while only 12 percent thought the manufacturer should predetermine the ethical standard. So why not give customers what they want?

[…]

So, an ethics setting is not a quick workaround to the difficult moral dilemma presented by robotic cars. Other possible solutions to consider include limiting manufacturer liability by law, similar to legal protections for vaccine makers, since immunizations are essential for a healthy society, too. Or if industry is unwilling or unable to develop ethics standards, regulatory agencies could step in to do the job—but industry should want to try first.

With robot cars, we’re trying to design for random events that previously had no design, and that takes us into surreal territory. Like Alice’s wonderland, we don’t know which way is up or down, right or wrong. But our technologies are powerful: they give us increasing omniscience and control to bring order to the chaos. When we introduce control to what used to be only instinctive or random—when we put God in the machine—we create new responsibility for ourselves to get it right.


Ref: Here’s a Terrible Idea: Robot Cars With Adjustable Ethics Settings – Wired

Unfair Advantages of Emotional Computing

Pepper is intended to babysit your kids and work the registers at retail stores. What’s really remarkable is that Pepper is designed to understand and respond to human emotion.

Heck, understanding human emotion is tough enough for most HUMANS.

There is a new field of “affective computing” coming your way that will give entrepreneurs and marketers a real unfair advantage. That’s what this note to you is about… It’s really very powerful, and something I’m thinking a lot about.

Recent advances in the field of emotion tracking are about to give businesses an enormous unfair advantage.

Take Beyond Verbal, a start-up in Tel Aviv, for example. They’ve developed software that can detect 400 different variations of human “moods.” They are now integrating this software into call centers to help sales assistants understand and react to customers’ emotions in real time.

Better still, the software itself can also pinpoint and influence how consumers make decisions.


Ref: Unfair Advantages of Emotional Computing – SingularityHub

Facebook’s Massive-Scale Emotional Contagion Experiment

Facebook researchers have published a paper documenting a huge social experiment carried out on 689,003 users without their knowledge. The experiment set out to show that emotional states can be transferred to others via emotional contagion. The researchers did this by manipulating users’ news feeds to be more positive or more negative, then measuring each user’s emotional state afterwards by analysing their subsequent status updates.

we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.

The results demonstrate how influential the newsfeed algorithm can be in manipulating a person’s mood, and the researchers even tested tweaking the algorithm to deliver more emotional content in the hope that it would be more engaging.
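The study's outcome measure, comparing rates of positive and negative words in users' subsequent posts, can be roughly sketched as follows. The word lists here are tiny stand-ins; the study itself, as I understand it, used the LIWC lexicon:

```python
# Rough sketch of the outcome measure: the fraction of positive
# vs. negative words in a user's subsequent posts. The word sets
# are toy stand-ins for a real sentiment lexicon.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def emotion_rates(posts):
    """Return (positive, negative) word rates across the posts."""
    words = [w.strip(".,!?").lower() for p in posts for w in p.split()]
    n = len(words) or 1
    pos = sum(w in POSITIVE for w in words) / n
    neg = sum(w in NEGATIVE for w in words) / n
    return pos, neg

pos, neg = emotion_rates(["What a great day, I love it!",
                          "Feeling sad today."])
print(pos, neg)  # 0.2 0.1
```

Comparing these rates between the manipulated and control groups is what let the researchers claim contagion without ever reading a post's meaning.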


Ref: Facebook’s massive-scale emotional contagion experiment – Algopop

Ethics of Sex Robots

We come to confusing areas when we start thinking about sentience and the ability to feel and experience emotions.

The robot’s form is what remains disconcerting, at least to me. Unlike a bloodless small squid stuffed into a plastic holder, this sex object actually resembles a whole human, one with independent movement. Worse still are ideas raised by popular science fiction regarding sentience – but for now, such concerns for artificial intelligence are far off (or perhaps impossible).

The idea that we can program something to “always consent” or “never refuse” is further discomforting to me. But we must wonder: how is it different to turning on an iPad? How is it different to the letter I type appearing on screen as I push these keys? Do we say the iPad or software is programmed to consent to my button pushing, swiping, clicking? No: We just assume a causal connection of “push button – get result”.

That’s the nature of tools. We don’t wonder about the hammer’s feelings as it drives in a nail, so why should we worry about a robot’s? Just because the robot has a human form doesn’t make it any less of a tool. It simply has no capacity for feelings.


Ref: Robots and sex: creepy or cool? – The Guardian

I Didn’t Tell Facebook I’m Engaged, So Why Is It Asking About My Fiancé?

I keep going back to the way Jaron Lanier puts it in You Are Not a Gadget: “Life is turned into a database based on [a] philosophical mistake, which is the belief that computers can presently represent human thought or human relationships. These are things computers cannot currently do.” I hesitate to sum up such a deeply personal and important fact into a data point in a profile field. Zadie Smith was similarly inspired by Lanier’s words, and described how personhood as represented online is somehow lacking: “When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears.”

I have no illusions about what Facebook has figured out about me from my activity, pictures, likes, and posts. Friends have speculated about how algorithms might effectively predict hook-ups or dating patterns based on bursts of “Facebook stalking” activity (you know you are guilty of clicking through hundreds of tagged pictures of your latest crush). David Kirkpatrick uncovered that Facebook “could determine with about 33 percent accuracy who a user was going to be in a relationship with a week from now.” And based on extensive networks of gay friends, MIT’s Gaydar claims to be able to out those who refrain from listing their sexual orientation on the network. When I first turned on Timeline, I discovered Facebook had correctly singled out that becoming friends with Nick was a significant event of 2007 (that’s when we met and first started dating, and appropriately enough, part of why he joined Facebook).


Ref: I Didn’t Tell Facebook I’m Engaged, So Why Is It Asking About My Fiancé? – The Atlantic


Eterni.me

That’s the premise driving a new startup called Eterni.me, which emerged this week out of MIT’s Entrepreneurship Development Program. Its goal, according to the startup’s website, is to emulate your personality by tapping into your digital paper trail–chat logs, emails, and the like. Once that information is provided, an algorithm splices together all those you-isms to build an artificial intelligence based on your personality, which “can interact with and offer information and advice to your family and friends after you pass away.”

Eterni.me’s creators pitch it as Skype from the past–an animated avatar from the dearly departed. A kind of digital immortality.


Ref: Eterni.me Wants to Let You Skype Your Family After You’re Dead – FastCompany