Category Archives: T – tech

AI Has Arrived, and That Really Worries the World’s Brightest Minds

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—have put AI-driven products front and center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

[…]

Deciding the dos and don’ts of scientific research is the kind of baseline ethical work that molecular biologists did during the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent man-made genetically modified organisms from posing a threat to the public. The Asilomar conference had a much more concrete result than the Puerto Rico AI confab.

 

Ref: AI Has Arrived, and That Really Worries the World’s Brightest Minds – Wired

Artificial General Intelligence (AGI)

But how important is self-awareness, really, in creating an artificial mind on par with ours? According to quantum computing pioneer and Oxford physicist David Deutsch, not very.

In an excellent article in Aeon, Deutsch explores why artificial general intelligence (AGI) must be possible, but hasn’t yet been achieved. He calls it AGI to emphasize that he’s talking about a mind like ours, that can think and feel and reason about anything, as opposed to a complex computer program that’s very good at one or a few human-like tasks.

Simply put, his argument for why AGI is possible is this: Since our brains are made of matter, it must be possible, in principle at least, to recreate the functionality of our brains using another type of matter — specifically circuits.

As for Skynet’s self-awareness, Deutsch writes:

That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.

[…]

If we really want to create artificial intelligence, we have to understand what it is we’re trying to create. Deutsch persuasively argues that, as long as we’re focused on self-awareness, we miss out on understanding how our brains actually work, stunting our ability to create artificially intelligent machines.

What matters, Deutsch argues, is “the ability to create new explanations,” to generate theories about the world and all its particulars. In contrast with this, the idea that self-awareness — let alone real intelligence — will spontaneously emerge from a complex computer network is not just science fiction. It’s pure fantasy.

 

Ref: Terminator is Wrong about AI Self-Awareness – BusinessInsider

Why Autonomous Vehicles Will Still Crash

To put it simply, “in a dynamic environment, one has a limited time only to make a motion decision. One has to globally reason about the future evolution of the environment and do so with an appropriate time horizon.”

So, basically, in order to have absolute safety, a car has to literally know everything that is about to happen and has to have enough time to be able to adjust for the movement of everyone and everything else. If it doesn’t, there’s eventually going to be a situation in which there’s no time to react—even for a computer.

“If you could make sure the car won’t break or your [car’s] decisions are 100 percent accurate, even if you have the perfect car that works perfectly, in the real world there are always unknown moving obstacles,” Fraichard told me. “Even if you’re some kind of god, it’s impossible. It’s always possible to find situations where a collision will happen.”
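The time-horizon point above can be made concrete with a toy sketch (this is my illustration, not Fraichard’s actual formulation): obstacles are assumed to move at constant velocity, and the planner can only check each candidate maneuver for collisions within a finite horizon. A hazard that materializes just beyond that horizon is invisible to the planner.

```python
# Toy sketch of horizon-limited collision checking. Assumptions (mine,
# not the article's): 2-D positions, constant-velocity obstacles, and a
# car maneuver represented simply as a constant velocity vector.

def collides(car_pos, car_vel, obs_pos, obs_vel, horizon, dt=0.1, radius=1.0):
    """True if car and obstacle come within `radius` of each other at
    any time step inside the horizon."""
    t = 0.0
    while t <= horizon:
        cx, cy = car_pos[0] + car_vel[0] * t, car_pos[1] + car_vel[1] * t
        ox, oy = obs_pos[0] + obs_vel[0] * t, obs_pos[1] + obs_vel[1] * t
        if (cx - ox) ** 2 + (cy - oy) ** 2 < radius ** 2:
            return True
        t += dt
    return False

def safe_maneuver_exists(car_pos, maneuvers, obstacles, horizon):
    """A maneuver is only 'safe' relative to the horizon we can afford
    to check -- which is exactly the limitation quoted above."""
    return any(
        not any(collides(car_pos, v, opos, ovel, horizon)
                for opos, ovel in obstacles)
        for v in maneuvers
    )
```

With a long horizon, standing still in front of an approaching obstacle is correctly flagged as unsafe; shrink the horizon enough and the same situation looks fine, which is the sense in which no finite amount of lookahead yields absolute safety.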

 

Ref: Driverless Cars Can Never Be Crashproof, Physics Says – Vice

Race to Develop AI

 

The latest Silicon Valley arms race is a contest to build the best artificial brains. Facebook, Google and other leading tech companies are jockeying to hire top scientists in the field of artificial intelligence, while spending heavily on a quest to make computers think more like people.

They’re not building humanoid robots — not yet, anyway. But a number of tech giants and startups are trying to build computer systems that understand what you want, perhaps before you even know you want it.

 

Ref: Google, Facebook and other tech companies race to develop artificial intelligence – MercuryNews

The Trick That Makes Google’s Self-Driving Cars Work

The key to Google’s success has been that these cars aren’t forced to process an entire scene from scratch. Instead, Google’s teams drive and map, in advance, each road that the car will travel. And these are not any old maps. They are not even the rich, road-logic-filled maps of consumer-grade Google Maps.

[…]

Google has created a virtual world out of the streets their engineers have driven. They pre-load the data for the route into the car’s memory before it sets off, so that as it drives, the software knows what to expect.

“Rather than having to figure out what the world looks like and what it means from scratch every time we turn on the software, we tell it what the world is expected to look like when it is empty,” Chatham continued. “And then the job of the software is to figure out how the world is different from that expectation. This makes the problem a lot simpler.”
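The "diff against expectation" idea Chatham describes can be illustrated with a tiny occupancy-grid sketch (my toy example; the real system is vastly richer): the car carries a pre-built map of what the empty street should look like, and at runtime only has to explain the cells where the live sensor scan disagrees with it.

```python
# Toy occupancy grids: 1 = occupied cell, 0 = free cell.
prior_map = [
    [1, 1, 1, 1, 1],   # building wall the pre-loaded map already knows
    [0, 0, 0, 0, 0],   # empty road, as mapped in advance
    [0, 0, 0, 0, 0],
]

live_scan = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],   # something new on the road -- a pedestrian? a car?
    [0, 0, 0, 0, 0],
]

def unexpected_obstacles(prior, scan):
    """Return cells occupied in the live scan but not in the prior map:
    the only part of the scene the software still has to reason about."""
    return [
        (r, c)
        for r, row in enumerate(scan)
        for c, cell in enumerate(row)
        if cell and not prior[r][c]
    ]

print(unexpected_obstacles(prior_map, live_scan))  # [(1, 2)]
```

Everything that matches the prior map can be ignored, which is what makes the perception problem "a lot simpler" in Chatham's phrase.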

[…]

All this makes sense within the broader context of Google’s strategy. Google wants to make the physical world legible to robots, just as it had to make the web legible to robots (or spiders, as they were once known) so that they could find what people wanted in the pre-Google Internet of yore.

 

Ref: The Trick That Makes Google’s Self-Driving Cars Work – TheAtlantic

Cycorp.

IBM’s Watson and Apple’s Siri stirred up a hunger and awareness throughout the U.S. for something like a Star Trek computer that really worked — an artificially intelligent system that could receive instructions in plain, spoken language, make the appropriate inferences, and carry out its instructions without needing to have millions and millions of subroutines hard-coded into it.

As we’ve established, that stuff is very hard. But Cycorp’s goal is to codify general human knowledge and common sense so that computers might make use of it.

Cycorp charged itself with figuring out the tens of millions of pieces of data we rely on as humans — the knowledge that helps us understand the world — and to represent them in a formal way that machines can use to reason. The company’s been working continuously since 1984 and next month marks its 30th anniversary.

“Many of the people are still here from 30 years ago — Mary Shepherd and I started [Cycorp] in August of 1984 and we’re both still working on it,” Lenat said. “It’s the most important project one could work on, which is why this is what we’re doing. It will amplify human intelligence.”

It’s only a slight stretch to say Cycorp is building a brain out of software, and they’re doing it from scratch.

“Any time you look at any kind of real life piece of text or utterance that one human wrote or said to another human, it’s filled with analogies, modal logic, belief, expectation, fear, nested modals, lots of variables and quantifiers,” Lenat said. “Everyone else is looking for a free-lunch way to finesse that. Shallow chatbots show a veneer of intelligence or statistical learning from large amounts of data. Amazon and Netflix recommend books and movies very well without understanding in any way what they’re doing or why someone might like something.

“It’s the difference between someone who understands what they’re doing and someone going through the motions of performing something.”
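The Cyc approach Lenat describes — common-sense knowledge stated formally so that a machine can reason over it — can be caricatured in a few lines (a toy of my own, nothing like Cyc's actual CycL language or inference engine): facts and rules as triples, with a forward-chaining loop that derives what follows from them.

```python
# Toy knowledge base: facts are (subject, relation, object) triples.
facts = {("Fido", "is_a", "dog"), ("dog", "is_a", "mammal")}

# Rules: if every premise matches, assert the conclusion.
# Strings starting with "?" are variables.
rules = [
    # Transitivity: ?x is_a ?y and ?y is_a ?z  =>  ?x is_a ?z
    ([("?x", "is_a", "?y"), ("?y", "is_a", "?z")], ("?x", "is_a", "?z")),
    # Common sense: mammals are warm-blooded.
    ([("?x", "is_a", "mammal")], ("?x", "has_property", "warm_blooded")),
]

def match(pattern, fact, bindings):
    """Extend bindings so pattern equals fact, or return None."""
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            envs = [{}]
            for prem in premises:  # all bindings satisfying every premise
                envs = [b2 for b in envs for f in facts
                        if (b2 := match(prem, f, b)) is not None]
            for b in envs:
                new = tuple(b.get(t, t) for t in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

derived = forward_chain(facts, rules)
# ("Fido", "is_a", "mammal") and ("Fido", "has_property", "warm_blooded")
# are now in `derived`, though neither was ever stated directly.
```

Cyc's bet is that tens of millions of such assertions — plus far richer logic than this — add up to the common sense that statistical systems finesse rather than capture.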

 

Ref: The Most Ambitious Artificial Intelligence Project In The World Has Been Operating In Near Secrecy For 30 Years – BusinessInsider