
Artificial General Intelligence (AGI)

But how important is self-awareness, really, in creating an artificial mind on par with ours? According to quantum computing pioneer and Oxford physicist David Deutsch, not very.

In an excellent article in Aeon, Deutsch explores why artificial general intelligence (AGI) must be possible, yet hasn't been achieved. He calls it AGI to emphasize that he's talking about a mind like ours, one that can think and feel and reason about anything, as opposed to a complex computer program that's very good at one or a few human-like tasks.

Simply put, his argument for why AGI is possible rests on the universality of computation: since our brains are made of matter and obey the laws of physics, and since a universal computer can in principle simulate any physical process, it must be possible, at least in principle, to recreate the functionality of our brains in another substrate, such as silicon circuits.

As for Skynet’s self-awareness, Deutsch writes:

That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.

[…]
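To make Deutsch's point concrete, here is a minimal, hypothetical sketch of "self-awareness" in the purely behavioural sense he describes: an agent that decides a reflection is itself by checking whether the movement it observes is contingent on its own motor commands. The class name, the random-nudge test, and the toy "mirror" are illustrative assumptions, not anything Deutsch or the article specifies.

```python
import random

class MirrorTestAgent:
    """Toy agent with 'self-awareness' only in the behavioural sense:
    it infers that a reflection is itself by testing whether the
    observed motion is contingent on its own motor commands."""

    def __init__(self, body):
        self.body = body  # callable: motor command -> actual movement

    def is_reflection_of_self(self, observe, trials=20):
        """Issue random motor commands and check whether the observed
        movement in the 'mirror' always matches them."""
        for _ in range(trials):
            command = random.choice(["left", "right", "up", "down"])
            self.body(command)            # move own body
            if observe() != command:      # mirror shows something else
                return False
        return True                       # perfectly contingent: that's me

# Usage: a 'mirror' that simply echoes the agent's last movement.
if __name__ == "__main__":
    last_move = {"value": None}

    def body(cmd):
        last_move["value"] = cmd

    agent = MirrorTestAgent(body)
    print(agent.is_reflection_of_self(lambda: last_move["value"]))  # True
```

The sketch only underlines Deutsch's point: "self-awareness" of this behavioural kind is trivial to program and tells us nothing about the genuine article.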

If we really want to create artificial intelligence, we have to understand what it is we’re trying to create. Deutsch persuasively argues that, as long as we’re focused on self-awareness, we miss out on understanding how our brains actually work, stunting our ability to create artificially intelligent machines.

What matters, Deutsch argues, is “the ability to create new explanations,” to generate theories about the world and all its particulars. In contrast with this, the idea that self-awareness — let alone real intelligence — will spontaneously emerge from a complex computer network is not just science fiction. It’s pure fantasy.

 

Ref: Terminator is Wrong about AI Self-Awareness – Business Insider

Belief–desire–intention software model

The belief–desire–intention software model (usually referred to simply, but ambiguously, as BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent’s beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
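As a rough illustration of that separation, here is a minimal sketch of a BDI-style deliberation loop in Python. The plan library, the goal and step names, and the simple intention queue are assumptions made for the example; they are not drawn from any particular BDI implementation.

```python
from collections import deque

# A plan maps a triggering desire (goal) plus a belief-based context
# condition to a sequence of steps. Writing these plans is the
# programmer's job; the BDI loop only selects and executes them.
PLAN_LIBRARY = {
    "have_coffee": [
        {"context": lambda b: b.get("machine_works", False),
         "steps": ["fetch_cup", "brew", "drink"]},
        {"context": lambda b: True,  # fallback plan
         "steps": ["walk_to_cafe", "buy_coffee", "drink"]},
    ],
}

class BDIAgent:
    def __init__(self, beliefs):
        self.beliefs = dict(beliefs)   # what the agent takes to be true
        self.desires = deque()         # goals it would like to achieve
        self.intentions = deque()      # steps of plans it has committed to

    def deliberate(self):
        """Select a plan for one pending desire (choosing what to do)."""
        if not self.desires:
            return
        goal = self.desires.popleft()
        for plan in PLAN_LIBRARY.get(goal, []):
            if plan["context"](self.beliefs):    # plan applicable now?
                self.intentions.extend(plan["steps"])
                return

    def execute_one_step(self):
        """Carry out one step of the active intention (doing it)."""
        if self.intentions:
            step = self.intentions.popleft()
            print("executing:", step)

    def run(self):
        # Interleave deliberation and execution; a real BDI interpreter
        # would also handle new events, plan failure, and reconsideration.
        while self.desires or self.intentions:
            self.deliberate()
            self.execute_one_step()

agent = BDIAgent(beliefs={"machine_works": True})
agent.desires.append("have_coffee")
agent.run()
```

The interleaving in run() is where the balance described above comes from: time not spent in deliberate() is available to execute_one_step(), and vice versa, while plan authorship stays outside the loop entirely.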