Thinking as a Hobby


Selecting for Intelligence

Here's an interesting article on the future of AI featuring John Holland, widely considered the father of genetic algorithms.

A couple of comments on yesterday's AI post seemed to suggest that true AI would never become sentient (or if that's not what you meant, please clarify).

John Holland seems to think true AI (that is, artificial cognition at or beyond the human level) is possible, though he thinks it's a long way off.


According to Holland, the problem with developing artificial intelligence through things like genetic algorithms is that researchers don't yet understand how to define what computer programs should be evolving toward. Human beings did not evolve to be intelligent; they evolved to survive. Intelligence was just one of many traits that human beings exploited to increase their odds of survival, and the test for survival was absolute. Defining an equivalent test of fitness for targeting intelligence as an evolutionary goal for machines, however, has been elusive. Thus, it is difficult to draw comparisons between how human intelligence developed and how artificial intelligence could evolve.


This is exactly on the mark, and it's an analysis I don't hear often enough, even within the AI community.

Personally, I think the key to evolving human-level or greater intelligence is going to involve exploiting the indirect design methodology of evolution. I simply think cognition is far too complex to forward-engineer, that is, to understand how all the parts work and consciously put them together. Instead, I think we're going to have to set up an evolutionary paradigm: a virtual environment in which robotic individuals perform, are rated, are selected based on behavior and performance, and are then mated...with the whole cycle iterated millions of times.
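
As a rough sketch of what that cycle might look like, here is a minimal generational loop in Python. Everything in it is a placeholder of my own invention: the genome encoding, the evaluate() stand-in for running and rating a robot, and the particular crossover and mutation operators.

```python
import random

POP_SIZE = 100       # individuals per generation
GENOME_LEN = 32      # hypothetical genome: a flat list of real-valued genes
MUTATION_RATE = 0.05
GENERATIONS = 1000   # in practice this would run far, far longer

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def evaluate(genome):
    """Stand-in for 'perform and be rated': run the robot in the
    virtual environment and return a performance score."""
    return sum(genome)  # placeholder fitness

def crossover(a, b):
    """Single-point crossover: splice two parent genomes together."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    """Perturb each gene with small probability."""
    return [g + random.gauss(0.0, 0.1) if random.random() < MUTATION_RATE
            else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=evaluate, reverse=True)
    parents = ranked[:POP_SIZE // 2]                 # select on performance
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children                  # next generation
```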

The crucial point, as the article notes, is knowing the proper selection criteria. It's easy to know how to select for survival. You could create five artificial species of robots that all compete in a virtual environment for limited resources (e.g., an energy source like batteries that are constantly replenished in the environment). Those that best compete with other species, with members of their own species, and with their environment in attaining energy survive; those that don't, run out of energy and are removed from the virtual environment.
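
As a sketch of that survival setup, the following Python fragment treats running out of energy as the only selection test. The energy costs, the foraging odds, and the reproduction rule are all invented for illustration:

```python
import random

class Robot:
    def __init__(self, skill=None):
        self.energy = 10.0
        # hypothetical "foraging ability"; in a real system this would
        # be a whole evolved control program, not one number
        self.skill = random.random() if skill is None else skill

def tick(population, batteries=50):
    """One time step: everyone pays a cost of living, better foragers
    are likelier to claim one of the replenished batteries, and anyone
    at zero energy is removed. Survival is the only test."""
    for bot in population:
        bot.energy -= 1.0
        if batteries > 0 and random.random() < bot.skill:
            bot.energy += 2.0
            batteries -= 1
    survivors = [b for b in population if b.energy > 0]
    # survivors breed (with a little mutation) to refill the population
    while survivors and len(survivors) < len(population):
        parent = random.choice(survivors)
        child_skill = min(1.0, max(0.0, parent.skill + random.gauss(0, 0.05)))
        survivors.append(Robot(skill=child_skill))
    return survivors

population = [Robot() for _ in range(100)]
for _ in range(1000):
    population = tick(population)
```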

In the experiment above, survival is the main selection criterion. That is, individuals that are good at surviving are the ones that are selected for. This is the way natural evolution works. But it doesn't have to be.

For example, we could give our imaginary virtual robots unlimited energy supplies, so that they no longer compete for resources. But what would they do all day? Well, we could give them another goal besides competing for energy.

If we wanted to create robots that could travel quickly across uneven terrain, we could fill our virtual environment with obstacles and simulated swamp and desert, put the robots in a starting area, and designate a target area. Then each generation we would select for the robots that traveled farthest from the starting area toward the target.
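
Concretely, the fitness test for that terrain task could be as simple as scoring each robot's progress toward the target. In this sketch the run_simulation() function is a made-up stand-in for the physics of the terrain, and the genome is just a list of numbers:

```python
import math
import random

def run_simulation(genome, start):
    """Made-up stand-in for the physics of swamp, desert, and obstacles;
    returns the robot's final (x, y) position after one trial run."""
    speed = sum(genome) / len(genome)
    return (start[0] + speed + random.gauss(0.0, 0.5), start[1])

def terrain_fitness(genome, start, target):
    """Score a robot by how much closer it finished to the target."""
    final = run_simulation(genome, start)
    return math.dist(start, target) - math.dist(final, target)

# each generation, rank on this score and breed the top performers
START, TARGET = (0.0, 0.0), (100.0, 0.0)
population = [[random.uniform(0.0, 1.0) for _ in range(8)] for _ in range(50)]
ranked = sorted(population,
                key=lambda g: terrain_fitness(g, START, TARGET),
                reverse=True)
```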

But what sorts of goals would we give our robots so that they would evolve toward intelligent behavior?

I've thought through this a bit, but I'd be interested to see what people out there think.

Well?

