Thinking as a Hobby


Ray Kurzweil Interview on Instapundit

The populist proponent of strong AI gets interviewed by the granddaddy of the blogosphere.

They're talking about the singularity. Some interesting stuff in there, but right out of the box, Kurzweil says stuff like this:


I’ve consistently set 2029 as the date that we will create Turing test-capable machines. We can break this projection down into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent to all the regions of the brain.


Hah...I love these guys who just plot a line on a graph and, based on projected computational power, say we're gonna have super-intelligent computers by such-and-such a date.
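Just to make clear what kind of math these forecasts rest on, here's a minimal sketch of the extrapolation: pick today's affordable compute, assume a fixed doubling time, and solve for the year the line crosses Kurzweil's 10^16 cps. Every number in it (the starting compute, the doubling period, the baseline year) is an assumption I've made up for illustration, which is exactly the point.

    import math

    # Illustrative assumptions only -- not figures from the interview.
    current_cps = 1e12          # calculations per second affordable today (assumed)
    target_cps = 1e16           # Kurzweil's brain-equivalent hardware figure
    doubling_time_years = 1.5   # assumed Moore's-law-style doubling period
    start_year = 2005           # assumed baseline year

    # Doublings needed to reach the target, then convert to calendar years.
    doublings = math.log2(target_cps / current_cps)
    years_needed = doublings * doubling_time_years

    print(f"doublings needed: {doublings:.1f}")
    print(f"projected crossover: ~{start_year + years_needed:.0f}")

Nudge the assumed doubling time from 18 months to 24 and the crossover date slides out by years. The whole prediction is hostage to its inputs.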

This is kind of like predicting when humans would first land on the moon. Could have been 1969, or could have been 2069. It's not just whether or not you have the technology capable of generating the thrust to escape the earth's atmosphere. It's a whole lot of other stuff...like geopolitics and money and war and national pride and on and on. You can't plot the emergence of a new technological paradigm neatly on a graph. It's tempting to do so, but it's just horribly wrong-headed.

And don't think that bit about "Turing test-capable" slipped by, either. Chasing that target is one reason AI has made so little progress. A Turing test-capable machine should not be the goal; to even have it in your head as a goal as an AI researcher is silly.

In his 1950 essay "Computing Machinery and Intelligence," Turing offered a thought experiment. I'm gonna offer another one...maybe not original, but whether it is or not, it's a hell of a lot more useful than Turing's conception.

Imagine we're astronauts traveling to a distant planet. First of all, how would we identify an intelligent entity if we came across it? It might not necessarily be living. It might be an artifact left behind by some dead alien race. Or it might be a member of an alien race.

Exactly how useful would the Turing test be in assessing such an entity's intelligence? Right: jack squat.

We've got a decent working conception of life, so that if we came across it we'd recognize it: something that grows, metabolizes, reproduces, and reacts to stimuli. It's a crude definition, but it works reasonably well.

Intelligence should be defined by a set of behaviors that are observable, identifiable, and quantifiable. I've already railed on enough about why the Turing test is lousy. It's about damned time researchers came up with a framework for identifying that hypothetical extraterrestrial entity that might be intelligent, because that's going to be a whole hell of a lot more analogous to what's really going to happen than something being designed to pass a Turing test.
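Since I'm asking for observable, identifiable, quantifiable behaviors, here's one hypothetical shape such a framework could take: a checklist of behaviors, each scored from evidence, with intelligence treated as a graded profile rather than a pass/fail imitation game. The criteria, names, and numbers below are all made up for illustration; they're a sketch of the idea, not the framework itself.

    from __future__ import annotations
    from dataclasses import dataclass

    # Hypothetical criteria, made up for illustration -- not an established
    # framework from this post or from the AI literature.
    CRITERIA = {
        "goal_directed_behavior": "acts so as to reach identifiable goals",
        "adaptation": "changes strategy when the environment changes",
        "prediction": "anticipates events before they happen",
        "communication": "exchanges structured signals",
        "tool_use": "modifies objects to accomplish tasks",
    }

    @dataclass
    class Observation:
        criterion: str   # key from CRITERIA
        evidence: str    # what was actually observed
        strength: float  # 0.0 (no evidence) to 1.0 (unambiguous)

    def intelligence_profile(observations: list[Observation]) -> dict[str, float]:
        """Strongest observed evidence per criterion: a graded profile,
        not a pass/fail imitation game."""
        profile = {name: 0.0 for name in CRITERIA}
        for obs in observations:
            if obs.criterion in profile:
                profile[obs.criterion] = max(profile[obs.criterion], obs.strength)
        return profile

    # Example: an unidentified probe that navigates debris and signals home.
    sightings = [
        Observation("goal_directed_behavior", "maneuvers toward the star", 0.8),
        Observation("prediction", "adjusts course before debris arrives", 0.6),
        Observation("communication", "emits patterned radio bursts", 0.4),
    ]
    print(intelligence_profile(sightings))

The point isn't these particular five behaviors. It's that whatever the entity is, artifact or organism, you score what it does instead of asking whether it can chat like a human.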

Something is gonna come about in a company or a university lab somewhere that may or may not process natural human language, and may or may not think the way dogs and chimps and humans do. We want to be able to recognize it and get some kind of handle on how it compares to the way we think, well before that day comes.

