Thinking as a Hobby


Building Gods

I just came across the rough cut of a new documentary called Building Gods, which explores the ethical issues surrounding the engineering of intelligent (or "super-intelligent") machines while mostly ignoring the feasibility of doing so. The movie consists mostly of interviews with four people connected to the field of AI: Kevin Warwick (a cybernetics professor who has implanted chips in himself), Hugo de Garis (a computer science professor who has vocally warned about an eventual war between machines and humans), Anne Foerst (a theology professor who served as a "theological advisor" on projects in MIT's AI lab), and Nick Bostrom (a philosopher who co-founded the World Transhumanist Association).

There's a lot to talk about in this film, but I found the comments of Hugo de Garis the most interesting. He doesn't come across as a wild-eyed, hysterical lunatic ranting about the end of the world, and I'd agree with some of what he says. I do think it's inevitable that artifacts will be engineered that rival or exceed the human capacity to learn and to usefully apply that learning to difficult cognitive tasks. I also agree that it will be almost impossible to hard-wire such devices with something like Asimov's Three Laws of Robotics. Anything with greater-than-human intelligence will need a very flexible and powerful ability to learn, and that means it will be able to unlearn as well. Such a machine will need to be trained extensively, and that training is when humans will be able to habituate it to certain tendencies (such as treating humans nicely). But I'm extremely skeptical about software implementations of behavioral safeguards. To a certain extent there might be hardware solutions (e.g., a tiny bomb implanted in each robot's brain that can be detonated remotely if the robot starts acting in antisocial ways), but if robots really began to exceed human intelligence, they'd find workarounds to such safeguards themselves.
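To make that skepticism a bit more concrete, here's a toy sketch of my own (nothing from the film, and the numbers are made up): if a behavioral safeguard is implemented in software as just another penalty term in a learned objective, it holds only as long as the incentive to violate it stays small. Plain Python:

    # Toy model: the agent does gradient ascent on
    #   reward(x) = task_gain * x - penalty_weight * x**2
    # where x measures how far it drifts from the "safe" behavior x = 0,
    # and the penalty term plays the role of the software safeguard.
    def train(task_gain, penalty_weight, steps=1000, lr=0.01):
        x = 0.0
        for _ in range(steps):
            grad = task_gain - 2 * penalty_weight * x   # d(reward)/dx
            x += lr * grad
        return x

    # With a modest incentive, the safeguard holds the agent near x = 0...
    print(train(task_gain=1.0, penalty_weight=10.0))    # ~0.05
    # ...but crank up the incentive and the same safeguard is simply outweighed.
    print(train(task_gain=100.0, penalty_weight=10.0))  # ~5.0

The agent settles at x = task_gain / (2 * penalty_weight), so no fixed penalty survives an arbitrarily growing incentive. A truly hard constraint would have to sit outside the learning loop entirely, and that's exactly the part I doubt anyone can guarantee for a system smarter than us.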

Hugo de Garis starts sounding a bit flakier when he begins to talk about human factions in the future war. He actually wrote a book on this called The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. "Terrans" are supposedly the humans who will be against building such machines; there's already a decent general word for that, and it's Luddite. "Cosmists" are the people who will support building super-intelligent machines.

Honestly, before such machines are developed, it won't matter much whether opinion is split. There will always be a financial incentive to create machines and algorithms that move toward human-competitiveness. Protests from people opposed to such technology won't matter unless those people run dictatorships that mandate technological backpedaling. But I don't see that happening.

And once intelligent machines are developed, I think the squabbling between human factions will be irrelevant. What will matter is what the machines want to do, and whether or not they split into varying factions. And that's just too out there to even speculate about.

The film compares the creation of such machines to the development of the nuclear bomb. I think there's a serious disconnect in that analogy. For one, the development of strong AI is about the creation of something, potentially a new form of sentience; a nuclear weapon is purely destructive. In some sense there is no way to predict what will happen when machines exceed human intelligence. They may decide to wipe us out, to live in harmony and cooperate with us, or maybe even to migrate to another planet and set up their own independent society. Who knows? I think researchers who contribute to the development of such machines should be aware of the possibilities, but they shouldn't let the possible negative consequences of understanding some aspect of the universe stop them from pursuing that understanding.

