Thinking as a Hobby


Lying Robots...Big Deal

Blogging on Peer-Reviewed Research

I've seen several references to this study on blogs and in the popular media. Most of the headlines proclaim some variant of "Robots Learn How to Lie!", as if this were something new and extraordinary.

The research was carried out by Dario Floreano, who is a very good researcher and does great work in evolutionary robotics. But what actually went on in the study? Here's the summary from an online Discover magazine article:


Floreano and his colleagues outfitted robots with light sensors, rings of blue light, and wheels and placed them in habitats furnished with glowing “food sources” and patches of “poison” that recharged or drained their batteries. Their neural circuitry was programmed with just 30 “genes,” elements of software code that determined how much they sensed light and how they responded when they did. The robots were initially programmed both to light up randomly and to move randomly when they sensed light.

To create the next generation of robots, Floreano recombined the genes of those that proved fittest—those that had managed to get the biggest charge out of the food source.

The resulting code (with a little mutation added in the form of a random change) was downloaded into the robots to make what were, in essence, offspring. Then they were released into their artificial habitat. “We set up a situation common in nature—foraging with uncertainty,” Floreano says. “You have to find food, but you don’t know what food is; if you eat poison, you die.” Four different types of colonies of robots were allowed to eat, reproduce, and expire.

By the 50th generation, the robots had learned to communicate—lighting up, in three out of four colonies, to alert the others when they’d found food or poison. The fourth colony sometimes evolved “cheater” robots instead, which would light up to tell the others that the poison was food, while they themselves rolled over to the food source and chowed down without emitting so much as a blink.

Some robots, though, were veritable heroes. They signaled danger and died to save other robots. “Sometimes,” Floreano says, “you see that in nature—an animal that emits a cry when it sees a predator; it gets eaten, and the others get away—but I never expected to see this in robots.”


He's evolved controllers for robots that exhibit selfish behavior through deliberate miscommunication and altruistic behavior via warning calls. And that's neat, but is it really some sort of breakthrough? Um, no.
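
Stripped of the hardware, the setup in that excerpt is a completely standard genetic algorithm. Here's a rough sketch in Python of the kind of representation involved. This is my own reconstruction, not Floreano's code: the real controllers are small neural circuits, and the simulate_arena function below is just a stand-in for the whole food/poison environment.

# Rough sketch of the representation (my reconstruction, not the paper's code):
# a genome of numbers weighting how the ring of light sensors maps onto the
# two wheel speeds and the blue signal light. Fitness is net battery charge.
import random

N_SENSORS = 10                    # ring of light sensors around the robot
GENOME_LENGTH = 3 * N_SENSORS     # roughly the "30 genes" from the article

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def act(genome, light):
    """Map sensor readings to (left wheel, right wheel, signal light on?).
    The real controllers are small neural circuits; a weighted sum is
    enough to show the idea."""
    left   = sum(w * s for w, s in zip(genome[0:N_SENSORS], light))
    right  = sum(w * s for w, s in zip(genome[N_SENSORS:2 * N_SENSORS], light))
    signal = sum(w * s for w, s in zip(genome[2 * N_SENSORS:], light)) > 0.0
    return left, right, signal

def fitness(genome, simulate_arena):
    # simulate_arena is a placeholder for the food/poison environment; it
    # returns the net charge the robot ends up with after a trial.
    return simulate_arena(genome)

Selection is then just: copy the genomes that ended up with the most charge, recombine them, add a little mutation, and repeat. That's the whole trick.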

In less than an hour I could write an evolutionary algorithm to evolve simulated agents to play the Prisoner's Dilemma or any number of other simple games where agents can either cooperate or compete.

The Prisoner's Dilemma is a classic game theory setup. Two bank robbers are held in separate interrogation rooms, and each is offered a deal to testify against the other. If both stay silent and cooperate with each other, they each serve a short sentence, say a year, on a lesser charge. If both betray each other, they each get 5 years. But if one betrays while the other stays silent, the betrayer goes free and the silent one gets 15 years.

Depending on the exact values of the prison terms, you'll see different strategies emerge in the population: some mix of defectors and cooperators that either cycles or settles into some sort of equilibrium.
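
Here's roughly what that hour of work looks like, as a minimal Python sketch. The population size, payoffs, and mutation rate are made up; each agent's entire "genome" is just its probability of defecting.

import random

POP_SIZE, GENERATIONS, GAMES_PER_AGENT, MUTATION = 100, 200, 20, 0.05

# Sentences (in years) indexed by (my move, opponent's move); True = defect.
SENTENCE = {
    (False, False): 1,    # both stay silent
    (False, True): 15,    # I stay silent, my partner betrays me
    (True,  False): 0,    # I betray a silent partner
    (True,  True):  5,    # we betray each other
}

def play(p_defect_a, p_defect_b):
    """Play one round; return the sentence each agent receives."""
    a = random.random() < p_defect_a
    b = random.random() < p_defect_b
    return SENTENCE[(a, b)], SENTENCE[(b, a)]

population = [random.random() for _ in range(POP_SIZE)]   # gene = P(defect)

for generation in range(GENERATIONS):
    years = [0.0] * POP_SIZE
    for _ in range(GAMES_PER_AGENT):
        i, j = random.sample(range(POP_SIZE), 2)
        yi, yj = play(population[i], population[j])
        years[i] += yi
        years[j] += yj
    # Fewer years served = fitter, so sort ascending and keep the top half.
    ranked = [gene for _, gene in sorted(zip(years, population))]
    parents = ranked[:POP_SIZE // 2]
    # Offspring: average two parents' genes, then add a little mutation.
    children = []
    for _ in range(POP_SIZE - len(parents)):
        child = (random.choice(parents) + random.choice(parents)) / 2
        child += random.gauss(0, MUTATION)
        children.append(min(1.0, max(0.0, child)))
    population = parents + children

print("mean P(defect) after evolution:", sum(population) / POP_SIZE)

With one-shot, anonymous pairings like these, the defectors take over. Make the game iterated, so agents can recognize and retaliate against each other, and cooperation can persist. Either way, the outcome is about as unmysterious as evolution gets.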

I'm not sure why Floreano is quoted as saying he never expected this to happen in robots. The surprising result would have been if it hadn't happened.

If "lying" is just misrepresenting the state of the world, it's fairly trivial to either hand-code or evolve an agent that "lies." This isn't to say that the research isn't interesting. It is. But a lot of times misunderstanding of research is a result of the way in which it's covered, and there's a regrettable tendency to exaggerate and anthropomorphize results in robotics research.

Floreano, D., Mitri, S., Magnenat, S., & Keller, L. (2007). Evolutionary Conditions for the Emergence of Communication in Robots. Current Biology, 17(6), 514-519. doi:10.1016/j.cub.2007.01.058

