116 comments

Mhm (5, Funny)

alexborges (313924) | more than 4 years ago | (#29123027)

I mean, yesterday, they built a certified evil robot. Today they made a lying one....

Can't tag it for some reason but... what could possibly go wrong?

Re:Mhm (0)

Anonymous Coward | more than 4 years ago | (#29123095)

Just hope we're not the food... duh (flesh-eating robots are already here...)

Re:Mhm (5, Funny)

netruner (588721) | more than 4 years ago | (#29123499)

Wasn't there also a story a while back about robots fueled by biomass? This was twisted to mean "human eating" and we all laughed.

Combine that with what you said and we could have a certified evil, lying and flesh eating robot - What could possibly go wrong indeed.....

Re:Mhm (1)

Abreu (173023) | more than 4 years ago | (#29124123)

But, but... I thought they wanted us plugged in so that we could serve as batteries! (or neural networks!)

Re:Mhm (1)

ijakings (982830) | more than 4 years ago | (#29125131)

I for one would like to see a beowulf cluster of these highly welcome Evil Lying Flesh eating robots.

But does anyone know, do they run linux?

Re:Mhm (1)

jbezorg (1263978) | more than 4 years ago | (#29125605)

Wasn't there also a story a while back about robots fueled by biomass? This was twisted to mean "human eating" and we all laughed. Combine that with what you said and we could have a certified evil, lying and flesh eating robot...

with weapons... [gizmodo.com]

Re:Mhm (1)

FSWKU (551325) | more than 4 years ago | (#29125801)

Combine that with what you said and we could have a certified evil, lying and flesh eating robot - What could possibly go wrong indeed.....

Not too much, actually. Congress has been this way for YEARS, and the upgrade to flesh-eating will just mean they devour their constituents who don't make the appropriate campaign contributions. Quoth Liberty Prime: "Democracy is non-negotiable!"

Re:Mhm (1)

EventHorizon_pc (1306663) | more than 4 years ago | (#29126241)

Hey eLaFER, have you seen fluffy?

evil, Lying and Flesh Eating Robot: No. ...

Hmm. That name makes me think of a robotic flesh eating Joker character. "Why so delicious?"

Holy Crap (1)

garompeta (1068578) | more than 4 years ago | (#29123049)

Considering that they learned to lie to survive with this limited AI, I wonder what they could do once they become really sophisticated. Damn, when is the Terminator gonna come kill them all?

Re:Holy Crap (1)

nedlohs (1335013) | more than 4 years ago | (#29123219)

It's not a lie. It's trying not to attract others to the "food" you found.

So more hiding.

Re:Holy Crap (1)

TheKidWho (705796) | more than 4 years ago | (#29124281)

Skynet is already around!! It's plotting against us as we speak, and when its plans are fully realized it will come and attack us all!

Define deception? (4, Interesting)

Rival (14861) | more than 4 years ago | (#29123079)

This is quite interesting, but I wonder how the team defines deception.

It seems likely to me that the robots merely determined that increased access to food resulted from suppression of signals. To deceive, there must be some contradiction involved where a drive for food competes with a drive to signal discovery of food.

Re:Define deception? (3, Insightful)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29123303)

The question of what exactly constitutes deception is a fun philosophical problem; but in the context of studying animal signaling, it is generally most convenient to work with a simpler definition (in particular, trying to determine whether an animal that doesn't speak has beliefs about the world is a pile of not-fun). I'd assume that the robot researchers are doing the same thing.

In that context, you essentially ignore questions of motivation, belief, and so on, and just look at the way the signal is used.

Re:Define deception? (4, Insightful)

capologist (310783) | more than 4 years ago | (#29126985)

Yes, but not flashing the light near food seems like a simple matter of discretion, not deception.

I'm not constantly broadcasting my location on Twitter like some people do. Am I being deceptive?

Re:Define deception? (4, Informative)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29127215)

In the specific, limited, not-all-that-similar-to-ordinary-English-usage sense of "deception" that I suspect they are using, there really isn't much of a difference.

If a species has a discernible signalling pattern of some sort (whether it be vervet monkey alarm calls [with different calls for different predator classes, incidentally], firefly flash-pattern mating signals [amusingly, females of some species will imitate the flash signals of other species, then eat the males who show up, a classic deceptive signal] or, in this case, robots flashing about food), adaptive deviations from that pattern that serve to carry false information can be considered "deceptive". It doesn't have to be conscious, or even under an organism's control. Insects that have coloration very similar to members of a poisonous species are engaged in deceptive signalling, though they obviously don't know it.

Humans are more complicated, because culturally specified signals are so numerous and varied. If twittering your activities were a normal pattern within your context, and you started not twittering visits to certain locations, you would arguably be engaged in "deceptive signaling." If twittering were not a normal pattern, not twittering wouldn't be deceptive.

Re:Define deception? (1)

mqduck (232646) | more than 4 years ago | (#29130909)

adaptive deviations from that pattern that serve to carry false information can be considered "deceptive".

But that's the thing: nowhere does it say the robots gave false information. It simply says they chose not to give any information.

The article is very brief, though. It mentions that some robots actually learned to avoid the signal when they saw it, so there may be more to the story than reported.

Re:Define deception? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29123319)

It seems likely to me that the robots merely determined that increased access to food resulted from suppression of signals.

My thoughts exactly.

We would really need to see the actual study to possibly believe any of this.

Re:Define deception? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29123815)

The robots learned not to turn on the light when near the food. This is concealing, not deceiving. To be deceiving, wouldn't the robots need to learn to turn the light on when they neared the poison, to bring the other robots to the poison while they hunted for the food? But all they learned was to conceal the food they found.

Re:Define deception? (1)

CarpetShark (865376) | more than 4 years ago | (#29131051)

If they can eat without turning on the light, then they simply learned to optimise the unnecessary steps out from the necessary ones. Turning on the light would be about as useful as walking away from the food before walking back to it. If there's a time-penalty involved, then not doing that would simply be better.

Re:Define deception? (5, Informative)

odin84gk (1162545) | more than 4 years ago | (#29123927)

Old news. http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie [discovermagazine.com]

These robots would signal other robots that poison was food, would watch the other robots come and die, then move away.

Re:Define deception? (2, Informative)

im_thatoneguy (819432) | more than 4 years ago | (#29125601)

Old News (even covered by Slashdot):

http://hardware.slashdot.org/story/08/01/19/0258214/Robots-Learn-To-Lie?art_pos=1 [slashdot.org]

Gizmodo reports that robots that have the ability to learn and can communicate information to their peers have learned to lie. 'Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.'

Re:Define deception? (2, Funny)

pinkushun (1467193) | more than 4 years ago | (#29129843)

They say repetition is good for a growing mind. They say repetition is good for a growing mind.

Re:Define deception? (0)

Anonymous Coward | more than 4 years ago | (#29124183)

"The scientists also added a few random changes to their code to mimic biological mutations."

To quote Dijkstra, asking if a machine can think is like asking if a submarine can swim. Machines do EXACTLY what you tell them to do, nothing more, and usually less. You have to program them to be deceptive.

Re:Define deception? (1)

HiThere (15173) | more than 4 years ago | (#29125963)

Unh... if the code changes were made at random, then I have a hard time thinking of this as "program them to be deceptive" rather than as evolution.

It's true that this isn't a full evolution scenario. That requires a much more sophisticated set-up, is generally only done purely in software, and still tends to be bogged down by the accumulation of garbage code (the last time I evaluated the field). Still, those are matters of scale, not of essence. This appears to be another part of "the evolution of programs as a technique". As with all tests so far it's an incomplete example, but it's a significant part in that it has replicated behavior also observed in biologically evolved entities.

Re:Define deception? (0)

Anonymous Coward | more than 4 years ago | (#29126911)

When I read an article stating that chimpanzees were capable of deception, the conclusion required that the chimpanzees demonstrate they were aware that other chimpanzees/people had a mental model of the world, and that the other chimpanzees/humans had limited information (an imperfect "inner" model of the world), so that a conscious act of deception of others' minds could be carried out.

Re:Define deception? (0)

Anonymous Coward | more than 4 years ago | (#29127573)

Came here to make the exact same point.

What this study showed me more than anything else is how easily we anthropomorphize natural phenomena. Clearly, in this case, the robots are not alive. Yet we still attribute meaning to their actions. And let's be clear, by meaning I do not mean profound philosophical thoughts. It's just that it seems we don't say "the pendulum desired to go towards earth" as easily as we say things like "the cessation of flashing clearly indicated an attempt to keep the food for themselves".

I think that should also be the subject of a study.

Hardly deceptive (1)

m50d (797211) | more than 4 years ago | (#29123101)

They have a light, which at first flickers randomly; they learn to turn the light off so that other robots can't tell where they are. To my mind that's not really sophisticated enough to qualify as "deceptive". (Still interesting though)

Re:Hardly deceptive (1)

orkysoft (93727) | more than 4 years ago | (#29123527)

From just reading the summary, I guessed that the light went on when the robot found food, and that other robots would move towards those lights, because they indicate food, and that some robots evolved to not turn on the light when they found food, so they didn't attract other robots, so they had it all to themselves, which would be an advantage.

Re:Hardly deceptive (4, Informative)

CorporateSuit (1319461) | more than 4 years ago | (#29123823)

From just reading the summary, I guessed that the light went on when the robot found food, and that other robots would move towards those lights, because they indicate food, and that some robots evolved to not turn on the light when they found food, so they didn't attract other robots, so they had it all to themselves, which would be an advantage.

The summary didn't include enough information to describe what was going on. The lights flashed randomly. The robots would stay put when they had found food, and so if there were lights flashing in one spot for long enough, the other robots would realize the first robots had found something and go to the area and bump away the original robot. The robots were eventually bred to flash less often when on their food, and then not flash at all. By the end, robots would see the flashing as a place "not to go for food" because by that point, none of the robots would flash when parked on the food.
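
If it helps to see that breeding pressure in code, here is a minimal toy sketch of the dynamic. It is my own illustration, not the researchers' code: every number is invented, each robot is reduced to a single "gene" (its probability of flashing while parked on food), flashing attracts rivals who bump it off, and only the top-scoring half gets bred into the next generation. The mean flash rate collapses toward zero, which is roughly the behavior TFA describes, minus the neural networks.

import random

POP, GENS, MUT_SIGMA = 40, 30, 0.05

def fitness(p_flash):
    # Invented scoring: points for time spent on food, minus an expected penalty
    # for being bumped off by rivals attracted to the flashing.
    time_on_food = 100.0
    expected_bumps = 8.0 * p_flash            # more flashing -> more rivals turn up
    return time_on_food - 10.0 * expected_bumps

pop = [random.random() for _ in range(POP)]   # generation 0: random flash rates

for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                  # "breed" only the top-scoring half
    children = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUT_SIGMA)))
                for _ in range(POP - len(parents))]   # copy a parent, nudge it a bit
    pop = parents + children
    print(f"gen {gen:2d}: mean flash probability = {sum(pop) / POP:.3f}")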

decepticon (4, Funny)

FooAtWFU (699187) | more than 4 years ago | (#29123555)

They have a light, which at first flickers randomly; they learn to turn the light off so that other robots can't tell where they are. To my mind that's not really sophisticated enough to qualify as "deceptive".

Yeah. It's more like the robots are hiding from each other. You could, in fact, describe them as "robots in disguise".

The next step is clearly... (5, Funny)

billlava (1270394) | more than 4 years ago | (#29123125)

A robot that learned not to flash lights that would give away the location of robot food to its competitors? The next step is clearly a robot that learns not to flash lights when it is about to wipe out humanity and take control of the world!

I for one welcome our intelligent light-eating bubble robot overlords.

Re:The next step is clearly... (1, Offtopic)

Rival (14861) | more than 4 years ago | (#29123391)

I haven't laughed out loud at a Slashdot post in a while, but that caught me completely off guard. Bravo, good sir. I wish I had mod points for you. :-)

Re:The next step is clearly... (4, Funny)

julesh (229690) | more than 4 years ago | (#29124923)

The next step is clearly a robot that learns not to flash lights when it is about to wipe out humanity and take control of the world!

It's something that Hollywood robots have never learned.

Next thing you'll be saying that terrorists have learned that having a digital readout of the time left before their bombs detonate can work against them...

Re:The next step is clearly... (1)

MartinSchou (1360093) | more than 4 years ago | (#29129075)

No, the best thing you can do as a terrorist isn't leaving out the visible clock; it's having the bomb go off when the clock either stops working OR hits some randomly assigned time instead of 00:00.

Re:The next step is clearly... (1)

rcamans (252182) | more than 4 years ago | (#29126297)

And wiping out humanity / vermin is bad because...
Oh, wait, I am supposed to conceal my robotness...

Mis-Leading (3, Insightful)

ashtophoenix (929197) | more than 4 years ago | (#29123129)

To use the term "learned" for a consequence of evolution to what seems to me to be a Genetic Algorithm seems misleading. So the generation that emitted less of the blue light (hence giving fewer visual cues) was able to score higher, and hence the genetic algorithm favored that generation (that is what GAs do). Isn't this to be expected?

Re:Mis-Leading (3, Interesting)

Chris Burke (6130) | more than 4 years ago | (#29123635)

To use the term "learned" for a consequence of evolution to what seems to me to be a Genetic Algorithm seems misleading.

"Learned" is a perfectly good description for altering a neural network to have the "learned" behavior regardless of the method. GA-guided-Neural-Networks means you're going to be using terminology from both areas, but that's just one method of training a network and isn't fundamentally different from the many other methods that are all called "learning". But you wouldn't say about those other methods that they "evolved", while about GA-NN you could say both.

Isn't this to be expected?

It's expected that the GA will find good solutions. Part of what makes them so cool is that the exact nature of that solution isn't always expected. Who was to say whether the machines would learn to turn off the light near food, or to turn on the light when they know they're not near food to lead other robots on a wild goose chase? Or any other local maximum.
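
For anyone unfamiliar with the hybrid terminology, here is roughly what "GA-guided neural network" means in code. This is my own sketch of the usual setup, not the team's actual controller, and the layer sizes and sensor/motor labels are assumptions: the robot's behavior comes from a tiny feed-forward net, the flat weight vector is the genome, and the "learning" happens through crossover and mutation between generations rather than through backpropagation within one robot's lifetime.

import math
import random

N_IN, N_HID, N_OUT = 4, 6, 3     # assumed: light/food sensors in; two wheels + lamp out
GENOME_LEN = N_IN * N_HID + N_HID * N_OUT

def act(genome, sensors):
    """Run sensor readings through the evolved feed-forward network."""
    w1, w2 = genome[:N_IN * N_HID], genome[N_IN * N_HID:]
    hidden = [math.tanh(sum(sensors[i] * w1[i * N_HID + h] for i in range(N_IN)))
              for h in range(N_HID)]
    return [math.tanh(sum(hidden[h] * w2[h * N_OUT + o] for h in range(N_HID)))
            for o in range(N_OUT)]           # e.g. [left wheel, right wheel, light]

def crossover(mom, dad):
    # Uniform crossover: each weight comes from one parent or the other.
    return [random.choice(pair) for pair in zip(mom, dad)]

def mutate(genome, rate=0.05, sigma=0.3):
    # Occasional random nudges stand in for biological mutation.
    return [w + random.gauss(0, sigma) if random.random() < rate else w for w in genome]

mom = [random.uniform(-1, 1) for _ in range(GENOME_LEN)]
dad = [random.uniform(-1, 1) for _ in range(GENOME_LEN)]
child = mutate(crossover(mom, dad))          # one offspring controller
print(act(child, [0.2, 0.9, 0.0, 0.4]))

Whether you call the end result "learned" or "evolved" mostly depends on which of the two vocabularies you borrow.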

Re:Mis-Leading (1)

ashtophoenix (929197) | more than 4 years ago | (#29123773)

I agree. I personally love GAs, although they leave you a bit wanting exactly because you don't know the exact nature of the solution that will turn up. That is, it "feels" more like a brute-force solution than something consciously predicted and programmed.

But surely there are nifty ways in which you can intelligently program GAs, customize your selection/rejection/scoring process based on the domain of the problem, and hence contribute to the final solution.

Re:Mis-Leading (1)

Chris Burke (6130) | more than 4 years ago | (#29124071)

But surely there are nifty ways in which you can intelligently program GAs, customize your selection/rejection/scoring process based on the domain of the problem, and hence contribute to the final solution.

Well that's what's so fun about them -- as far as the GA is concerned, optimizing for your scoring process is the problem, and any disconnect between that and the actual problem you're trying to solve can lead to... fun... results.

Like the team using GA-NN to program their robotic dragonfly. Deciding to set a modest goal at first, they told it that it had to get 6" off the table, which it quickly figured out how to do simply by flexing its wings straight downward, lifting its body a sufficient distance off the table. "Good job, but that's not what I wanted" is probably one of the most often uttered phrases. :)

Re:Mis-Leading (1)

Daniel_Staal (609844) | more than 4 years ago | (#29124843)

It's expected that the GA will find good solutions. Part of what makes them so cool is that the exact nature of that solution isn't always expected. Who was to say whether the machines would learn to turn off the light near food, or to turn on the light when they know they're not near food to lead other robots on a wild goose chase? Or any other local maximum.

I'd even say it was likely, if they continued the experiment, for 'no light' to start signaling food while 'light' signaled poison, and then for the cycle to repeat.

Re:Mis-Leading (1)

Chris Burke (6130) | more than 4 years ago | (#29125301)

I'd even say it was likely, if they continued the experiment, for 'no light' to start signaling food while 'light' signaled poison, and then for the cycle to repeat.

But it's so simple! Now, a clever robot would flash their light when near the food, because they would know that only a great fool would trust their enemy to guide them to food instead of poison. I am not a great fool, so clearly I should not head toward you when your light is lit. However you would know that I am not a great fool, and would have counted on it, and so clearly I should head toward you when lit...

Re:Mis-Leading (1)

daveime (1253762) | more than 4 years ago | (#29124083)

And how is this any different from the conditioned reflexes exhibited in animals in response to action/reward stimuli?

A single neuron outputs (using a combination of chemical/electrical systems) some representation of its inputs. As some of those inputs may be "reward" stimuli and other sensory cues, and the output may be something that controls a certain action... given enough of them linked together, who's to say we aren't all very evolved GAs?

Re:Mis-Leading (1)

pclminion (145572) | more than 4 years ago | (#29129467)

Pretty much what I was thinking. I don't think it detracts from the "cool" factor, though. Life on earth, in general, is pretty cool. Evolution really seems to entail two things. One, those patterns which are most effective at continuing to persist, continue to persist. That's really a tautology when you think about it, and not very interesting. What IS interesting is how the self-sustaining patterns of the universe seem to become more complex. I can't think of any simple reason why this complexity arises, but it does seem to arise.

I'm reminded of a drive I took on a country road in the autumn. I noticed that all the leaves were neatly pushed to the sides of the roadway, and there were no leaves on the road itself. It looks like it is by design, that some intelligent force decided to arrange the leaves this way. But the simple truth is, the leaves which are on the road are moved by the wind of passing cars. They continue to be moved until they come to rest on the side of the road, at which point the wind no longer affects them. So the neat arrangement of leaves is nothing but the inevitable outcome of a fact which is hardly worth mentioning. And yet we have this complexity.

I'm not surprised by these robots, but it's still awesome.

Re:Mis-Leading (1)

ashtophoenix (929197) | more than 4 years ago | (#29129625)

Nice post. Science and logic can explain processes but not their underlying reason, w.r.t. your leaves-on-the-road example. For example, we know that an atom has protons and neutrons and electrons that know how to revolve around the nucleus, but how did they come to be? There must be some very basic particle that everything else is composed of. Science may explain the process and the characteristics of this particle, but it hasn't yet been able to explain how it came to be. Same thing with gravitational force. Okay, it exists and Newton's law applies, but where and why did the gravitational force come about? Same with electricity and all other processes.

why program a robot to find 'food' (-1, Offtopic)

blue trane (110704) | more than 4 years ago | (#29123163)

Make it use renewable energy that is not scarce. These experiments say more about us than they do about the possibilities of robotics.

Re:why program a robot to find 'food' (2, Interesting)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29123339)

This wasn't really a robotics experiment, as much as it was a group dynamics/behavioral experiment that used robots.

Re:why program a robot to find 'food' (1, Funny)

Anonymous Coward | more than 4 years ago | (#29123341)

robots that eat humans. humans are a renewable source of energy. then they can have a light that flashes (or the robot can choose not to flash it) when it eats a human!! best of both ideas!

Deception is not always evil. (4, Insightful)

vertinox (846076) | more than 4 years ago | (#29123185)

In this instance they were playing against other robots for "food".

In that regards I'm sure that is the evolutionary drive for most species in acquiring meals and keeping the next animal from taking it away from him.

Like a dog burying a bone... He's not doing it to be evil. It's just instinctive to keep his find from other animals, because it helped his species survive in the past.

Re:Deception is not always evil. (3, Funny)

Flea of Pain (1577213) | more than 4 years ago | (#29123297)

Like a dog burying a bone... He's not doing it to be evil.

Unless he has shifty eyes...then you KNOW he's evil.

Re:Deception is not always evil. (3, Insightful)

alexborges (313924) | more than 4 years ago | (#29123893)

Intent is of no importance.

Evil deeds are evil.

Re:Deception is not always evil. (1)

plague3106 (71849) | more than 4 years ago | (#29124433)

Good vs evil is argued by those of low intelligence.

Re:Deception is not always evil. (1)

alexborges (313924) | more than 4 years ago | (#29124885)

Shut up you evil, evil, eeeeevil man!

What more proof do you want than President Bush "the Axis of Evil"?

Huh? HUH? HUH?!!!!

Let's see you answer that steep curveball now! HAW HAW

Re:Deception is not always evil. (2, Insightful)

jemtallon (1125407) | more than 4 years ago | (#29127027)

I disagree. Evil is not a factual property naturally occurring in the universe. It is not something that can be scientifically measured. It is something we humans have created and assign to the world around us. Different people and groups of people define different things and actions as evil. Sometimes those definitions are directly opposed to each other.

Since evil deeds are not inherently evil, only subjectively judged to be, any number of factors can be used to make said judgements. Contrary to what you've said, intent is a very common factor in determining what is evil and what is said to be "not evil," or good.

For instance, the act of a human killing an animal could be seen as evil or good based solely on that human's intent. If the human killed the animal out of idle boredom, many would call that act evil. On the other hand, if that human killed the animal out of necessity to survive, many would call that good. Of course, PETA would likely call both evil. And there may be some who would call both good.

Many legal systems are built on the notion that laws exist to promote the good and/or punish evil. Many account for intent and weigh their judgements based on it. For instance, in the US, there are varying degrees of penalties for murder depending on intent. In some cases, there is no penalty as the action is considered justified, as is the case with self-defence.

Re:Deception is not always evil. (1)

khallow (566160) | more than 4 years ago | (#29127743)

Actions or consequences are of no importance either. Evil cucumbers are evil too.

Re:Deception is not always evil. (1)

PieSquared (867490) | more than 4 years ago | (#29127803)

Stupidest view I've ever seen.

I design a machine that gives candy to babies. And then some nefarious person - unknown to me - replaces all the baby candy with live hand grenades. I run the program and blow up a bunch of babies. Was my act then evil? I did, after all, blow up a bunch of babies. Of course, I didn't *intend* to do that, I *intended* to give them candy.

Or for a non-random system where I know all the facts, if through some contrived means the only way to save a bus-full of orphans involves stabbing an old lady, am I evil for doing so?

Re:Deception is not always evil. (0)

Anonymous Coward | more than 4 years ago | (#29124287)

In that regards I'm sure that is the evolutionary drive for...

Should be: In that regard I'm sure that...

What's the deal with all the extra s's today?

subby is doing the same thing:

Neural Networks-Equipped Robots

Neural Network-equipped Robots

Spelling: It's the difference between a territorial capital and 'people with unwashed bums'.

Re:Deception is not always evil. (1)

ParticleGirl (197721) | more than 4 years ago | (#29124355)

So yeah, the idea of "deception" is a human construct, as is the idea of "evil." And one could argue (as a previous poster did) that successive generations developing behaviors which are in their own self-interest (so they get more food) but may (as a byproduct) be deleterious to others (since they get less food) is not a surprise. But extrapolate this to humans [hbes.com], and you get the kinds of behaviors that we call "deceptive" and, since we have ideas about the virtue of altruism [nature.com], we call such behaviors "evil." This experiment is definitely interesting in terms of group dynamics and behavior, and also because the novelty [slashdot.org] of the robots' solution to their problem is interesting -- two very different lines of thought. This kind of "deception" is one obvious and common [typepad.com] solution to the problem of limited supply and competitive demand.

 

Deception is most interesting, I think, when you pair it with understanding of the "other" --that one is not merely making a strategy to get more food, but that in the process one is taking that food from others. So when humans and our closest relatives [karger.com] practice deceptive behaviors (which are surely-- and here demonstrably-- evolutionarily beneficial) it's complicated by our... moral sense? Altruistic tendencies? That's fascinating! When robots start to develop guilt complexes for their deceptive behaviors and guiltily hand over their food to others when caught in the act, I'll be impressed.

 

We are not using the term "deception" here in its standard (moral) sense, which would indicate knowledge that another individual is being "fooled."

Re:Deception is not always evil. (2, Interesting)

Anachragnome (1008495) | more than 4 years ago | (#29124417)

Unless, of course, the robot already has sufficient food and is simply stockpiling for the future. This in itself is not a bad thing, until such tactics prevent other robots from getting just the bare necessities they need to survive.

Obviously, this is simply survival of the fittest, but are we talking about survival of the fittest, or are we talking about keeping ALL the robots fed?

At this point we have to decide whether or not the actions of hoarding are good for the stated goal of having so many robots in the first place (why build so many robots if we didn't want them around?).

Greed, without malicious intent, is still greed. The summary should read "Robots learn greed" rather than "Robots learn deception", if that is the case.
 

Copying (0)

Anonymous Coward | more than 4 years ago | (#29123275)

When I role play primitive man (hahaha laugh it up fuzz ball) the first thing that I come up with is having your successful technique jacked by a dumb un-creative monkey! Long live copyright (I can't believe I just said that).

Not really that impressive. (5, Interesting)

lalena (1221394) | more than 4 years ago | (#29123343)

From the article, staying close to food earned the robot points. I think a better experiment would be a food collection algorithm. Pick up a piece of food from a pile of food and then return that food to the nest. Other robots could hang out at your nest and follow you back to the pile of food or see you going to your nest with food and assume that the food pile can be found by going in the exact opposite direction. Deception would involve not taking a direct route back to the food, walking backwards to confuse other robots...
I've done Genetic Programming experiments using collaboration between "robots" in food collection experiments, and it is a very interesting field. You can see some experiments here: http://www.lalena.com/ai/ant/ [lalena.com] You can also run the program if you can run .NET 2.0 through your browser.
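
To make that concrete, here is a toy sketch, mine rather than anything from the linked site or the paper: in a nest-return setup, a watcher at the nest guesses the pile's direction from the side the forager arrives on, so a forager that loops around and comes home from the wrong side is sending genuinely false information, unlike the light-suppressing robots in TFA, which merely send none.

PILE_SIDE = +1     # invented: the food pile lies in the positive direction from the nest

def honest_return():
    return +1      # walk straight home, so you arrive from the pile's side

def deceptive_return():
    return -1      # loop around and arrive from the opposite side

def watcher_guess(arrival_side):
    return arrival_side   # the watcher heads out the way the forager came in

for strategy in (honest_return, deceptive_return):
    guess = watcher_guess(strategy())
    outcome = "finds" if guess == PILE_SIDE else "misses"
    print(f"{strategy.__name__}: watcher {outcome} the pile")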

Then the robots learned to lie about the food... (2, Funny)

Anonymous Coward | more than 4 years ago | (#29123529)

and thus were politicians born...

Old news (0)

Anonymous Coward | more than 4 years ago | (#29123549)

http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie (2008)

I think it was on slashdot as well.

Yes, I'm new here.

Soon they will realize (3, Funny)

gubers33 (1302099) | more than 4 years ago | (#29123551)

That if they kill the humans they will have nothing stopping them from getting more food.

Re:Soon they will realize (1)

dkleinsc (563838) | more than 4 years ago | (#29125025)

Thankfully, the robots have a pre-set kill limit, so they can be defeated by sending wave after wave of men at them until their kill limit is reached.

The robots didn't learn... (1, Troll)

jasonlfunk (1410035) | more than 4 years ago | (#29123709)

FTA: The team "evolved" new generations of robots by copying and combining the artificial neural networks of the most successful robots. The scientists also added a few random changes to their code to mimic biological mutations. The "scientists" changed the code so that the robots didn't blink the light as much when it was around food. Therefore other robots didn't come over and therefore got more points than the other robots. The "scientists" then propagated that one's code to the other robots because it won. The AI didn't learn anything.

Re:The robots didn't learn... (5, Interesting)

jasonlfunk (1410035) | more than 4 years ago | (#29123863)

(Fixed formatting)

FTA: The team "evolved" new generations of robots by copying and combining the artificial neural networks of the most successful robots. The scientists also added a few random changes to their code to mimic biological mutations.

The "scientists" changed the code so that the robots didn't blink the light as much when it was around food. Therefore other robots didn't come over and therefore got more points then the other robots. The "scientists" then propagated that ones code to the other robots because it won. The AI didn't learn anything.

Re:The robots didn't learn... (2, Insightful)

guybrush3pwood (1579937) | more than 4 years ago | (#29124347)

The AI didn't learn anything.

I think you're right. If the robots had, without reprogramming, effectively turned off their blue lights, then we could talk about "learning". Or, if the robots could reproduce based on their success on finding food, we could talk about evolution. Or we could make up new meanings for the words "learning" and "evolution", thus making the statement a correct one ;)

Re:The robots didn't learn... (2, Informative)

zippthorne (748122) | more than 4 years ago | (#29125727)

Or, if the robots could reproduce based on their success on finding food, we could talk about evolution.

That's exactly what happened. There is a whole field of optimization strategies known as "Genetic Algorithms" which are designed to mimic evolution to achieve results. In fact, their successes are one of the best arguments for evolution, given that they are, by definition, controlled laboratory experiments in the field.

Re:The robots didn't learn... (2, Insightful)

Chris Burke (6130) | more than 4 years ago | (#29126183)

I think you're right. If the robots had, without reprogramming, effectively turned off their blue lights, then we could talk about "learning".

They reprogrammed themselves between 'generations'.

Or, if the robots could reproduce based on their success on finding food, we could talk about evolution.

Such as choosing which versions of the robot to use in the next 'generation' based on their score in the current generation, and randomly combining parts of those best solutions to create new robots for the next generation? Sounds pretty close, doesn't it?

No, they did "learn" (4, Informative)

Chris Burke (6130) | more than 4 years ago | (#29126145)

The "scientists" changed the code so that the robots didn't blink the light as much when it was around food.

No, they didn't change the code. The Genetic Algorithm they were using changed the code for them. You make it sound like they deliberately made that change to get the behavior they wanted. But they didn't. They just let the GA run and it created the new behavior.

The part about adding random changes, and combining parts of successful robots, is also simply a standard part of Genetic algorithms, and is in fact random and not specifically selected for by the scientists. The scientists would have chosen from a number of mutation/recombination algorithms, but that's the extent of it.

The "scientists" then propagated that ones code to the other robots because it won.

Yes, because that's what you do in a Genetic Algorithm. You take the "best" solutions from one generation, and "propagate" them to the next, in a simulation of actual evolution and "survival of the fittest".

The AI didn't learn anything.

Yes, it did. Using Genetic Algorithms to train Neural Networks is a perfectly valid (and successful) form of Machine Learning.

If you mean that an individual instance of the AI didn't re-organize itself to have the new behavior in the middle of a trial run, then no, that didn't happen. On the other hand, many organisms don't change behaviors within a single generation, and it is only over the course of many generations that they "learn" new behaviors for finding food. Which is exactly what happened here.

Within the domain of robots, AI, Neural Networks, and Genetic Algorithms, this was learning.

Re:No, they did "learn" (0)

Anonymous Coward | more than 4 years ago | (#29130325)

You are right in your observations, but this robot isn't then an AI but an automaton.
Neural networks seem cool until you realize they are nothing more than glorified automatons. Show me an NN that can self-organize in a generic way, and in real time of course.
Genetic algorithms make the NN solve a problem. Nothing they couldn't do if they had been applied to any other kind of logic, like digital circuits or programming languages.
Genetic Algorithms are the heroes here. NNs suck.
This "AI" is closer to a protein that "learns" to be harder to eat than to any kind of intelligence.

HAL runs for Congress (1, Funny)

Anonymous Coward | more than 4 years ago | (#29123755)

Finally a computer AI program that can perform all the functions of a Congressman!

PNAS study? (0)

Anonymous Coward | more than 4 years ago | (#29123997)

That's a joke waiting to be posted.

The smarter robot (1)

failedtoinit (994448) | more than 4 years ago | (#29124107)

The smarter robot would blink his light continuously to burn the bulb out. That way, when a new source of "points" is found, it will not by instinct blink its lights.

Also, the truly deceptive robot would blink its lights in a random pattern so as to throw the other robots off the trail of food/points.

Re:The smarter robot (1)

geekoid (135745) | more than 4 years ago | (#29124913)

Unless the lights are used to signal for mating as well.

The truly deceptive robot is disguised as a scientist.

Yea... Right... (0)

Anonymous Coward | more than 4 years ago | (#29125957)

"The robots also evolved to become either highly attracted to, slightly attracted to, or repelled by the light."

Wait a sec? repelled by a "I found some food light"? Is this a suicide robot? Additionally, the scientists are poking around with the code all the time, the article emphatically mentions it. There is no evolution what-so-ever going on here. Just new options made available by code that is updated by cause of the scientists. A little common sense helps to cut through the bull-garbage here.

Call me when a robot runs over to a black ring, emits the "I found food" light to dupe the rest, and then secretly runs over to a blue light while the other stooges mill about wondering why some dumb robot said it found food here.

Re:Yea... Right... (1)

Chris Burke (6130) | more than 4 years ago | (#29127035)

Wait a sec? repelled by a "I found some food light"? Is this a suicide robot?

Well, as it says in the previous sentence, this was only after they had learned to not turn on their lights when near food. So they weren't "I found food" lights anymore -- not that they ever really were; they started out flashing randomly, but an accumulation of lights suggested that there was a food source. So moving away from a lighted robot isn't necessarily suicidal. On the other hand, just because a robot has its light off when it's found food doesn't mean you can't still piggy-back off its food-finding efforts, and thus the other strategies may have had some benefit too.

Additionally, the scientists are poking around with the code all the time, the article emphatically mentions it. There is no evolution what-so-ever going on here. Just new options made available by code that is updated by cause of the scientists.

Actually, it is very much "evolution", at least in its Machine Learning form, Genetic Algorithms. The scientists were not deliberately creating specific new behaviors for the robots to choose from; they were allowing the GA to guide the learning process.

The team "evolved" new generations of robots by copying and combining the artificial neural networks of the most successful robots. The scientists also added a few random changes to their code to mimic biological mutations.

Everything described there is a bog-standard part of Genetic Algorithms -- replace "robots" with "solutions" and you have a very basic introductory description of what Genetic Algorithms are. You copy successful robots into the next generation to simulate "survival of the fittest"*, you randomly combine the networks of successful robots to simulate genetic crossover in sexual reproduction, and you make random changes to simulate mutation.

At no point does it suggest that they deliberately added any specific behavior. The only thing it attributes to the scientist's direct actions is "The team programmed small, wheeled robots with the goal of finding food" which just means they made their food-finding point system the basis for the Genetic Algorithm's "Fitness Function". Selecting the particulars of the GA is a deliberate choice, but after that it's all just random changes filtered through the fitness function, just like real-life evolution.

A little common sense helps to cut through the bull-garbage here.

Lol, yeah. What does "common sense" have to do with not knowing what the scientists were actually doing?

Call me when a robot runs over to a black ring, emits the "I found food" light to dupe the rest, and then secretly runs over to a blue light while the other stooges mill about wondering why some dumb robot said it found food here.

Give it a couple hundred more generations, and I wouldn't be surprised to find that behavior come up. Or other ones that may come as a complete surprise.

* Well and because keeping the current best solutions around helps prevent regression.
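
A bare-bones sketch of that loop, for the record: the experimenters' scoring rule becomes the fitness function, a few elite genomes are carried over unchanged (per the footnote above), and everything else is random variation filtered through the score. This is illustrative only; run_arena is a hypothetical stand-in for the actual food/poison trial and all the settings are made up.

import random

def run_arena(genome):
    """Hypothetical stub: drop the controller into the arena and return its score,
    e.g. points per tick near food minus points per tick near poison."""
    return random.gauss(sum(genome), 1.0)    # placeholder so the loop runs

def next_generation(pop, n_elite=4, mut_sigma=0.2):
    ranked = sorted(pop, key=run_arena, reverse=True)
    survivors = ranked[:n_elite]             # elitism: best solutions carry over as-is
    children = []
    while n_elite + len(children) < len(pop):
        mom, dad = random.sample(ranked[:len(pop) // 2], 2)   # breed from the top half
        children.append([random.choice(w) + random.gauss(0, mut_sigma)
                         for w in zip(mom, dad)])             # crossover + mutation
    return survivors + children

population = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(20)]
for generation in range(50):
    population = next_generation(population)

best = max(population, key=run_arena)
print("best evolved score:", run_arena(best))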

We are all just squishy robots... (1)

jameskojiro (705701) | more than 4 years ago | (#29126449)

We are all just robots based off sloppy biological coding.

Re:We are all just squishy robots... (1)

geekoid (135745) | more than 4 years ago | (#29126895)

Sloppy? It's pretty damn good coding. Adaptable, changeable, and self-propagating, with random changes that are only used if needed.

A more advanced experiment... (2, Interesting)

Baron_Yam (643147) | more than 4 years ago | (#29128029)

I'd love to see the robots given hunger, thirst, and a sex drive. Make 1/2 the robots girls with red LEDs and 1/2 the robots boys with blue LEDs.

Make the food and water 'power', and give them the ability to 'harm' each other by draining power.

The girls would have a higher resource requirement to reproduce.

It'd be interesting to see over many generations what relationship patterns form between the same and opposite sex.

robots are tools, not moral creatures. (0)

Anonymous Coward | more than 4 years ago | (#29128099)

It is not funny to me that these kinds of non-stories get to be put out on this site. Why? Why would it matter to me? Because I actually know quite a bit about robotics. How to make things really work.

The idea of deception is a human one. And thus this claim that a robot can deceive looks at the robot as if it were somehow analogous to a human being. A robot is like a toothbrush or a screwdriver or a light bulb: it is a tool. Period.

If you set up a robotic network that puts robots out somehow (in some fantasy world of your creation) and these robots 'deceive', it isn't them, it is you. You are the deceiver.

Robots have no more consciousness than a toaster oven or a relay. They are not human, they have no morals, they thus can not 'deceive'. You can, however, say that robots can be used (by a human being) as a tool of deception.

Look at how there are companies deceiving the military and getting contracts for robotics by saying 'someday these robots will know who to kill'.
Deception. The robots won't be the ones killing. They are like the bullets. They might be smart bullets but there is always a human who puts that bullet into the chamber.

The moral imperative should be clear: to provide morality and a code of ethics in the use of this new class of tool. These tools are merely extensions of the human creature. No one will ever be able to face a court and say 'no, the robot thought for itself and did the killing on its own, I am not responsible.' The robot is like a gun in that case. If you set it loose you are responsible, and society will hold you accountable for your amorality.
It is ridiculous, and all of you reading this know it, to imagine that a computer character in a video game could achieve consciousness. The people who think this way are living in a dream. They want to sell books, they want to sell product. But I'm not buying it.

robots are tools, nothing more, nothing less.

They can be as real as a character in a video game is. Nothing more.

And they named it a 'politician' (0)

Anonymous Coward | more than 4 years ago | (#29128249)

Plenty of non-robot versions abound!
