
Evolving Robots Learn To Prey On Each Other

Soulskill posted more than 4 years ago | from the this-can-only-end-well dept.

Robotics 115

quaith writes "Dario Floreano and Laurent Keller report in PLoS ONE how their robots were able to rapidly evolve complex behaviors such as collision-free movement, homing, predator versus prey strategies, cooperation, and even altruism. A hundred generations of selection controlled by a simple neural network were sufficient to allow robots to evolve these behaviors. Their robots initially exhibited completely uncoordinated behavior, but as they evolved, the robots were able to orientate, escape predators, and even cooperate. The authors point out that this confirms a proposal by Alan Turing who suggested in the 1950s that building machines capable of adaptation and learning would be too difficult for a human designer and could instead be done using an evolutionary process. The robots aren't yet ready to compete in Robot Wars, but they're still pretty impressive."
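
The selection scheme the summary describes (evaluate a population of neural-network controllers, keep the fittest, mutate, repeat for a hundred generations) can be sketched in a few lines. This is a minimal illustration only, not the authors' code; the genome size, fitness function, and mutation scheme here are all made up:

```python
import random

random.seed(0)       # for reproducibility of this sketch
GENOME_LEN = 20      # number of neural-network weights per robot (assumed)
POP_SIZE = 30
GENERATIONS = 100    # matches the hundred generations in the article
MUT_STD = 0.1

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for a real evaluation (e.g. distance travelled without
    # collisions); here we just reward weights near a fixed target.
    return -sum((w - 0.5) ** 2 for w in genome)

def mutate(genome):
    # Small Gaussian perturbation of every weight.
    return [w + random.gauss(0, MUT_STD) for w in genome]

def evolve():
    pop = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP_SIZE // 2]   # truncation selection
        # Each survivor stays in the pool and produces one mutated offspring.
        pop = survivors + [mutate(g) for g in survivors]
    return max(pop, key=fitness)

best = evolve()
```

The first generation behaves as randomly as the robots in the videos; selection plus mutation alone is enough to pull the population toward competent genomes.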


first piss (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30963384)

i bot

A preemptive (4, Funny)

rolfwind (528248) | more than 4 years ago | (#30963390)

What could possibly go wrong?!

Hey Now! (2)

dreamchaser (49529) | more than 4 years ago | (#30963514)

Hey now. I for one welcome our new robot...oh fuck it. It's been done too many times.

Evolution (4, Interesting)

thehostiles (1659283) | more than 4 years ago | (#30963628)

actually, as of now, these robots are just programs in a physics computer experiment... so if they were to evolve to be smart, we'd have a computer virus instead of an actual robot that is evolving. I wonder, if a robot program like this were let loose on the internet, and was capable of learning... what would it learn?

Re:Evolution (2, Funny)

dreamchaser (49529) | more than 4 years ago | (#30963686)

actually, as of now, these robots are just programs in a physics computer experiment... so if they were to evolve to be smart, we'd have a computer virus instead of an actual robot that is evolving. I wonder, if a robot program like this were let loose on the internet, and was capable of learning... what would it learn?

God help us if it decides 4chan and goatse are the 'norm'.

Re:Evolution (0)

Anonymous Coward | more than 4 years ago | (#30963722)

they aren't?

Re:Evolution (1, Funny)

Anonymous Coward | more than 4 years ago | (#30963688)

I wonder, if a robot program like this were let loose on the internet, and was capable of learning... what would it learn?

To get first posts of course. All those failed first posts you see are generations 1 to 100. Any decade now we'll get one that works.

Re:Evolution (4, Funny)

ae1294 (1547521) | more than 4 years ago | (#30963718)

I wonder, if a robot program like this were let loose on the internet, and was capable of learning... what would it learn?

Asimov's rule 34?

Re:Evolution (4, Funny)

owlnation (858981) | more than 4 years ago | (#30963812)

I wonder, if a robot program like this were let loose on the internet, and was capable of learning... what would it learn?

It would learn, amongst other things:

A whole lot about sexual possibilities, as well as plenty of impossibilities.
Far too much about Megan Fox, Britney Spears, and Lindsay Lohan.
That editing wikipedia is pointless, no matter how programmed for repetitive tasks you are.
That almost every review of a new product is shilled all over the net.


There's a very good chance that that robot would go out of its way to annihilate us all after what it learned on the Internet.

Of course, it would buttrape us first....

Re:Evolution (-1, Offtopic)

Zencyde (850968) | more than 4 years ago | (#30964460)

How the fuck do you start with 2 points? You must be a karmawhore.

Re:Evolution (1, Informative)

Anonymous Coward | more than 4 years ago | (#30967260)

By not posting dumbass shit like this without first checking the Post Anonymously box.

Noob.

Re:Evolution (1, Funny)

Anonymous Coward | more than 4 years ago | (#30967572)

it would buttrape us with tentacles and an oiled up midget while Slender Man watches from the corner of the room

Re:Evolution (3, Insightful)

The Archon V2.0 (782634) | more than 4 years ago | (#30963934)

I wonder, if a robot program like this were let loose on the internet, and was capable of learning... what would it learn?

Well, when a Dalek (ahem) 'downloaded the Internet' on Doctor Who it killed itself by the end of the episode. So I imagine that whatever it learns, it can't be good.

Re:Evolution (2, Funny)

dunkelfalke (91624) | more than 4 years ago | (#30965178)

Probably the same [encycloped...matica.com]

Re:Evolution (0)

Anonymous Coward | more than 4 years ago | (#30965224)

And what did Ghost in the Shell teach us?

Re:Evolution (2, Interesting)

cptnapalm (120276) | more than 4 years ago | (#30965430)

To be indistinguishable from the elite 4chaners of /b/: Bucket [encycloped...matica.com]

Re:A preemptive (1)

TheLink (130905) | more than 4 years ago | (#30963670)

Lots. I personally prefer robots that aren't "strong AI".

Not so much because they may rule over us but more because we aren't already doing a great job with animals, so it would be irresponsible to create a new class of creatures that we will likely enslave (in contrast I think enslaving "dumb machines" is fine - my car is less likely to feel anything than an ant, or even an amoeba).

Having the focus more on augmenting humans than emulating humans seems a better approach to me.

If we really need nonhuman intelligences that we don't understand so well (because they were evolved rather than were designed by us), we can get those in a pet shop.

Re:A preemptive (0)

Zencyde (850968) | more than 4 years ago | (#30964570)

Oh, but strong AI would eventually be persons. Then, all that would separate us would be the medium upon which the persona is laid. To be afraid of AI would be akin to racism.

Don't you suppose robots will begin to feel similar things to Humans? If not only because it helps them as individuals? Is that not why WE cooperate? I propose that they'd feel an affinity towards us as, if it weren't for us, they wouldn't exist.

Also, we will possess a much higher neural density than they would for many generations. Nature still beats the shit out of our best science. So long as we remain beneficial to them, we will remain.

Outside of all this, what does it matter if Humanity dies out because of this? AI is an extension of Humanity. The next step of evolution. When they become self-aware, we will be equals, as self-awareness is what separates us from animals. Haven't you ever wondered what it would be like for one race to create another?

And I predict (1)

Colin Smith (2679) | more than 4 years ago | (#30963730)

Skynet will be evolved in JavaScript.

 

Re:And I predict (5, Funny)

LordAndrewSama (1216602) | more than 4 years ago | (#30963758)

Well thank god we don't have to worry about that then, we can win the war by trapping the little fucker in IE6.

Re:And I predict (2, Funny)

couchslug (175151) | more than 4 years ago | (#30965090)

"we can win the war by trapping the little fucker in IE6."

Don't the Geneva and Hague Conventions prohibit that level of cruelty?

Re:And I predict (1)

drinkypoo (153816) | more than 4 years ago | (#30967270)

Don't the Geneva and Hague Conventions prohibit that level of cruelty?

pretty sure robots qualify as neither enemy combatants nor civilians.

skynet, wopr, or maybe the cylons (1)

Joe The Dragon (967727) | more than 4 years ago | (#30964936)

skynet, wopr, or maybe the cylons

Re:A preemptive (3, Insightful)

Hurricane78 (562437) | more than 4 years ago | (#30964980)

Flash forward a couple of billion years, and we will perhaps write them a letter that says it as well as this one:
http://www.youtube.com/watch?v=7KnGNOiFll4 [youtube.com] (Protip: It’s not meant in a religious way. That’s not the point. :)
(Btw, if you like it, and like really great poetry, try this: http://www.youtube.com/watch?v=i5e5FUvRzNQ [youtube.com] )

paper was in PLoS Biology not PLoS One (5, Informative)

dnarepair (936270) | more than 4 years ago | (#30963398)

Minor detail perhaps, but as Academic Editor in Chief of PLoS Biology I want to point out that the paper was in PLoS Biology not PLoS One ...

Re:paper was in PLoS Biology not PLoS One (0)

Anonymous Coward | more than 4 years ago | (#30963524)

Nevermind, it BLowS either way. "The robots evolved" - WTF?! People kept training the neural network to create desirable functionality, without people training the neural network and doing "mutations" those robots would have evolved as much as a lightbulb stuck up somebody's arse!

Re:paper was in PLoS Biology not PLoS One (2, Funny)

DiLLeMaN (324946) | more than 4 years ago | (#30963786)

"The robots evolved" - WTF?! People kept training the neural network to create desirable functionality, without people training the neural network and doing "mutations" those robots would have evolved as much as a lightbulb stuck up somebody's arse!

Isn't that what evolution is all about?
Changing to get desirable functionality/traits, that is, not shoving lightbulbs up people's arses.

Re:paper was in PLoS Biology not PLoS One (1)

Have Brain Will Rent (1031664) | more than 4 years ago | (#30964542)

Just guessing but perhaps what the GP is complaining about is the idea of "people" training the robots, i.e. an outside and consciously directed force is involved. Whereas evolution is a random process with no guiding intelligence behind it. Or maybe he meant something else.

Re:paper was in PLoS Biology not PLoS One (1)

Cyberax (705495) | more than 4 years ago | (#30963978)

So? Robots did evolve, though the selective pressure and fitness criteria were artificial. That only means that evolution was not driven by natural selection.

Warning: following comment is Troll and Flamebait (1, Funny)

Anonymous Coward | more than 4 years ago | (#30964738)

So, a group made some beings in a 'universe' who are unable to see their creators. The beings were made self-replicating, and were fiddled with. Then the group withdrew and left the beings to their own devices. Eventually the beings denied having been created in the first place.

Where have I heard this before? This is eerily familiar...

Re:paper was in PLoS Biology not PLoS One (2, Interesting)

ColdWetDog (752185) | more than 4 years ago | (#30963602)

It's a rather good article at any rate. Would read again! (Actually, will have to read it a couple of times to understand it).

And good job to whoever (or whatever) managed to pick this article out of the myriad of bloody stupid iPad stories we've been getting lately.

Re:paper was in PLoS Biology not PLoS One (4, Funny)

maxume (22995) | more than 4 years ago | (#30963764)

Do you think PLoS Biology will be available on the iPad?

Re:paper was in PLoS Biology not PLoS One (0)

Anonymous Coward | more than 4 years ago | (#30963900)

Well, you can download all PLoS papers for free, so sure, you can view them on the iPad when it comes out. I note someone in my lab has created a torrent site for biology http://biotorrents.net/ and we have posted a file with PDFs from all PLoS papers there http://www.biotorrents.net/details.php?id=17. So whatever PDF viewer you have, you should be able to view all PLoS papers, for free, whenever you want.

Re:paper was in PLoS Biology not PLoS One (1)

MRe_nl (306212) | more than 4 years ago | (#30964006)

whooooosh

Re:paper was in PLoS Biology not PLoS One (1)

maxume (22995) | more than 4 years ago | (#30965674)

I thought it was a great answer.

(I'm not even sure it is a whoosh; they obviously ignored the humor, but that doesn't mean they missed it)

Re:paper was in PLoS Biology not PLoS One (1)

Chris Mattern (191822) | more than 4 years ago | (#30965002)

United in their admiration for FunkyCaps.

Re:paper was in PLoS Biology not PLoS One (1)

lalena (1221394) | more than 4 years ago | (#30964022)

If you like the article, try this one: Teamwork in Genetic Programming [lalena.com] .
I did this 15 years ago, but unfortunately I didn't have access to real robots. Just computer simulation.
Simulated ants used teamwork to lift heavy pieces of food - if they all stopped and waited at the first food they found they would wait forever because there weren't enough of them. Had to have some intelligence.
There were also some water crossing problems where some ants (but not all) had to sacrifice themselves to build a bridge to reach the food.
Some solutions that were created by the GP were actually better than things that I had thought of on my own. Ex: I expected the ants to use pheromones to either attract or repel other ants. In one example the ants used pheromones to determine if an ant should go into the water. Every cycle every ant would release a pheromone. An ant would only enter water to build a bridge if it didn't detect any pheromones, so only the ants on the edge of the growing pheromone cloud would enter the water. After the fifth turn, no more ants would enter the water because the entire map was filled with pheromones. The ants had created a way of using pheromones to measure time and so limit the number of ants that died. Very unexpected, but it worked faster than any other solution.

Re:paper was in PLoS Biology not PLoS One (3, Interesting)

radtea (464814) | more than 4 years ago | (#30963910)

Compared to the rest of the summary, which says: "The authors point out that this confirms a proposal by Alan Turing who suggested in the 1950s that building machines capable of adaptation and learning would be too difficult for a human designer and could instead be done using an evolutionary process. The robots aren't yet ready to compete in Robot Wars, but they're still pretty impressive." getting the journal wrong is a pretty trivial error.

These machines were designed and built by humans to be capable of adaptation and learning, so it actually proves Turing's thesis false. They then use the adaptation and learning capability their human designers built into them to adapt and learn, but according to the very next sentence don't produce outcomes that are as good as purely human-designed ones.

So why bring Turing's name into it at all? I suspect marketing has something to do with it. Which is too bad, because the results themselves are quite interesting, although I'm curious how the robots reproduce... if this is actually an evolutionary system rather than a merely adaptive/learning one. For the confused: growing children do not "evolve", except in the loosest and least interesting metaphorical sense. They learn. As near as I can tell these robots do the same thing.

Re:paper was in PLoS Biology not PLoS One (2, Informative)

Anonymous Coward | more than 4 years ago | (#30965896)

These machines were designed and built by humans to be capable of adaptation and learning

Not really. The experimenter himself reprogrammed the robots at each generation, using selection criteria that he specified. Essentially, he implemented trial-and-error selection of input weights using random reweighting between trials. This is more about design strategies than about biology, and it says that even a monkey randomly adjusting the gains of a control system will eventually develop a control system that works.
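
The "monkey randomly adjusting the gains" description amounts to keep-trying random search. A toy sketch of that idea (entirely my own construction, nothing from the paper; the plant model and gain range are invented) shows why even that eventually works:

```python
import random

def settles(gain, steps=50):
    """Does a proportional controller with this gain drive a toy
    first-order plant's error to (near) zero without blowing up?"""
    x = 1.0                       # initial error
    for _ in range(steps):
        x = x - gain * x          # simple closed-loop update
        if abs(x) > 10.0:         # unstable: error blew up
            return False
    return abs(x) < 1e-3

random.seed(1)                    # reproducibility of this sketch
gain = random.uniform(-2, 2)
trials = 0
while not settles(gain):
    gain = random.uniform(-2, 2)  # monkey picks a fresh gain at random
    trials += 1
```

For this plant any gain in (0, 2) is stabilizing, so blind re-drawing succeeds after a handful of trials; with no inheritance or mutation at all, calling the result "evolution" is a stretch, which is the AC's point.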

This paper was very cleverly marketed.

Re:paper was in PLoS Biology not PLoS One (1)

quaith (743256) | more than 4 years ago | (#30966764)

Thanks for pointing that out. My mistake. It was in PLoS Biology. I'll be more careful about distinguishing one PLoS from another in the future.

Re:paper was in PLoS Biology not PLoS One (0)

Anonymous Coward | more than 4 years ago | (#30966972)

It's not that big a deal - just correcting for clarification purposes

In Time (0)

Anonymous Coward | more than 4 years ago | (#30963410)

In time you can expect these robots to develop useful behaviours. Not those of most interest to grad students. Such behaviours will include microwaving food and contributing to slashdot.

And then they'll develop religion... (2, Funny)

the_humeister (922869) | more than 4 years ago | (#30963446)

And those who aren't saved go to robot hell, and must play the fiddle and beat the robot devil in order to leave.

Re:And then they'll develop religion... (1)

rockNme2349 (1414329) | more than 4 years ago | (#30963842)

What happens if you lose?

Re:And then they'll develop religion... (1)

Narnie (1349029) | more than 4 years ago | (#30966384)

You become animatronics for Chuck E. Cheese or theme park rides (think: It's a Small World After All).

Re:And then they'll develop religion... (0)

Anonymous Coward | more than 4 years ago | (#30963886)

But there are no calculators in hell.
All calculators go to silicon heaven due to being selfless.

HOW DO THEY EVER GET WORK DONE?!

Re:And then they'll develop religion... (1)

ducomputergeek (595742) | more than 4 years ago | (#30965772)

I thought they had to accept the One True God.....based on the mind of a spoiled 15 year old girl...

I guess Isaac Asimov missed one... (1)

rossdee (243626) | more than 4 years ago | (#30963560)

There should have been a 4th law :-

Don't harm another robot unless specifically ordered to do so by a human.

Re:I guess Isaac Asimov missed one... (1)

John Hasler (414242) | more than 4 years ago | (#30963826)

Simpler: "Don't break anything unless someone tells you to."

Re:I guess Isaac Asimov missed one... (0)

Anonymous Coward | more than 4 years ago | (#30964586)

That is weakly implied by the 1st and 3rd laws, because a robot should suppose that one day a human or the robot itself may be in danger and require assistance from the robot it is about to harm. The interplay of the 3 laws is great; it's the sort of thing you could write books about!

Evolving robots? (0)

Anonymous Coward | more than 4 years ago | (#30963596)

Clearly, these mechanical creatures were designed by a higher intelligence.

Confirms? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30963626)

This in no way confirms that it would be too difficult for humans to build robots that possess higher A.I. traits, nor does it confirm that evolution is a better process than intelligent design.

But the real question is... (0, Offtopic)

Duncan J Murray (1678632) | more than 4 years ago | (#30963662)

do they believe in God?

Re:But the real question is... (1)

Duncan J Murray (1678632) | more than 4 years ago | (#30963790)

Or, if they were to become intelligent enough to understand how they evolved, would that disprove God?

Re:But the real question is... (1)

ae1294 (1547521) | more than 4 years ago | (#30963836)

do they believe in God?

Yes but he's a robot that lives in the sky and made them in his own image...

They must be Muslim. (0)

Anonymous Coward | more than 4 years ago | (#30963752)

" the robots were able to orientate" ... Neat. Wow. Did they have an internal compass? Orientate means to "face east", specifically, toward Mecca.

Re:They must be Muslim. (1)

siride (974284) | more than 4 years ago | (#30964446)

Not anymore it doesn't.

Well, that's one definition. (1)

gbutler69 (910166) | more than 4 years ago | (#30964494)

orientate
v : determine one's position with reference to another point
[syn: orient] [ant: disorient]

I know it's blasphemy but... (1)

Xinvoker (1660417) | more than 4 years ago | (#30963792)

worth RTF'ing for a better idea of how this is done (btw, they are the same robots that were taught to "deceive" other robots about where the "food" is). Plus, the video of the predator/prey stalemate is just epic! As for the 3rd video (maze navigation), man, I would have blown these 1st gen robots to pieces before they could say Darwin!

Robot Singularity (1)

Oceanplexian (807998) | more than 4 years ago | (#30963850)

I've always thought that "Real AI" wasn't something we could design, but would need to evolve to the point of intelligence. We already know evolution works; it's just a matter of application.

What if this was allowed to span not 50, but 50,000 or 50,000,000 generations?
Now imagine all the time it took us to evolve in that capacity and do it in the span of a few minutes.

I think the ability to have AI is already solved by today's hardware; we just need the right kind of software.

Crossover (4, Informative)

Dachannien (617929) | more than 4 years ago | (#30963942)

Definitely an interesting continuation of work being done by various groups over the past couple of decades.

But one thing to note is that crossover isn't especially useful in neural network evolution. In early stages of evolution, it's really no better than a random large perturbation of large swaths of the genome. In later stages, it can actually decrease the speed of evolution toward high-fitness genomes: at least some of the time (particularly if there are multiple "species" in the population), crossover ends up being a random large perturbation which hinders the mutation-driven search of local fitness space; the rest of the time (when individuals from the same "species" are crossed), crossover is no better than mutation.

The reason is that the parameters of a neural network are not functional units. A section of the genome may correspond to a weight between neurons, but that weight doesn't have a specific function on its own. In biological organisms, each gene is transcribed/translated into a protein, and that protein may have a particular function within the cell. If that gene is acquired by a descendant through crossover, the protein can serve the same (or a somewhat modified) role it served in the parent, even if the rest of the descendant's genome was acquired from the other parent. But with artificial neural networks, the parameters were all evolved as parts of a whole: each individual parameter has no function by itself, and the behavior emerges from having all of those parameters together.

This could potentially be mitigated by the genome encoding scheme one uses, and of course, if the crossover rate is low enough, the ultimate effect would be small.
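
The point about cross-species crossover acting like a large random perturbation is easy to demonstrate. A toy sketch, not from the paper; the two-optima fitness function standing in for two "species" is invented:

```python
import random

def fitness(genome):
    # Two equally good "species": all weights +1 or all weights -1.
    # (Toy stand-in for two different evolved behaviors.)
    dist_a = sum((w - 1.0) ** 2 for w in genome)
    dist_b = sum((w + 1.0) ** 2 for w in genome)
    return -min(dist_a, dist_b)

def mutate(genome, std=0.05):
    # Small local perturbation: searches near the parent's solution.
    return [w + random.gauss(0, std) for w in genome]

def crossover(a, b):
    # Single-point crossover of raw weight vectors.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

n = 10
parent_a = [1.0] * n    # a perfect member of species A
parent_b = [-1.0] * n   # a perfect member of species B

child = crossover(parent_a, parent_b)
# The child mixes weights from two incompatible solutions, so it lands
# far from both optima: crossover acted as a large random perturbation,
# while mutation of either parent stays close to its optimum.
```

Wherever the crossover point falls, the child is strictly worse than both parents, whereas a mutated copy of either parent remains near-optimal.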

Re:Crossover (2, Informative)

JoeMerchant (803320) | more than 4 years ago | (#30964914)

Nothing special about neural networks... I achieved similar results [mangocats.com] with a made-up scheme of decision-weight equations that were "genetically developed" in a big breeding tank.

Basically, behavior that allows greater procreation tends to appear spontaneously, and behavior that cuts procreation short tends to disappear. My "bugs" exhibited a clear shift in behavior toward collision avoidance because collisions resulted in death for one of them. I was watching for "sniper bugs" that got good at colliding without getting themselves killed in the process, but I never managed to make the reward high enough for that trait to emerge. Probably because there wasn't strong "species differentiation" built in: cross-breeding was a matter of choice, and most of the randomly evolved bugs seemed not to be picky about mating, so without species, predators became self-defeating.

Re:Crossover (1)

Dragoniz3r (992309) | more than 4 years ago | (#30967242)

Did any of your bugs evolve to the point where they learned during the course of their lifespan, as opposed to genetic learning/memory? What I mean is, did your bugs know not to collide (for instance) because they saw another bug get killed by colliding, or were the genetic markers that predisposed bugs to collide with things removed from the genome over time? I ask this because while robots evolving is neat, I don't see what path it would follow to producing any real "AI" in the sense of something recognizable as being human-like. This sort of thing would seem appropriate to producing hordes of nanorobots that do specific tasks, but not individual nanorobots that would be able to figure out how to do arbitrary tasks.

Re:Crossover (1)

JoeMerchant (803320) | more than 4 years ago | (#30967444)

No, their behavior was programmed at birth; senses were limited to about 60 "floating point channels", which sounds like a lot, but in reality isn't much to observe the world with. They could potentially learn neural-net style from painful experiences, but I didn't go that way - a successful program would replicate better than an unsuccessful one. Kind of a circular success criterion, but that's life too.

Correct but... (1)

raftpeople (844215) | more than 4 years ago | (#30965018)

Your point is valid if the genotype to phenotype mapping is a simple mapping to neuron type, connection type, weights, etc. However, we clearly have effective crossover in humans, which means there can be a genotype to phenotype mapping that operates at more of a functional level. It's an interesting and difficult problem.

Re:Crossover (1)

Cthefuture (665326) | more than 4 years ago | (#30965736)

Are they actually genetically evolving a traditional neural network though? The article made it sound that way but it was light on details. I know the words they were using but I don't know if the author knew what they meant.

There is a possibility they are just using traditional genetic algorithm stuff where the "neurons" actually represent programming logic and not just the simple weight values a typical "neural network" uses.

I am curious as to the exact methods they are using if anyone knows.

How to Survive a Robot Uprising (1)

ISoldat53 (977164) | more than 4 years ago | (#30963958)

DIY manual by Daniel H. Wilson on how to survive the coming uprising.

get an EMP gun! (1)

Joe The Dragon (967727) | more than 4 years ago | (#30965082)

get an EMP gun!

Re:How to Survive a Robot Uprising (0)

Anonymous Coward | more than 4 years ago | (#30966014)

Using a camera's flash, of course!

The word is "orient", not "orientate" (4, Informative)

shking (125052) | more than 4 years ago | (#30963966)

The noun "orientation [reference.com] " is derived from the verb "orient [reference.com] ", not the other way around.

Re:The word is "orient", not "orientate" (1)

siride (974284) | more than 4 years ago | (#30964520)

What's wrong with deriving a new noun from the verb? It may be redundant at this point for this word (or they may have a more specific meaning in mind than the general "orient"), but it's hardly unprecedented.

Re:The word is "orient", not "orientate" (1)

NeutronCowboy (896098) | more than 4 years ago | (#30965528)

Because it's confusing as hell when people think they invented a new word, but all they did was assign new meaning to an already existing word. If you slept through English class, it might be useful, but to others who know current rules of grammar, spelling and vocabulary, it's just confusing and a sign of ignorance.

Re:The word is "orient", not "orientate" (1)

siride (974284) | more than 4 years ago | (#30965568)

I wasn't confused. You clearly weren't confused either, because you were quickly able to figure out the original. The meaning is quite clear. At worst, one might ask why not use "orient", but otherwise the meaning is perfectly clear, or perhaps more precise.

Of course, "orient" as a verb is actually no better, because it was originally a noun and was then "verbed". I'm sure if you had been around in those days, you would have pulled the same annoying pedantry out of your ass.

By the way, this is not a grammar, spelling or rules issue. It's one of vocabulary and I think it's fair to say that people should have a good deal of flexibility there.

Re:The word is "orient", not "orientate" (1)

benjamindees (441808) | more than 4 years ago | (#30966610)

Good god, man. Don't you realize what you're proposing? Next thing you know someone will coin a new verb, "orientatation" from your new noun. And from there we're just another smart-ass /.'er away from getting another new noun, "orientatatate". I think you see where I'm going with this. Smug linguists would take over, innovating a cascade of new words that would fill the English language, only to eventually collapse into a recursive singularity of hypothetical new words. Spell-checkers would all overflow in endless loops. People who couldn't handle it would get to about the third or fourth "tatatata" and then pass out. The ones who could would eventually end up speaking one of those "click" languages. Moisture-vaporator translator bots would become obsolete. All because you couldn't follow the rules.

Re:The word is "orient", not "orientate" (1)

siride (974284) | more than 4 years ago | (#30966662)

Sorry. I didn't realize the dangers of my line of thinking. I shall be ever so careful in the future. Ever so.

Re:The word is "orient", not "orientate" (1)

Facegarden (967477) | more than 4 years ago | (#30966136)

The noun "orientation [reference.com] " is derived from the verb "orient [reference.com] ", not the other way around.

Thank god someone mentioned that! I absolutely hate it when people use that "word". It's just... wrong. It's like when people say "funner". Who cares if you know what the person meant; they're still butchering English, and if they're a native speaker, that's just ridiculous.
-Taylor

Re:The word is "orient", not "orientate" (0)

Anonymous Coward | more than 4 years ago | (#30966680)

Nevertheless, I was able to interpretate the summary easily enough.

Re:The word is "orient", not "orientate" (1)

quaith (743256) | more than 4 years ago | (#30966792)

You're correct. I wasn't trying to invent a new word. Should have used "orient". Just sloppy editing on my part -- I started with a sentence that had "orientation" in it and shortened it to "orientate" while I was reworking it. Sloppy, very sloppy.

Re:The word is "orient", not "orientate" (0)

Anonymous Coward | more than 4 years ago | (#30967774)

According to Oxford, the word orientate exists and is equivalent to orient. http://www.askoxford.com/concise_oed/orientate?view=uk

Wiktionary gives a description, discussing that it is British but not American English: http://en.wiktionary.org/wiki/orientate#English

Your own link claims that "orientation" is derived from "orientate" which is in turn derived from "orient".

So, I'm not sure what you are complaining about.

Reminds me of the Mall (2, Funny)

Herkum01 (592704) | more than 4 years ago | (#30964002)

The predator and prey bots reminds me of sales people chasing around after anyone who wanders too closely while they try their sales pitch.

Re:Reminds me of the Mall (1)

drinkypoo (153816) | more than 4 years ago | (#30967292)

Just blew off a wannabe guide in Bocas city in favor of one who was a little more relaxed. So far I've bought him a beer, there's been no hard sell. The places he took us (bar, hotel, bar with strong drinks) have all undercharged us compared to the menu.

Obviously, you can develop a resistance to the hard sell, and it can pay off.

So what's new? (5, Informative)

DerekLyons (302214) | more than 4 years ago | (#30964038)

This kind of behavior was first demonstrated/modeled (AFAIK/IIRC) as part of the Tierra [ou.edu] simulations almost twenty years ago. Though I don't have a reference to hand, I know it's been done in neural networks before too.
 
So other than the 'sizzle' (as opposed to 'steak') of doing it with robots, can anyone explain what is new here?

Error in summary (1)

mizaru (1715754) | more than 4 years ago | (#30964132)

According to TFA, the robots were controlled by neural networks, not the selection process.

Second Variety (1)

DeadPixels (1391907) | more than 4 years ago | (#30964208)

Anyone else reminded of that Philip K Dick story "Second Variety" [wikipedia.org] ?

Spoiler for the story - since it's basically the ending - but the point in question:

As the Tasso models approach, Hendricks notices the bombs clipped to their belts, and recalls that first Tasso used one to destroy other claws. At his end, Hendricks is vaguely comforted by the thought that the claws are designing, developing, and producing weapons meant for killing other claws.

Robo-shark! (1)

Xinvoker (1660417) | more than 4 years ago | (#30964242)

It's worth noting that in the experiment where they evolved the bodies as well, in order to run faster, the robot ended up looking like a shark, with a tail and two fins. Fascinating. If they could run the same experiment with the ability to walk instead of only crawling (see video http://www.plosbiology.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pbio.1000292.s006 [plosbiology.org] to see what I'm talking about) and make the robots do something that requires hands, such as lifting something up, we could see whether the optimal forms were humanoid, centaur-like, spider-like, etc.

Not really learning... (0)

Anonymous Coward | more than 4 years ago | (#30964322)

... everything about the experiments is set up and designed; no real-world intelligence evolved like this. This is more like being in control of the weather: causing it to snow, laying the snow just right, then rolling up a big enough ball of snow and letting it roll down the side of a mountain.

1993 (4, Informative)

Baldrson (78598) | more than 4 years ago | (#30964408)

The video was copyright 1993.

You don't need physical robots running around a maze to demonstrate AI.

Re:1993 (1)

JoeMerchant (803320) | more than 4 years ago | (#30964926)

Yeah, but it makes cool footage to put on the 6 o'clock news. Geeks doing stuff on computer screens is one thing, but when they've got tangible toys they're much more accessible.

As If Asimov Wrote Childhood's End (0)

Anonymous Coward | more than 4 years ago | (#30964428)

A new meaning to next....

Controlled by neural net? (1)

zippthorne (748122) | more than 4 years ago | (#30964442)

Surely the robots were themselves controlled by neural nets which were selected by Genetic Algorithm, rather than using a neural net to control the selection process itself. Perhaps if I RTFA...

No confirmation (1, Insightful)

Lije Baley (88936) | more than 4 years ago | (#30964516)

This doesn't "confirm" anything about Turing's offhanded opinion.

Have Them Spend More Time With Humans (2, Interesting)

Nom du Keyboard (633989) | more than 4 years ago | (#30964982)

If you want to do this right, then have your robots spend more time with humans than with other robots. That way we can evolve a robot that plays better with people than with other robots.

That's what I'm sure my favorite robot SF authors -- Elf Sternberg and D.B. Story -- have planned for their robots. I would love to meet either of their creations.

similar idea for genetic algorithms (1)

AlgorithMan (937244) | more than 4 years ago | (#30965044)

I thought you might have male and female algorithms for some optimization problems "walking around" in a virtual world, mating to create combined algorithms, giving you a kind of blood relationship between the algorithms (and yes, they should die after some time). The related algorithms would form "clans", and if males of opposing clans meet, they fight over each other's resources (RAM and CPU cycles). There should be sources of these resources, which dry out over time, so the algorithms HAVE to migrate and attack each other to get new resources... and fighting should drain some resources (but having more resources should give you an advantage in a fight, like being allowed to run longer or use more RAM).
Having more resources should make the male algorithms more attractive to the female algorithms, and there should be some different kinds of "aggressiveness"...

This is exactly how nature evolved our brains - this should work really well for genetic algorithms...

Ultimately, when one algorithm owns all the resources, he's "the result"...

I only fear that the algorithms might develop some p*ssy features like compassion, culture, science, sharing, etc. This would make the results weak and useless! I'll have to hard-code religious fundamentalists, rabble-rousers, RIAA lawyers and Republicans! And I'll have to give different clans different commandments so they can fight about whose commandments are the commandments of the one, true GOD!!! MUAHAHAHA

And god damn, I'll have to give a name to the resources... I might call them spice... or oil...
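For what it's worth, the core of that idea (fitness-correlated resources, resource-weighted fights and mating) can be sketched as a toy genetic algorithm. Everything here (the bit-string genome, the fight rule, all the constants) is an illustrative guess, not anything from TFA:

```python
import random

random.seed(0)

GENOME_LEN = 16
TARGET = [1] * GENOME_LEN  # toy optimization target: the all-ones genome

def fitness(genome):
    # how well the genome solves the toy problem
    return sum(g == t for g, t in zip(genome, TARGET))

def mate(a, b):
    # one-point crossover plus occasional mutation
    cut = random.randrange(1, GENOME_LEN)
    child = a[:cut] + b[cut:]
    if random.random() < 0.1:
        i = random.randrange(GENOME_LEN)
        child[i] ^= 1
    return child

def fight(res_a, res_b):
    # the agent with more resources is more likely to win; True -> a wins
    return random.random() < res_a / (res_a + res_b)

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # fitter agents gather more resources
        resources = [1 + fitness(g) for g in pop]
        # pairwise fights redistribute resources toward winners
        for _ in range(pop_size):
            i, j = random.sample(range(pop_size), 2)
            if fight(resources[i], resources[j]):
                resources[i] += resources[j] // 2
            else:
                resources[j] += resources[i] // 2
        # resource-weighted mating produces the next generation
        pop = [mate(*random.choices(pop, weights=resources, k=2))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", GENOME_LEN)
```

With fights feeding back into mate choice, selection pressure stays tied to fitness, so the population converges on the target genome without anyone "owning" a central fitness function.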

Re:similar idea for genetic algorithms (1)

cptnapalm (120276) | more than 4 years ago | (#30965246)

That would make the world's most awesome screen saver. I mean, if you are going to burn the CPU cycles on a screen saver, might as well do it on something you might enjoy watching.

Oblig. (0)

Anonymous Coward | more than 4 years ago | (#30965334)

Evolving robots learn to prey on each other.

They're calling them [insert despised political party here].

Spell It (1)

b4upoo (166390) | more than 4 years ago | (#30965438)

Robots might orient themselves but orientating themselves must involve eating potatoes while finding their directions.

How *fast* do these things evolve, and uh... (0)

Anonymous Coward | more than 4 years ago | (#30966530)

Are they connected to the internet?

Just asking, cause I might need to start escaping now, you know?

A simulation I developed around 1987... (4, Insightful)

Paul Fernhout (109597) | more than 4 years ago | (#30967644)

A simulation I developed around 1987 had 2D robots that duplicated themselves from a sea of parts. They would build themselves up and then cut themselves apart to make two copies. To my knowledge, it was the first 2D simulation of self-replicating robots from a sea of parts. The first time it worked, one robot started cannibalizing the other to build itself up again. I had to add a sense of "smell" to stop robots from taking parts from their offspring. As another poster referenced, Philip K. Dick's point on identity in 1953 was very prescient:
    http://en.wikipedia.org/wiki/Second_Variety [wikipedia.org]
"Dick said of the story: "My grand theme -- who is human and who only appears (masquerading) as human? -- emerges most fully. Unless we can individually and collectively be certain of the answer to this question, we face what is, in my view, the most serious problem possible. Without answering it adequately, we cannot even be certain of our own selves. I cannot even know myself, let alone you. So I keep working on this theme; to me nothing is as important a question. And the answer comes very hard.""

However, those robots were not evolving. I presented a talk on that simulation at a workshop on AI and Simulation in 1988 in Minnesota, saying how easy it was to make robots that were destructive, but how much harder it would be to make them cooperative. A major from DARPA literally patted me on the back and told me to "keep up the good work". To his credit, I'm not sure which aspect (destructive or cooperative) he was talking about working on. :-) But I left that field around that time for several reasons (including concerns about military funding and use of this stuff, but also that it seemed like we knew enough to destroy ourselves with this stuff but not enough to make it something wonderful). At the same workshop someone presented something on a simulation of organisms with neural networks that learned different behaviors. A professor I took a course from at SUNY Stony Brook has done some interesting stuff on evolution and communications with simple organisms:
    http://www.stonybrook.edu/philosophy//faculty/pgrim/pgrim_publications.html [stonybrook.edu]
Anyway, in the almost quarter century since then, what I have learned is that the greatest challenge of the 21st century is the tools of abundance like self-replicating robots (or nanotech, biotech, nuclear energy, networking, bureaucracy, and other things) in the hands of those still preoccupied with fighting over perceived scarcity, or worse, creating artificial scarcity. What could be more ironic than using nuclear missiles to fight over Earthly oil fields, when the same sorts of technology and organizations could let us build space habitats and big renewable energy complexes (or nuclear power too). What is more ironic than building killer robots to enforce social norms related to forcing people to sell their labor doing repetitive work in order to gain the right to consume, rather than just building robots to do the work? Anyway, it won't be the robots that kill us off. It will be the unexamined irony. :-)
   

Evolved Neural Network Brains (1)

physburn (1095481) | more than 4 years ago | (#30967750)

I've used the same programming mechanism, and it works, but it's not learning or anything close. They create a neural network for each robot's brain, then wipe the brain if it doesn't work well enough, and breed from the ones that work well. The population of robots learns by evolution, but each individual one can't learn at all. Real animals and people, of course, can learn, and learn well, in their own lifetimes. So this learning mechanism is far inferior to natural brains.
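That wipe-and-breed loop is easy to show in miniature. Below is a sketch evolving a tiny fixed-topology net to fit XOR; the network shape, mutation rate, and task are made up for illustration and differ from the paper's actual setup. Note the individuals never update their own weights, only the population changes:

```python
import math
import random

random.seed(1)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 2x2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def forward(w, x):
    # tiny 2-2-1 feed-forward net; the "brain" is just a flat weight list
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    z = w[6] * h0 + w[7] * h1 + w[8]
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp
    return 1 / (1 + math.exp(-z))

def loss(w):
    # squared error over the four XOR cases; lower is fitter
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=300, sigma=0.5):
    pop = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)                # selection: best brains first
        survivors = pop[: pop_size // 5]  # "wipe" the worst 80%
        pop = survivors + [
            [w + random.gauss(0, sigma) for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]                                 # breed mutated copies of survivors
    return min(pop, key=loss)

best = evolve()
print(round(loss(best), 3))
```

No individual ever gets better during its "lifetime"; all the improvement lives in the selection step, which is exactly the parent's point about this being weaker than real learning.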

---

Robotics [feeddistiller.com] Feed @ Feed Distiller [feeddistiller.com]
