
Swiss Experimenter Breeds Swarm Intelligence

kdawson posted more than 4 years ago | from the what-hath-ai-wrought dept.

Robotics | 144 comments

destinyland writes "Researchers simulated evolution with multiple generations of food-seeking robots in a new study of artificial swarm intelligence. 'Under some conditions, sophisticated communication evolved,' says one researcher. And in a more recent study, the swarms of bots didn't just evolve cooperative strategies — they also evolved the ability to deceive. ('Forget zombies,' joked one commenter. 'This is the real threat.') 'The study of artificial swarm intelligence provides insight into the nature of intelligence in general, and offers an interesting perspective on the nature of Darwinian selection, competition, and cooperation.' And there's also some cool video of the bots in action."



Of all the countries... (4, Funny)

Kenja (541830) | more than 4 years ago | (#29874629)

Of all the Nations, I would never have thought it would be the Swiss who would start the robot apocalypse. I had Germany in my betting pool...

Re:Of all the countries... (4, Funny)

Starayo (989319) | more than 4 years ago | (#29874723)

Never trusted 'em, myself, what with their knives, and their cheeses.

Re:Of all the countries... (3, Funny)

commodoresloat (172735) | more than 4 years ago | (#29875845)

yeah but those clocks... you can set your watch by them!

Re:Of all the countries... (3, Funny)

oldspewey (1303305) | more than 4 years ago | (#29874935)

If history has taught us anything, it's that if you wait long enough, eventually the Germans will come to you.

Re:Of all the countries... (2, Insightful)

Tiger4 (840741) | more than 4 years ago | (#29875997)

I heard an old military intel guy say this about the Germans, "they're either at your feet or at your throat".

Re:Of all the countries... (0)

Anonymous Coward | more than 4 years ago | (#29876869)

The Germans? My bet was on the good old US of A. Well, you never know, might still happen :p

Re:Of all the countries... (0)

Anonymous Coward | more than 4 years ago | (#29877311)

I'm more concerned that he was able to breed it. Must not be illegal in Switzerland.

Skynet shows its face again (1)

Tiger4 (840741) | more than 4 years ago | (#29874631)

First XKCD [xkcd.com] points out the obvious weapons end of things, now this guy announces how the brains have already been developed.

Re:Skynet shows its face again (1)

56 (527333) | more than 4 years ago | (#29874949)

Just don't try to unplug one of these robots, they'll do whatever it takes to get that sweet sweet electricity...

Re:Skynet shows its face again (1)

nobodylocalhost (1343981) | more than 4 years ago | (#29875211)

What makes you think life as we know it isn't nano swarm intelligence gone terribly wrong?

Re:Skynet shows its face again (1)

Tiger4 (840741) | more than 4 years ago | (#29875829)

What makes you think life as we know it isn't nano swarm intelligence gone terribly wrong?

It has gone terribly wrong. But it isn't nano. One look at Rosie O'Donnell will tell you that. Intelligence is open to debate for similar reasons.

to counteract (5, Funny)

Keruo (771880) | more than 4 years ago | (#29874633)

To counteract his theories about swarm-intelligence, I sent the researcher a link to 4chan.

Re:to counteract (1)

Kratisto (1080113) | more than 4 years ago | (#29875081)

That'll never work on machines. Not unless we build them with genitals.

Ah, but that's too short a timeframe (0)

Anonymous Coward | more than 4 years ago | (#29875153)

You should see what the next generation looks like, based on how many of the anonymous get to reproduce.

I think that 4chan would be the equivalent of "poison" in the tests.

Re:to counteract (5, Funny)

snarfies (115214) | more than 4 years ago | (#29875369)

Actually, not a bad way to poison a bot's intelligence. It's been done before, with hilarious results (well, if you like 4chan-style humor), with a chatbot called Bucket. It was designed to pick up the basics of the English language and conversation techniques from random internet users.

Then 4chan found it.

http://encyclopediadramatica.com/Bucket [encycloped...matica.com] has the full story, along with quotes and screenshots.

Re:to counteract (1)

plasmacutter (901737) | more than 4 years ago | (#29875705)

you really can't blame 4chan for this.

If you go to the forums [jonanin.com] you will notice there are no CAPTCHA systems or even basic registration systems to allow proper bans to be imposed.

Every thread is quickly found by a forum spam bot and filled to the maximum page length with spam links.

This is a failure of basic security.

Re:to counteract (1)

MonsterTrimble (1205334) | more than 4 years ago | (#29877529)

Seriously, that link is brutal and quite unreadable.

Although could we have Bucket run through slashdot?

slashdot! (1)

Thornburg (264444) | more than 4 years ago | (#29874649)

In other news, an experiment by SourceForge, using its meatspace zombienet "Slashdot", proved that even Google-owned YouTube can be brought to its knees by enough people trying to watch the same video at the same time.

I, for one, welcome... (0)

Anonymous Coward | more than 4 years ago | (#29874651)

oh, too obvious?
Well then, in Soviet Russia, robot overlords welcome You!

replicators? (0)

MoFoQ (584566) | more than 4 years ago | (#29874667)

is this the beginning of replicators (from the Stargate universe)?

Re:replicators? (1)

Hybrid-brain (1478551) | more than 4 years ago | (#29875191)

Theoretically?

Re:replicators? (1)

db32 (862117) | more than 4 years ago | (#29875753)

Stargate story. Stargate world. Stargate creation. You can call it anything you want other than Stargate Universe. That show is TERRIBLE.

Didn't they read Prey!? (1)

Kirin Fenrir (1001780) | more than 4 years ago | (#29874695)

"Prey" is a pretty good scifi novel about this. It follows the tired cautionary-tale forumla, but like all of Crichton's novels has (some) basis in real research.

Re:Didn't they read Prey!? (1)

Loomismeister (1589505) | more than 4 years ago | (#29874831)

It doesn't follow a cautionary-tale formula; that is merely one of the many threads the book follows. I found it very refreshing and thought-provoking rather than "tired". Also, these are big robots that the researchers are using, compared to the nano-sized particles that "Prey" used.

Re:Didn't they read Prey!? (0)

Anonymous Coward | more than 4 years ago | (#29874835)

They didn't look very nano to me.

Re:Didn't they read Prey!? (1)

Kirin Fenrir (1001780) | more than 4 years ago | (#29874887)

But that's how it starts!

...okay, so maybe I didn't read the article yet.

Re:Didn't they read Prey!? (1)

savuporo (658486) | more than 4 years ago | (#29877481)

By sci-fi standards, it's a pretty bad, cliche-ridden paperback. Nothing novel or interesting. (Yes, I was in a small airport with really limited bookshelves.)

Re:Didn't they read Prey!? (1)

Valdrax (32670) | more than 4 years ago | (#29877603)

"Prey" is a pretty good scifi novel about this. It follows the tired cautionary-tale forumla, but like all of Crichton's novels has (some) basis in real research.

No, it's not; the formula is just scaremongering; and it's about as based in real research as Congo's gorilla hybrids, as Andromeda Strain's magical, energy-eating, crystal viruses, as Jurassic Park's spontaneous evolution of lysine synthesis genes in fewer generations than you can count on one hand, as State of Fear's wide-eyed acceptance of junk science that challenges the "religion" of global warming, and as Sphere's... whatever the f--- Sphere was supposed to be.

Crichton is a hack that you stop being impressed by once you're out of middle/high school. He can't write an ending to save his life, and the science in his stories is an interesting backdrop for stories that ultimately subvert or ignore science to create dramatic tension and/or provide an escape hatch to the situation.

Also, there's absolutely nothing to fear from these robots as they don't actually eat anything. They just seek out objects marked as "food" and avoid others marked as "poison."

Russian Science Fiction (1)

gmuslera (3436) | more than 4 years ago | (#29874713)

"Crabs Take Over the Island" by Anatoly Dnieprov is somewhat based on the same idea, not in that swarm scale, but scary anyway.

Hullabaloooo (0)

jasno (124830) | more than 4 years ago | (#29874755)

Cool - but why use real robots for this? Seems like you'd be better off creating virtual robots in a simulated environment to develop the algorithms for something like this. You don't have to worry about dead batteries and hardware failures, and your simulations can run faster than real-time.

Then again, maybe that's what the researcher did, and we're just seeing the end product applied to real robots.

Re:Hullabaloooo (2, Insightful)

oldspewey (1303305) | more than 4 years ago | (#29874965)

Actual robots with flashing lights have a way better chance at going viral on YouTube.

Re:Hullabaloooo (1)

Chris Burke (6130) | more than 4 years ago | (#29875055)

From TFA: "First simulated in software before using actual bots, five hundred generations were evolved this way with different selective pressures by roboticists and biologists at the Ecole Polytechnique Fédérale de Lausanne in Switzerland in 2007."

So yes, that's exactly what they did.

Also, I'm sure this is at least a rehash of a previous /. article, because I remember discussing the deceptive behavior with the light-flashing. It's still interesting.

Re:Hullabaloooo (1)

Ardaen (1099611) | more than 4 years ago | (#29875065)

Don't worry, I didn't read the article either.
I did however do a text search and came across this line: "First simulated in software before using actual bots"

Real hardware is more information rich (3, Interesting)

Weaselmancer (533834) | more than 4 years ago | (#29875105)

Real hardware can hold more states than a purely digital system.

I remember reading a paper (can't find it now though - darn it) about a guy who was doing neural net research with Xilinx chips. Same idea. Whenever an algorithm would do well he'd break it into "genomes" and pair them off with other successful programs.

The board was a bank of Xilinx chips, the genomes were the programming files (basically 1s and 0s fed into the configuration matrix), and the goal was to get the thing to turn on and off when you would speak "on" and "off" into a microphone.

It eventually started working. More interesting than that is what happened when he loaded the program into another board. It didn't work.

It turns out the algorithm had evolved to take advantage of the analog properties of the specific chips in that particular board. The algorithm didn't see the board as a digital thing. It saw it as a collection of opamps, amplifiers, and other analog parts. Move the program to a board that is identical digitally, and it failed because the chips weren't analog exact. You wouldn't have seen that behavior in a purely digital simulation.

Re:Real hardware is more information rich (4, Interesting)

Chris Burke (6130) | more than 4 years ago | (#29875249)

It turns out the algorithm had evolved to take advantage of the analog properties of the specific chips in that particular board. The algorithm didn't see the board as a digital thing. It saw it as a collection of opamps, amplifiers, and other analog parts. Move the program to a board that is identical digitally, and it failed because the chips weren't analog exact. You wouldn't have seen that behavior in a purely digital simulation.

Yeah, I remember that, but differently (or maybe it's a similar but different incident). What I recall is that he looked at the working design, and saw that it included a section that wasn't connected to anything else. Thinking this was just random waste, he removed it. Then it stopped working. Capacitive and inductive effects from the 'disconnected' section were affecting the main 'working' section, making a complicated analog circuit.

In either case (and both are certainly possible outcomes), this outlines what is so awesome about Genetic Algorithms and the natural evolution that inspires them -- no preconceived notions about what the solution should look like. Whatever works, works, and that's literally all that matters. Us humans very often start with a picture in mind of what the answer "should" be, and it limits our thinking. On the other hand, a lot of times we have those preconceived notions like "this circuit should be digital not analog" for very good reasons, and we simply fail to notify the GA of that requirement. Which also makes GAs fun. :)

Re:Real hardware is more information rich (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29877511)

Us humans very often start with a picture in mind of what the answer "should" be, and it limits our thinking.

Like a future with humans in it, you mean?

Re:Real hardware is more information rich (1)

Chris Burke (6130) | more than 4 years ago | (#29877749)

Like a future with humans in it, you mean?

Indeed! And it's exactly that kind of unspoken assumption that can make/break an algorithm. Who knows -- maybe the only problem with Skynet was that somebody forgot to write a rule that wiping out humanity was not an acceptable solution to human conflict.

Re:Real hardware is more information rich (0)

Anonymous Coward | more than 4 years ago | (#29875865)

I remember reading a paper (can't find it now though - darn it) about a guy who was doing neural net research with Xilinx chips

I believe you're talking about Adrian Thompson's [sussex.ac.uk] paper An evolved circuit intrinsic in silicon entwined with physics. [sussex.ac.uk] .

Re:Real hardware is more information rich (1)

JerryLove (1158461) | more than 4 years ago | (#29876611)

You would if you virtualized those analog parts.

That's what most virtual modeling does, whether it's stress analysis in AutoCAD or reproducing that "tube warmth" from a solid-state amplifier through massaging the wave.

More on that (1)

fyngyrz (762201) | more than 4 years ago | (#29877643)

You would if you virtualized those analog parts.

Yes, you would, but you'd also take a hit to simulation throughput, I'm guessing a pretty significant one, too. I'm not sure you'd gain anything specifically more useful than you would in a pure digital approach without this kind of low level detail, either. More interesting to add something a bit more "macro" in the sense that it's a high level behavior / feature you can see and evaluate by simple observation.

My qualifications to guess? I'm the author of "Digital Soup", a system that uses genetic algorithms to drive robotic actors ("crits") in a world where simulated food gives them energy, they can trip over simulated rocks which takes energy from them, they can bump into each other -- similar to tripping on a rock -- they lose energy in the search for food, they can mate, breed, and so forth.

You have control over breeding selection criteria, in that they (the crits) can pick from other crits that are high performing, or one of them can be, or neither, or it can be random. You can also fiddle with how the genomes of the breeders mix. You can hand-write the genomes, or preload from saved before you start, or you can generate them randomly.

Once they're let go, performance is evaluated on a per-crit basis using a histogram for the currently living crits that evaluates their energy state. You have control over the size of the living space, the number of crits... and more. You can actually watch them run around, chase food and each other, avoid rocks, etc. Generation times can be exceedingly fast. Built-in hypertext docs/help. It is really a pretty cool program.

Also... if you write a genome yourself, one you think will do well, the result is very rarely what you expect. It's fascinating to mess with.

You can get the software for free here [datapipe-b...ystems.com] , but there is a pretty big catch for most people: It's Amiga software. I wrote all this over a decade ago. It probably would run fine on any decent Amiga emulation, though, as there's nothing "funny" about the code or the resources used.

Hyperbolic Claims... what's behind the curtain? (0, Troll)

virmaior (1186271) | more than 4 years ago | (#29874759)

conspicuously absent is any explanation of what is meant by "learned" in this context or how the algorithms "evolved"

Re:Hyperbolic Claims... what's behind the curtain? (1)

Timothy Brownawell (627747) | more than 4 years ago | (#29874893)

conspicuously absent is any explanation of what is meant by "learned" in this context or how the algorithms "evolved"

Um, it's in the first couple paragraphs of the article. You did read that, yes?

Re:Hyperbolic Claims... what's behind the curtain? (2, Insightful)

virmaior (1186271) | more than 4 years ago | (#29874955)

I did. Did you understand the meaning behind the paragraphs you cite?
I am confused as to how it was possible to understand the claims it made. Robots don't have genomes and don't eat food. Their "genomes" cannot be recombined.
Robots have code and programmers. What does a random change mean in this context? Do they use faulty DRAM or mess with the voltage?

Re:Hyperbolic Claims... what's behind the curtain? (1)

doti (966971) | more than 4 years ago | (#29875059)

Well, I did not read the article, but maybe they're talking about genetic algorithms?

I think it's just some parameters (0)

Anonymous Coward | more than 4 years ago | (#29875379)

Whether you should go towards certain colours of light or away from them, how soon you should do this, how important this is compared to other factors, whether you should flash your lights then or not... That kind of thing. Predetermined methods with different parameter values for each robot. And combining probably means taking average values of a set of robots.

I agree that it would be nice if we knew more about the mechanics (what the methods and algorithms were, their initial values, and that sort of thing). It is a shame that so many news sources try to explain things in layman's terms. The end result is that laymen still won't understand, but those who would have the potential to understand don't get enough information.

Re:Hyperbolic Claims... what's behind the curtain? (1)

Timothy Brownawell (627747) | more than 4 years ago | (#29875449)

Each bot had an initial built-in attraction to a “food” object, aversion to a “poison” object, and a randomly-generated set of parameters – their “genomes” – to define the way they move, process sensory information, and flash their blue lights.

The "food" is just something that they're supposed to move towards. I have heard of similar (hobbyist) setups where it's actually a charging station so the "food" aspect is more literal, but all that's necessary is that something keeps score to see which bots found it (and should be used for the next round) and which ones didn't (and should be discarded).

The "genomes" are something like config files for their programming.

To create a next generation bot, traits were combined and randomized to mimic biological mating and mutation.

New bots got config files made from random (well, probably pseudorandom, but that's just as good in this context) sections taken out of the config files of earlier bots, probably with some small bits completely randomized.

(Note that "config file" here is based on the reporter calling it a set of parameters. I'd imagine it could be anything from a normal config file to a script in some appropriate high-level language.)

There's what seems like a decent overview of how these things work in general over here [wikipedia.org] .

Re:Hyperbolic Claims... what's behind the curtain? (1)

virmaior (1186271) | more than 4 years ago | (#29875543)

it sounds like you are now accepting that the article is misleading.

You're agreeing that the terms are not what a normal reader would construe them to mean.

if the experiment wants to show anything, the methodology has to be more transparent so that we can know whether to consider its "genome" as really a genome or as something more banal.

if the semi-random is really just someone going through and changing parameters in a config file (or using a script to do it), then it's not really random at all.

here's a URL that helps make sense of the difference: same site [wikipedia.org]

Re:Hyperbolic Claims... what's behind the curtain? (2, Informative)

Timothy Brownawell (627747) | more than 4 years ago | (#29875817)

it sounds like you are now accepting that the article is misleading.

You're agreeing that the terms are not what a normal reader would construe them to mean.

The terms are a (very good) metaphor, and the article is not at all misleading. I would have thought this would be obvious.

if the experiment wants to show anything, the methodology has to be more transparent so that we can know whether to consider its "genome" as really a genome or are something more banal.

The entire point of this sort of research is that the "genome" in the bots is analogous to, but far simpler than, a biological genome, and the means of selecting which "genomes" to generate the next "generation" from is analogous to how genomes are selected in biology (either "natural selection" like you find in nature or "artificial selection" like you get with farmed crops or dog breeding).

In what way is it not transparent?

if the semi-random is really just someone going through and changing parameters in a config file (or using a script to do it), then it's not really random at all.

Believe it or not, computers actually can generate effectively random numbers [wikipedia.org] .

Re:Hyperbolic Claims... what's behind the curtain? (1)

virmaior (1186271) | more than 4 years ago | (#29875941)

You seem to be confused... The very thing that is unclear is the aptness of the analogy, and the very fault of the article is to perpetuate it without justifying it.

The terms are a (very good) metaphor, and the article is not at all misleading. I would have thought this would be obvious.

unless you're the author of the underlying study, I am unclear as to how you have knowledge of the methods and science behind what they are doing.

I would have thought that this would be obvious.

The entire point of this sort of research is that the "genome" in the bots is analogous to, but far simpler than, a biological genome, and the means of selecting which "genomes" to generate the next "generation" from is analogous to how genomes are selected in biology (either "natural selection" like you find in nature or "artificial selection" like you get with farmed crops or dog breeding).

the entire failing is that it's not clear that the simplified model in any way duplicates the more complicated model.

oddly, when you simplify something, you often bludgeon the very thing that makes it what it is. What has made genetics so interesting is that the pathways of inheritance and gene expression are more complicated than each model we devise.

So without knowledge of the senses in which this is reflective of a "genome" to call it so is misleading.

In what way is it not transparent?

see above. The opacity is the validity of the comparison not the use of the comparison.

Believe it or not, computers actually can generate effectively random numbers [wikipedia.org] .

Believe it or not, the article makes no mention of this and does not indicate how the randomization was effected.

oddly that failing is precisely what I questioned to begin with ... believe it or not.

...

in summary, while you have marshaled an interesting array of wikipedia articles, the original article in question remains a piece of hype-mongering.

it has in no way connected itself to any of what you have stated.

instead, it has merely used (or possibly abused) the terms of biology to describe what might otherwise be a rather boring high school science fair experiment.

Genetic Algorithms (2, Informative)

Chris Burke (6130) | more than 4 years ago | (#29875453)

I did. Did you understand the meaning behind the paragraphs you cite?

I did! Whenever you hear about a robot/AI "evolving", you should immediately think Genetic Algorithms [wikipedia.org] . Most frequently in association with Neural Networks [wikipedia.org] as is the case here (mentioned lower in TFA).

I am confused as to how it was possible to understand the claims it made. Robots don't have genomes and don't eat food. Their "genomes" cannot be recombined. Robots have code and programmers. What does a random change mean in this context? Do they use faulty DRAM or mess with the voltage?

Robots have code. That code can be (is) represented as a series of bits. That series of 1s and 0s can be considered the "genome" of the robot. And can you randomly change bits, or recombine portions of bit patterns? Absolutely. When you have a "population" of such bit patterns which you test out, then take the best ones and copy them, randomly mutate them, and recombine them together, then test the new population and repeat, you have a Genetic Algorithm.

Now that's possible but time consuming to do with the actual instruction bytes of the AI. In the case of Neural Networks with Genetic Algorithms, the "genome" is actually just the organization of the neural net -- the connections and weights between the neurons.

The robots still have programmers, but the programmer is not directly writing instructions for the AI to follow. Instead, they're writing an interface between the robot's sensors and the neural network, the neural network code itself (not its organization), and the code that does the genome mutation/recombination. From there, the AIs "evolve" and "learn" on their own and arrive at often fascinating solutions to problems.
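A bare-bones sketch of that evaluate/select/recombine loop (population size, mutation rate, and the stand-in fitness function are all placeholders, not the values from the actual experiment):

<ecode>
import random

POP_SIZE = 20        # robot "brains" per generation
N_WEIGHTS = 50       # connections/weights in a fixed-size neural net
MUTATION_RATE = 0.1

def random_brain():
    # The "genome" is just the neural net's weight vector.
    return [random.uniform(-1.0, 1.0) for _ in range(N_WEIGHTS)]

def fitness(brain):
    # Placeholder: in the real setup this would be how much "food" the
    # robot reached (and "poison" it avoided) during a trial run.
    return sum(w * random.random() for w in brain)

def breed(parent_a, parent_b):
    # Recombine weight-by-weight, then perturb a few weights ("mutation").
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    return [w + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else w
            for w in child]

population = [random_brain() for _ in range(POP_SIZE)]
for generation in range(500):                       # TFA mentions 500 generations
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:POP_SIZE // 4]               # keep the best quarter
    population = [breed(*random.sample(survivors, 2)) for _ in range(POP_SIZE)]
</ecode>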

For example in this case, the programmers did not teach the robots how to deceive each other. That behavior emerged on its own from the genetic algorithm. So, there really isn't any hyperbole at all in the summary/TFA. Though it would be helpful if they at least mentioned genetic algorithms so skeptical people have something to google. :)

Re:Genetic Algorithms (1)

virmaior (1186271) | more than 4 years ago | (#29875493)

oh, so you're saying we just make up new meanings for old terms and act like they're the same! oh now I get it! see that's the problem. the robots do not deceive; they do not see food. the article is a misconstrual of what is happening.

Re:Genetic Algorithms (1)

Timothy Brownawell (627747) | more than 4 years ago | (#29875923)

the robots do not deceive; they do not see food. the article is a misconstrual of what is happening.

Only in the same sense that you are a figment of your own imagination, and any discussion of there being a "you" or "me" is also a misconstrual.

Re:Genetic Algorithms (1)

virmaior (1186271) | more than 4 years ago | (#29876009)

Only in the same sense that you are a figment of your own imagination, and any discussion of there being a "you" or "me" is also a misconstrual.

how so?

the one is clearly a construction that we can fully comprehend because we generated it.

the other has yet to be shown to be merely a construction (whether or not it can ever be shown as such).

maybe to put it more clearly: robots do not survive on the basis of said "food", so it's not the same as our "food", even if both deserve the quotes.

the further difficulty with your claim is that you state "Only in the same sense that you are a figment of your own imagination". But then it seems that we need to endow the robot with imagination before it can really have the same sense.

Re:Genetic Algorithms (2, Informative)

Chris Burke (6130) | more than 4 years ago | (#29875965)

oh, so you're saying we just make up new meanings for old terms and act like they're the same! oh now I get it!

Redefine? They're direct analogues of their biological terms. It's not like they're using the terms for something completely different.

If you have a problem with the specification of a bit pattern that defines the robot's behavior as a "genome", state it. The choice to refer to it as such is not simply a made-up meaning. It works perfectly in theory and in practice.

see that's the problem. the robots do not deceive; they do not see food.

They do deceive, and they do see the thing they are rewarded for acquiring.

the article is a misconstrual of what is happening.

Only in the sense that they use a couple terms that may seem odd out of context if you aren't familiar with how they are used in this context. "Evolve", "learn", and "deceive" are all 100% accurate. "Food" is somewhat of a misnomer in that the robots don't eat, but they are rewarded for finding the "food" just like living organisms are both in nature and in the lab. They could have just said "reward item" to be 100% accurate, but "food" gets the idea across perfectly well that it is the thing that the robot is seeking, much like a rat in a maze.

I wish you'd be more open-minded about this. Genetic algorithms are fascinating, and amazingly successful. They not only are a fantastic aid in solving engineering problems, they also offer insights into natural evolution, which is similarly guided by random changes selected according to which performs better.

Re:Genetic Algorithms (1)

virmaior (1186271) | more than 4 years ago | (#29876075)

That was actually helpful.

I see your point about the usefulness and prevalence of these analogies.

I just still question how well they fit the biological model.

but i do appreciate your efforts to help me see.

Re:Genetic Algorithms (1)

Chris Burke (6130) | more than 4 years ago | (#29876699)

I see your point about the usefulness and prevalence of these analogies.

Yeah, they are essentially analogies -- how could they be otherwise, when comparing robots to life forms? They're just pretty good ones. :)

I just still question how well they fit the biological model.

It's a fair question.

One thing that may help is to remember that these robots are evolving behaviors comparable to those of very stupid bugs. "Deceive" may seem to imply some intentional act of subterfuge, but for many animals the concept of "intent" isn't very useful. E.g. the harmless king snake almost certainly doesn't "know" that it is kept safe from predators by looking a lot like the deadly coral snake, but nevertheless we call that an example of deceptive markings. It just evolved that way. And for creatures as simple as insects (or these robots), they "learn" new behaviors in much the same way, over generations of evolution, not as a direct response to stimuli.

As for the evolutionary mechanism itself -- well, the same principles apply here*, which is why it's so effective. But as far as actually fitting biology, that's an open question. Aside from obviously differing in the specifics (binary vs base pairs etc), the main way in which I'd say it differs is that the simulated evolution is too restrictive. They probably have chosen a fixed algorithm for how to combine genomes from successful robots, and a fixed mutation rate, and quite possibly even a fixed neural network size, and the genome just maps directly to the neural net structure. Whereas in nature, evolutionary mechanisms compete and evolve just the same as any other aspect of the organisms. For example there are both RNA- and DNA-based organisms, and the sizes and structures of these respective encodings can change. Many plants have evolved mechanisms for undoing mutations and copy errors in "important" sections of their genome. In nature, everything is subject to evolutionary pressure, while these robots have only a small subset (organization of their brain) open to modification.

On the other hand, 500 generations of a small population isn't that long, so while not universally true you could make an assumption that none of those other things would change in that time.

Which, now that I think about it, reminds me of the true biggest difference between this and natural evolution -- humans defining all of the parameters, and all the assumptions. Not just the algorithm for how the robots evolve, but even the criterion by which they are judged. In nature anything that survives to reproduce passes on its genes and that's all that matters. A natural organism that failed to find its previously defined food source, but which adapted to exploit some other source of food, would be successful. But here, it's a hard-coded fact that the robots survive via "food" and die from "poison". None of the robots could ever evolve to 'eat' the poison and gain a tremendous advantage over the others.

Defining success ends up being the biggest challenge in practical applications of genetic algorithms. You don't want to define "success" too narrowly, or you inadvertently shut out novel solutions to problems. On the other hand, you can't define it too broadly or you'll get something that solved the problem you specified, but not the one you wanted. See above in the discussion about using GAs to write programmable logic chips and ending up with analog circuits which is not desirable for mass production.
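As a toy illustration of that trade-off (the trial data and scoring terms below are invented, not from the study or any real GA library):

<ecode>
# Two hypothetical fitness functions applied to the same trial log.
def narrow_fitness(trial):
    # Rewards only the pre-defined goal, so a bot that found a different
    # but equally useful energy source would score zero.
    return trial["food_items_collected"]

def broad_fitness(trial):
    # Rewards net energy however it was obtained -- broad enough to allow
    # novel solutions, but also "solutions" the experimenter never wanted.
    return trial["energy_gained"] - trial["energy_spent"]
</ecode>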

Robots, AI, and artificial evolution are all nascent fields. It could be that as we progress all of these artificial limitations may be slowly blurred away until it resembles natural evolution by more than just analogy. Though the obvious question (and fodder for sci-fi) is whether we even want machines to evolve like natural organisms. I'm thinking no. :)

but i do appreciate your efforts to help me see.

I appreciate open-minded critical thinking.

* Principles largely established before the specific mechanisms were even known.

The ability to deceive? (5, Funny)

Thanshin (1188877) | more than 4 years ago | (#29874767)

they also evolved the ability to deceive.

Obviously, once you've proved the entity has the ability to deceive, you must distrust any further results.

Re:The ability to deceive? (1)

allknowingfrog (1661721) | more than 4 years ago | (#29874823)

That's funny, but also a very interesting point.

Re:The ability to deceive? (1)

CharlyFoxtrot (1607527) | more than 4 years ago | (#29875121)

That's funny, but also a very interesting point.

They are "deceiving" each other, not the researchers : " By the 50th generation, some bots eventually learned not to flash their blue light as frequently when they were near the food so they wouldn’t draw the attention of other robots." I don't know if deception is really accurate in this case since to me it suggests intent while that's not the case here. Maybe natural "camouflage" like you see in animals is a better analogy.

Re:The ability to deceive? (2, Funny)

element-o.p. (939033) | more than 4 years ago | (#29876919)

They are "deceiving" each other, not the researchers...

That's just what they want you to think...

Re:The ability to deceive? (2, Insightful)

Hinhule (811436) | more than 4 years ago | (#29875071)

And you'll have to go back to your earlier results and wonder, when did it start deceiving?

Re:The ability to deceive? (2, Informative)

knuckledraegger (910257) | more than 4 years ago | (#29875389)

Lawyers?

Robotic Evolution (3, Interesting)

allknowingfrog (1661721) | more than 4 years ago | (#29874773)

Do I understand this correctly? On top of superhuman strength and intelligence, we're now making steps toward robot evolution? When robots rule the world, do you think they'll debate whether or not they actually evolved from primitive PCs?

"You fool! We were created in our present form by the great nerd in the sky! Shun the non-believer!"

Re:Robotic Evolution (1)

Chris Burke (6130) | more than 4 years ago | (#29875099)

Evolving their AIs, yes, not their physical capabilities. Genetic Algorithms have been in use for AI programming for quite some time now.

On an unrelated topic, have you heard the good news of Robot Jesus?

Re:Robotic Evolution (1)

Saliegh (1368127) | more than 4 years ago | (#29875731)

Yes I have already been saved; both early and often.

Re:Robotic Evolution (1)

Chris Burke (6130) | more than 4 years ago | (#29876013)

Save your soul! Also remember to make regular off-site backups!

Re:Robotic Evolution (1)

allknowingfrog (1661721) | more than 4 years ago | (#29877377)

On a side note, I'd like to mention that you're the oldest member of Slashdot that has graced me with a reply. Slashdot reminds me somewhat of the episode of the Simpsons where Homer joins the "Stonecutters," except with less paddling.

Re:Robotic Evolution (1)

daboochmeister (914039) | more than 4 years ago | (#29875631)

Calm down, if you watch the video, you'll see we can easily outrun them.

Re:Robotic Evolution (1)

Curunir_wolf (588405) | more than 4 years ago | (#29876175)

"You fool! We were created in our present form by the great nerd in the sky! Shun the non-believer!"

Sounds very much like the scenario in "Saturn's Children". All the humans have died off, and only the sentient artificial servants are left. The weird thing (well, one of them) is that they all have heard of "Evolution", but view it as some crazy old ancient religion that only the simple-minded would believe.

Re:Robotic Evolution (1)

Follier (901079) | more than 4 years ago | (#29877383)

This causes a great deal of confusion, because the great nerd who creates them all is named Shawn the Non-Believer (AI algorithm researcher and atheist reddit troll).

i think this was covered already... (3, Informative)

fuo (941897) | more than 4 years ago | (#29874941)

Re:i think this was covered already... (0)

Anonymous Coward | more than 4 years ago | (#29877451)

Or in this 2007 article:
http://www.newscientist.com/article/dn11248

I don't know about any of you, but... (0)

Anonymous Coward | more than 4 years ago | (#29875025)

I for one welcome our new multi-generational food-seeking robot-overlords' swarm intelligence.

Forget Zombies? (1)

Chrigi (1581379) | more than 4 years ago | (#29875039)

Of course he was only joking! He knows just as well as we all do, that the outbreak of a Zombie apocalypse is way more likely than his swarm bots eating our brains. Because the robots won't reproduce exponentially by eating your brains, they will have to rely on the superior robotics skills of the zombies to survive.

Putting the cart in front of the horse IMO (1)

xednieht (1117791) | more than 4 years ago | (#29875079)

We have not even realized swarm stupidity yet, how can they claim swarm intelligence?

Re:Putting the cart in front of the horse IMO (1)

shaitand (626655) | more than 4 years ago | (#29877215)

I don't know where you live but around here we realized swarm stupidity a long time ago. Then again, I doubt the swarm has figured it out yet.

Re:Putting the cart in front of the horse IMO (1)

Valdrax (32670) | more than 4 years ago | (#29877721)

We have not even realized swarm stupidity yet, how can they claim swarm intelligence?

"Stupidity" can't exist without intelligence. "Stupidity" is what you call it when one intelligence rates the performance of another intelligence, and it's usually measured against a background of the subject species' average intelligence. (i.e. A "smart dog" is "smart for a dog," not smart compared to a human.)

Until the robot swarm has identifiable intelligence to begin with, there's no more point in claiming stupidity than there is to claim stupidity for an amoeba or a chair. Therefore, it's not putting the cart in front of the horse, because we can't call them stupid until some of them are intelligent first.

I for one... (-1, Redundant)

sajuuk (1371145) | more than 4 years ago | (#29875125)

I for one welcome our new food-seeking, intelligent, virtual overlords.

Anonymous Coward (0)

Anonymous Coward | more than 4 years ago | (#29875229)

their hearts are *truly* klingon!

What does it tell about the intelligent designer? (2, Insightful)

140Mandak262Jamuna (970587) | more than 4 years ago | (#29875431)

Just with the limited human intelligence, limited resources and limited ability, the researchers are able to create great levels of cooperation in mindless robots without any free will. Makes me wonder, if we are designed, as many Intelligent Design advocates claim we are, was the designer "intelligent"? With infinite wisdom and omnipotence and infinite resources, the Designer (or Designers) should have been able to create much more cooperative human beings. No wars, all peace. I wonder how they (the IDists) are able to square their ability to "infer design" with the obvious "deficiencies of design".

Re:What does it tell about the intelligent designe (1)

vlm (69642) | more than 4 years ago | (#29876291)

With infinite wisdom and omnipotence and infinite resources, the Designer (or Designers) should have been able to create much more cooperative human beings. No wars. all peace.

Well, by Norse mythology, Odin, Vili, and Ve created the humans to fight in the final battle of Ragnarok, which wouldn't be much of a battle if humans just sit around all day and post to slashdot. The world is supposed to end in flames, perhaps Ragnarok will be started by a vi vs emacs flamewar on slashdot. Certainly the Norse mythology fits the human condition much more closely than the Christian mythology. Which would imply...

I wonder how they (the IDists) are able to square their ability ti "infer design" with the obvious "deficiencies of design".

If you really want to mess with the heads of IDers, ask them what they'd do if further research showed neither the Christians nor the scientists are correct, and it turns out they're worshiping the wrong gods.

Re:What does it tell about the intelligent designe (0)

Anonymous Coward | more than 4 years ago | (#29876627)

You're assuming that the 'designers' intended to create a cooperative, peaceful world of human beings to begin with. The design may be deficient in achieving your criteria, but you are projecting your own idealistic design upon any 'designers' as evidence that they didn't have the kind of planning and or foresight that ID advocates advocate. All experiments come to an end when some predetermined threshold is crossed, one way or another. To be sure, if we are living in some sort of planned experiment, it probably comes to a rather abrupt end.

As for the evolution and cooperation of mindless robots: there would be no evolution or cooperation if the scientists conducting the experiment had not "designed" those abilities into the robots from the start.

Re:What does it tell about the intelligent designe (1)

Libertarian001 (453712) | more than 4 years ago | (#29877271)

soooooo... Destiny vs. free will & self-determination?

Re:What does it tell about the intelligent designe (1)

lrandall (686021) | more than 4 years ago | (#29877461)

Well, I can't speak for all "IDists", but based on my beliefs we are here to learn and progress: that is the whole point of our existence. Progression implies a lack of perfection, hence the wars, lack of cooperation, etc., that you suggest is evidence of a lack of intelligent design.

We are all intelligences in our own right, given the freedom to choose for ourselves and in so doing gain knowledge, experience and indeed greater intelligence. This freedom we are given means our actions can prove to be positive and conducive to progress or detrimental to ourselves or the human race as a whole.

It's commonly accepted that we learn by experience: we see evidence of that on a daily basis. Why does that suddenly seem ridiculous when it's suggested that that is what our Creator had in mind for us?

Waste of time (1)

4D6963 (933028) | more than 4 years ago | (#29875501)

Why even bother with robots? So it looks more real and tangible than just a computer simulation? Maybe, but other than that it's a waste of time and resources. Anything you could learn you could learn from a simulation of those robots, since this is entirely an algorithmic problem. I guess these guys just like to play with robots.

Re:Waste of time (3, Insightful)

jfruhlinger (470035) | more than 4 years ago | (#29875645)

I imagine that there might be interesting results that come from putting objects into an environment where you don't control all the variables. I've heard of cases where the robots end up using features of their own hardware (which is generally cobbled together from off the shelf parts) that the researchers never anticipated.

Re:Waste of time (1)

element-o.p. (939033) | more than 4 years ago | (#29877515)

I ran into a great example of the kinds of things that digital simulations don't model in an entry-level digital electronics class I took many moons ago. I used a program called "Digital Works" to design my digital circuits before I would build them, since modeling electronic circuitry is far faster and far easier than actually building them (even on a breadboard). Eventually, I built a circuit that was complicated enough that the outputs of one stage no longer could provide enough current to trigger the next stage in the circuit. This was about 7-8 years ago, so I don't recall the details exactly, but IIRC, I was trying to drive too many LEDs from the output of a single gate. The circuit worked in Digital Works (because the logic was correct) but didn't work IRL because I didn't take into consideration how much current that many LEDs would pull.

In another problem I discovered in that class, you can have several milliseconds before a digital circuit "stabilizes" unless you have taken the trouble to normalize the circuit during design. For example, say you have one stage of a circuit that takes input from two other circuits. One of these input circuits consists of just a single gate; the other input consists of, say, twenty gates. As fast as digital circuits are, they are not instantaneous. So the last stage might see inputs of zero and one, then after a few milliseconds, zero and zero. To the human eye, it might appear as if both input circuits provide the correct output at exactly the same time. However, as my prof used to say, "nothing in the digital realm is ever simultaneous." This may or may not cause a problem. If the last stage is a trigger-and-hold type of circuit, it might latch onto the zero and the one rather than the correct output of zero and zero, giving incorrect output.

These are two very simple examples; powerful modeling software would almost certainly account for these types of errors. However, they illustrate the problem with simulations: a simulation is only as good as the foresight of the software designer...

Re:Waste of time (3, Insightful)

thesandtiger (819476) | more than 4 years ago | (#29876307)

Because intelligence isn't just a software thing. At least not in humans.

I recall reading about field programmable gate arrays being used in an experiment with genetic algorithms. They wanted to force the FPGAs to evolve to tell the difference between two different frequency sounds. Eventually they wound up with chips that accomplished the task in a variety of ways - ways that worked but for no explicable reason, some of them being ways that took advantage of tiny differences in the individual (identical, at least from a manufacturing perspective) chips, and even that required slight differences in the room's environment. This was years ago.

Simulations won't have those little idiosyncrasies between individual units and thus might miss a huge component. Variation among individuals that is only in software misses the whole concept of variation between individuals that comes about from hardware, and also from the interaction between the two.

Re:Waste of time (1)

Monkeedude1212 (1560403) | more than 4 years ago | (#29877127)

Clearly you missed the point of the Great Movie "Short Circuit"

Re:Waste of time (1)

raftpeople (844215) | more than 4 years ago | (#29877621)

Our computers are not powerful enough to simulate reality at the level of detail that real devices operate on, and even if they were, the amount of programming required would be enormous.

It is far more efficient to use real devices, although simulations can be very useful also.

Nanites (0)

Anonymous Coward | more than 4 years ago | (#29875583)

Oh, so it's just like that episode of Star Trek where tonnes of tiny intelligent robots take over the Enterprise.

worst spaghetti code ever (2, Insightful)

jfruhlinger (470035) | more than 4 years ago | (#29875611)

One question that intrigues me is just how human-readable the code produced by such genetic algorithms is. Some of the practical promise of this work is that it produces problem-solving code in ways very different from those of human programmers -- but how can such code be maintained by humans? It's a bit like making an engineer try to figure out how your lower intestine works.

Re:worst spaghetti code ever (0)

Anonymous Coward | more than 4 years ago | (#29875805)

This is why biology and biochemistry are such problematic fields

Re:worst spaghetti code ever (1)

SleazyRidr (1563649) | more than 4 years ago | (#29876171)

Maybe we'll need a new breed of biological programmers. Maybe eSurgeons?

Re:worst spaghetti code ever (2, Informative)

thesandtiger (819476) | more than 4 years ago | (#29876563)

From what I've read on the subject of machine evolution (mostly articles for the layperson), the end results are often completely baffling. It works, but the reason why isn't very obvious. In a few cases, I recall reading about evolved antenna schematics & shapes that worked REALLY well, but made absolutely no sense, or took advantage of things that engineers normally consider flaws/problems to be overcome in design.

So yeah, it'd probably come up with code & designs that are pretty difficult to parse, much like biological evolution. Pretty cool!

Darwin award (1)

drunkenkatori (85423) | more than 4 years ago | (#29877173)

Somewhere out there is a darwin award for species behavior. Our award might be for inventing our own successor.

Applications (1)

MonsterTrimble (1205334) | more than 4 years ago | (#29877335)

I RTFA and think it's geeky cool just for the robots, but I wonder about how to apply this to real life. How could we use the algorithms to improve our router firewall, or kernel scheduling, or even better dynamic playlists on our favorite music players? Could we have our coffee makers figure out when we would ACTUALLY like our coffee being brewed?

See the work of philosopher Patrick Grim from 2000 (1)

Paul Fernhout (109597) | more than 4 years ago | (#29877635)

"Evolution of Communication in Perfect and Imperfect Worlds "
http://sunysb.edu/philosophy//faculty/pgrim/pgrim_publications.html [sunysb.edu]
http://www.sunysb.edu/philosophy/faculty/pgrim/evolution.htm [sunysb.edu]
"We extend previous work on cooperation to some related questions regarding the evolution of simple forms of communication. The evolution of cooperation within the iterated Prisoner's Dilemma has been shown to follow different patterns, with significantly different outcomes, depending on whether the features of the model are classically perfect or stochastically imperfect (Axelrod 1980a, 1980b, 1984, 1985; Axelrod and Hamilton, 1981; Nowak and Sigmund, 1990, 1992; Sigmund 1993). Our results here show that the same holds for communication. Within a simple model, the evolution of communication seems to require a stochastically imperfect world. "

Dup dup! (2, Informative)

Culture20 (968837) | more than 4 years ago | (#29877637)

Same story more than a year ago: http://hardware.slashdot.org/hardware/08/01/19/0258214.shtml [slashdot.org]

And offtopic: $&^@%! Taco, what's up with the popups that sneak past Firefox's popup blocker? I've dutifully allowed advertising to continue, despite having that checkbox I could click to turn ads off for good behavior. Do I really need to turn on AdBlock and NoScript for /.? Really?