Robotics

Swiss Experimenter Breeds Swarm Intelligence

destinyland writes "Researchers simulated evolution with multiple generations of food-seeking robots in a new study of artificial swarm intelligence. 'Under some conditions, sophisticated communication evolved,' says one researcher. And in a more recent study, the swarms of bots didn't just evolve cooperative strategies — they also evolved the ability to deceive. ('Forget zombies,' joked one commenter. 'This is the real threat.') 'The study of artificial swarm intelligence provides insight into the nature of intelligence in general, and offers an interesting perspective on the nature of Darwinian selection, competition, and cooperation.' And there's also some cool video of the bots in action."
  • by Kenja ( 541830 ) on Monday October 26, 2009 @01:42PM (#29874629)
    Of all the nations, I would never have thought it would be the Swiss who would start the robot apocalypse. I had Germany in my betting pool...
  • First XKCD [xkcd.com] points out the obvious weapons end of things, now this guy announces how the brains have already been developed.
    • by 56 ( 527333 )
      Just don't try to unplug one of these robots, they'll do whatever it takes to get that sweet sweet electricity...
    • What makes you think life as we know it isn't nano swarm intelligence gone terribly wrong?

      • by Tiger4 ( 840741 )

        What makes you think life as we know it isn't nano swarm intelligence gone terribly wrong?

        It has gone terribly wrong. But it isn't nano. One look at Rosie O'Donnell will tell you that. Intelligence is open to debate for similar reasons.

    • by Xest ( 935314 )

      I know you're joking, but this seems as relevant a place as any to make a few points about this: swarm intelligence is really about emergent properties of largely random systems.

      Ants, for example, will spread out randomly from the nest looking for food. When one finds it, it returns to the nest with a piece of it, leaving a pheromone trail behind. Other ants moving about randomly may cross this trail and follow it; they will reach the food too, and will also leave a pheromone trail.
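
      A minimal Python sketch of that pheromone-trail idea (the grid size, evaporation rate, and single food source below are all made up for illustration, not taken from any real ant model):

      # Minimal pheromone-trail foraging sketch; all parameters are illustrative.
      import random

      GRID = 20                 # square grid size
      NEST = (0, 0)             # nest location
      FOOD = (15, 12)           # single food source
      EVAPORATION = 0.95        # pheromone decay per time step
      DEPOSIT = 5.0             # pheromone laid by an ant carrying food

      pheromone = [[0.0] * GRID for _ in range(GRID)]

      def neighbors(x, y):
          """Four-connected neighbors clipped to the grid."""
          steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
          return [(x + dx, y + dy) for dx, dy in steps
                  if 0 <= x + dx < GRID and 0 <= y + dy < GRID]

      class Ant:
          def __init__(self):
              self.pos = NEST
              self.carrying = False

          def step(self):
              options = neighbors(*self.pos)
              if self.carrying:
                  # Head back toward the nest, leaving pheromone behind.
                  x, y = self.pos
                  pheromone[x][y] += DEPOSIT
                  self.pos = min(options,
                                 key=lambda p: abs(p[0] - NEST[0]) + abs(p[1] - NEST[1]))
                  if self.pos == NEST:
                      self.carrying = False
              else:
                  # Mostly random walk, biased toward cells with more pheromone.
                  weights = [1.0 + pheromone[x][y] for x, y in options]
                  self.pos = random.choices(options, weights=weights)[0]
                  if self.pos == FOOD:
                      self.carrying = True

      ants = [Ant() for _ in range(50)]
      for tick in range(2000):
          for ant in ants:
              ant.step()
          # Evaporate pheromone everywhere so stale trails fade out.
          for row in pheromone:
              for i in range(len(row)):
                  row[i] *= EVAPORATION

      No individual ant knows where the food is; the trail that persists is simply the one that keeps getting reinforced, which is the emergent part.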

  • by Keruo ( 771880 ) on Monday October 26, 2009 @01:42PM (#29874633)
    To counteract his theories about swarm intelligence, I sent the researcher a link to 4chan.
    • That'll never work on machines. Not unless we build them with genitals.
    • by snarfies ( 115214 ) on Monday October 26, 2009 @02:38PM (#29875369) Homepage

      Actually, not a bad way to poison a bot's intelligence. It's been done before, with hilarious results (well, if you like 4chan-style humor), with a chatbot called Bucket. It was designed to pick up the basics of the English language and conversation techniques from random internet users.

      Then 4chan found it.

      http://encyclopediadramatica.com/Bucket [encycloped...matica.com] has the full story, along with quotes and screenshots.

      • You really can't blame 4chan for this.

        If you go to the forums [jonanin.com], you will notice there are no CAPTCHA systems or even basic registration systems to allow proper bans to be imposed.

        Every thread is quickly found by a forum spam bot and filled to the maximum page length with spam links.

        This is a failure of basic security.

  • In other news, an experiment by SourceForge, using its meatspace zombienet "Slashdot", proved that even Google-owned YouTube can be brought to its knees by enough people trying to watch the same video at the same time.

  • is this the beginning of replicators (from the Stargate universe)?

    • by db32 ( 862117 )
      Stargate story. Stargate world. Stargate creation. You can call it anything you want other than Stargate Universe. That show is TERRIBLE.
      • by MoFoQ ( 584566 )

        ummm...there's a difference between "Stargate Universe" and "Stargate universe".
        One is the show (which has potential... but is not the same "Stargate" as the others that have come and gone)

        • by db32 ( 862117 )
          There is a difference between VC (Viet Cong) and VC (Venture Capital), but someone scarred by Vietnam is going to have an entirely different plan in mind when someone says "Let's go look for VC."

          As far as potential... That plot has holes you could fly a Goa'uld mothership through! Even ignoring the whole video game garbage, and that cheese like that was only ever part of intentionally cheesy episodes of previous series... there are more logical inconsistencies than I can count. At first glance it wasn't te
          • My wife was watching the show. I sat down for a couple of minutes, then promptly left when I saw one of the characters using a sharpened #2 pencil to write.
  • "Prey" is a pretty good scifi novel about this. It follows the tired cautionary-tale forumla, but like all of Crichton's novels has (some) basis in real research.
    • It doesn't follow a cautionary-tale formula; that is merely one of the many threads the book follows. I found it very refreshing and thought-provoking rather than "tired". Also, the researchers here are using big robots compared to the nano-sized particles in "Prey".
    • By sci-fi standards, it's a pretty bad, cliche-ridden paperback. Nothing novel or interesting. (Yes, I was in a small airport with really limited bookshelves.)
    • by Valdrax ( 32670 )

      "Prey" is a pretty good scifi novel about this. It follows the tired cautionary-tale forumla, but like all of Crichton's novels has (some) basis in real research.

      No, it's not; the formula is just scaremongering, and it's about as based in real research as Congo's gorilla hybrids, as Andromeda Strain's magical, energy-eating crystal viruses, as Jurassic Park's spontaneous evolution of lysine synthesis genes in fewer generations than you can count on one hand, as State of Fear's wide-eyed acceptance of junk science challenging the "religion" of global warming, and as Sphere's... whatever the f--- Sphere was supposed to be.

      Crichton is a hack that you stop being imp

  • "Crabs Take Over the Island" by Anatoly Dnieprov is somewhat based on the same idea, not in that swarm scale, but scary anyway.
  • Cool - but why use real robots for this? Seems like you'd be better off creating virtual robots in a simulated environment to develop the algorithms for something like this. You don't have to worry about dead batteries and hardware failures, and your simulations can run faster than real-time.

    Then again, maybe that's what the researcher did, and we're just seeing the end product applied to real robots.

    • Re: (Score:2, Insightful)

      by oldspewey ( 1303305 )
      Actual robots with flashing lights have a way better chance at going viral on YouTube.
    • From TFA: "First simulated in software before using actual bots, five hundred generations were evolved this way with different selective pressures by roboticists and biologists at the Ecole Polytechnique Fédérale de Lausanne in Switzerland in 2007."

      So yes, that's exactly what they did.

      Also, I'm sure this is at least a rehash of a previous /. article, because I remember discussing the deceptive behavior with the light-flashing. It's still interesting.

    • by Ardaen ( 1099611 )
      Don't worry, I didn't read the article either.
      I did however do a text search and came across this line: "First simulated in software before using actual bots"
    • by Weaselmancer ( 533834 ) on Monday October 26, 2009 @02:17PM (#29875105)

      Real hardware can hold more states than a purely digital system.

      I remember reading a paper (can't find it now though - darn it) about a guy who was doing neural net research with Xilinx chips. Same idea. Whenever an algorithm would do well he'd break it into "genomes" and pair them off with other successful programs.

      The board was a bank of Xilinx chips, the genomes were the programming files (basically 1s and 0s fed into the configuration matrix), and the goal was to get the thing to turn on and off when you would speak "on" and "off" into a microphone.

      It eventually started working. More interesting than that is what happened when he loaded the program into another board. It didn't work.

      It turns out the algorithm had evolved to take advantage of the analog properties of the specific chips in that particular board. The algorithm didn't see the board as a digital thing. It saw it as a collection of opamps, amplifiers, and other analog parts. Move the program to a board that is identical digitally, and it failed because the chips weren't analog exact. You wouldn't have seen that behavior in a purely digital simulation.
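
      For anyone wondering what "break it into genomes and pair them off" means concretely, here is a bare-bones genetic algorithm over bitstrings. The fitness function is a stand-in (count the 1-bits), not the actual FPGA scoring, and the population size and mutation rate are arbitrary:

      # Bare-bones genetic algorithm over bitstring "genomes" (illustrative fitness only).
      import random

      GENOME_LEN = 64
      POP_SIZE = 40
      GENERATIONS = 200
      MUTATION_RATE = 0.01

      def random_genome():
          return [random.randint(0, 1) for _ in range(GENOME_LEN)]

      def fitness(genome):
          # Placeholder: count of 1-bits. In the FPGA experiment this would be
          # "how well does the configured chip respond to the spoken commands".
          return sum(genome)

      def crossover(a, b):
          cut = random.randrange(1, GENOME_LEN)
          return a[:cut] + b[cut:]

      def mutate(genome):
          return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

      population = [random_genome() for _ in range(POP_SIZE)]
      for gen in range(GENERATIONS):
          # Keep the best half as parents ("pair off the successful programs").
          population.sort(key=fitness, reverse=True)
          parents = population[: POP_SIZE // 2]
          children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE - len(parents))]
          population = parents + children

      print("best fitness:", fitness(population[0]))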

      • by Chris Burke ( 6130 ) on Monday October 26, 2009 @02:28PM (#29875249) Homepage

        It turns out the algorithm had evolved to take advantage of the analog properties of the specific chips in that particular board. The algorithm didn't see the board as a digital thing. It saw it as a collection of opamps, amplifiers, and other analog parts. Move the program to a board that is identical digitally, and it failed because the chips weren't analog exact. You wouldn't have seen that behavior in a purely digital simulation.

        Yeah, I remember that, but differently (or maybe it's a similar but different incident). What I recall is that he looked at the working design and saw that it included a section that wasn't connected to anything else. Thinking this was just random waste, he removed it. Then it stopped working. Capacitive and inductive effects from the 'disconnected' section were affecting the main 'working' section, making a complicated analog circuit.

        In either case (and both are certainly possible outcomes), this outlines what is so awesome about Genetic Algorithms and the natural evolution that inspires them -- no preconceived notions about what the solution should look like. Whatever works, works, and that's literally all that matters. We humans very often start with a picture in mind of what the answer "should" be, and it limits our thinking. On the other hand, a lot of times we have those preconceived notions, like "this circuit should be digital, not analog," for very good reasons, and we simply fail to notify the GA of that requirement. Which also makes GAs fun. :)

      • You would if you virtualized those analog parts.

        That's what most virtual modeling does, whether it's stress analysis in AutoCAD or reproducing that "tube warmth" from a solid-state amplifier by massaging the waveform.

        • You would if you virtualized those analog parts.

          Yes, you would, but you'd also take a hit to simulation throughput, I'm guessing a pretty significant one, too. I'm not sure you'd gain anything specifically more useful than you would in a pure digital approach without this kind of low level detail, either. More interesting to add something a bit more "macro" in the sense that it's a high level behavior / feature you can see and evaluate by simple observation.

          My qualifications to guess? I'm the author

      • by janimal ( 172428 )

        It's true that the simulation will not simulate analog properties, but then again, that's not your desired behaviour. You want to be able to copy your boards, so your evolved "solution" can be manufactured after you've reached it.

        I read a similar article in Scientific American in the early '90s. The problem was recognizing a 1000 Hz signal on an input. The chips also learned to recognize it using their analog, instead of their digital, properties, and the evolved program could not be copied to a different chip.

  • by Thanshin ( 1188877 ) on Monday October 26, 2009 @01:53PM (#29874767)

    they also evolved the ability to deceive.

    Obviously, once you've proved the entity has the ability to deceive, you must distrust any further results.

    • That's funny, but also a very interesting point.
      • That's funny, but also a very interesting point.

        They are "deceiving" each other, not the researchers : " By the 50th generation, some bots eventually learned not to flash their blue light as frequently when they were near the food so they wouldn’t draw the attention of other robots." I don't know if deception is really accurate in this case since to me it suggests intent while that's not the case here. Maybe natural "camouflage" like you see in animals is a better analogy.

    • Re: (Score:2, Insightful)

      by Hinhule ( 811436 )

      And you'll have to go back to your earlier results and wonder, when did it start deceiving?

    • Re: (Score:2, Informative)

      Lawyers?
  • Robotic Evolution (Score:3, Interesting)

    by allknowingfrog ( 1661721 ) on Monday October 26, 2009 @01:53PM (#29874773) Journal
    Do I understand this correctly? On top of superhuman strength and intelligence, we're now making steps toward robot evolution? When robots rule the world, do you think they'll debate whether or not they actually evolved from primitive PCs?

    "You fool! We were created in our present form by the great nerd in the sky! Shun the non-believer!"
    • Evolving their AIs, yes, not their physical capabilities. Genetic Algorithms have been in use for AI programming for quite some time now.

      On an unrelated topic, have you heard the good news of Robot Jesus?

    • Calm down, if you watch the video, you'll see we can easily outrun them.
    • "You fool! We were created in our present form by the great nerd in the sky! Shun the non-believer!"

      Sounds very much like the scenario in "Saturn's Children". All the humans have died off, and only the sentient artificial servants are left. The weird thing (well, one of them) is that they have all heard of "Evolution", but view it as some crazy old ancient religion that only the simple-minded would believe.

  • by fuo ( 941897 ) on Monday October 26, 2009 @02:05PM (#29874941)
  • Of course he was only joking! He knows just as well as we all do that the outbreak of a zombie apocalypse is way more likely than his swarm bots eating our brains. Because the robots won't reproduce exponentially by eating your brains, they will have to rely on the superior robotics skills of the zombies to survive.
  • We have not even realized swarm stupidity yet; how can they claim swarm intelligence?
    • I don't know where you live but around here we realized swarm stupidity a long time ago. Then again, I doubt the swarm has figured it out yet.

    • by Valdrax ( 32670 )

      We have not even realized swarm stupidity yet; how can they claim swarm intelligence?

      "Stupidity" can't exist without intelligence. "Stupidity" is what you call it when one intelligence rates the performance of another intelligence, and it's usually measured against a background of the subject species' average intelligence. (i.e. A "smart dog" is "smart for a dog," not smart compared to a human.)

      Until the robot swarm has identifiable intelligence to begin with, there's no more point in claiming stupidity than there is to claim stupidity for an amoeba or a chair. Therefore, it's not putti

  • by 140Mandak262Jamuna ( 970587 ) on Monday October 26, 2009 @02:41PM (#29875431) Journal
    Just with limited human intelligence, limited resources, and limited ability, the researchers are able to create great levels of cooperation among mindless robots without any free will. Makes me wonder: if we are designed, as many Intelligent Design advocates claim we are, was the designer "intelligent"? With infinite wisdom, omnipotence, and infinite resources, the Designer (or Designers) should have been able to create much more cooperative human beings. No wars, all peace. I wonder how they (the IDists) are able to square their ability to "infer design" with the obvious "deficiencies of design".
    • by vlm ( 69642 )

      With infinite wisdom, omnipotence, and infinite resources, the Designer (or Designers) should have been able to create much more cooperative human beings. No wars, all peace.

      Well, by Norse mythology, Odin, Vili, and Ve created humans to fight in the final battle of Ragnarok, which wouldn't be much of a battle if humans just sat around all day and posted to Slashdot. The world is supposed to end in flames; perhaps Ragnarok will be started by a vi vs. emacs flamewar on Slashdot. Certainly Norse mythology fits the human condition much more closely than Christian mythology. Which would imply...

      I wonder how they (the IDists) are able to square their ability to "infer design" with the obvious "deficiencies of design".

      If you really want to mess with the heads of IDers, ask them what they'd do i

    • soooooo... Destiny vs. free will & self-determination?

    • I have an even better question for ID'ers. What does THIS say about their so-called intelligent designer?: http://en.wikipedia.org/wiki/Penis_plant [wikipedia.org]

      Were we created by Beavis and Butthead? I can imagine the scene on Day 3 or thereabouts of the Creation:

      [God] Huh-huhuhuh-huh-huh. Hey, Lucifer. Check this out, dude. *zap!* It's a schlong cactus.
      [Lucifer] Heh-m-heh-heh. Yeah, that's pretty cool, m-heheh. Schlong. ...what's a schlong?

    • You assume that collective peace and smurfiness is the ultimate goal, and not individualistic peace/enlightenment/salvation/etc, which most religions tend to focus on.
  • Why even bother with robots? So it looks more real and tangible than just a computer simulation? Maybe, but other than that it's a waste of time and resources. Anything you could learn you could learn from a simulation of those robots, since this is entirely an algorithmic problem. I guess these guys just like to play with robots.

    • Re:Waste of time (Score:4, Insightful)

      by jfruhlinger ( 470035 ) on Monday October 26, 2009 @03:00PM (#29875645) Homepage

      I imagine that there might be interesting results that come from putting objects into an environment where you don't control all the variables. I've heard of cases where the robots end up using features of their own hardware (which is generally cobbled together from off the shelf parts) that the researchers never anticipated.

      • I ran into a great example of the kinds of things that digital simulations don't model in an entry-level digital electronics class I took many moons ago. I used a program called "Digital Works" to design my digital circuits before I would build them, since modeling electronic circuitry is far faster and far easier than actually building it (even on a breadboard). Eventually, I built a circuit that was complicated enough that the outputs of one stage could no longer provide enough current to tr
    • Re:Waste of time (Score:4, Insightful)

      by thesandtiger ( 819476 ) on Monday October 26, 2009 @03:55PM (#29876307)

      Because intelligence isn't just a software thing. At least not in humans.

      I recall reading about field-programmable gate arrays being used in an experiment with genetic algorithms. They wanted to force the FPGAs to evolve to tell the difference between two sounds of different frequencies. Eventually they wound up with chips that accomplished the task in a variety of ways - ways that worked but for no explicable reason, some of them taking advantage of tiny differences between the individual (identical, at least from a manufacturing perspective) chips, and some even relying on slight differences in the room's environment. This was years ago.

      Simulations won't have those little idiosyncrasies between individual units and thus might miss a huge component. Variation among individuals that exists only in software misses the whole concept of variation between individuals that comes from hardware, and from the interaction between the two.

      • I remember stories about that experiment. There were supposedly sections of circuitry that were not connected, but were crucial to the rest of the FPGA, and mere induction didn't explain the way in which they communicated (people theorized that temperature variance and expansion/contraction made the difference).
    • Clearly you missed the point of the Great Movie "Short Circuit"

    • Our computers are not powerful enough to simulate reality to the same level of detail that real devices operate at, and even if they were, the programming effort would be enormous.

      It is far more efficient to use real devices, although simulations can be very useful also.
      • by 4D6963 ( 933028 )

        How many things do you need to simulate when you're exploring AI algorithms? Besides, you have better control when things are simulated: you can speed things up tremendously, you can get as many units as you need (no need to build them), and you don't have to work out the kinks of making a robot that interacts with its environment.

        I think that outweighs any of the bullshit effects another poster mentioned.

  • by jfruhlinger ( 470035 ) on Monday October 26, 2009 @02:58PM (#29875611) Homepage

    One question that intrigues me is just how human-readable the code produced by such genetic algorithms is. Some of the practical promise of this work is that it produces problem-solving code in ways very different from those of human programmers -- but how can such code be maintained by humans? It's a bit like making an engineer try to figure out how your lower intestine works.

    • Maybe we'll need a new breed of biological programmers. Maybe eSurgeons?

    • Re: (Score:3, Informative)

      From what I've read on the subject of machine evolution (mostly articles for the layperson), the end results are often completely baffling. It works, but the reason why isn't very obvious. In a few cases, I recall reading about evolved antenna schematics & shapes that worked REALLY well, but made absolutely no sense, or took advantage of things that engineers normally consider flaws/problems to be overcome in design.

      So yeah, it'd probably come up with code & designs that are pretty difficult to parse.

    • I don't think it's possible to look at a neural network and understand what's going on, other than from the math perspective, in that you know in general terms what a neural network does (function approximation).
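
      To make that opacity concrete, here is a tiny fixed-weight network that computes XOR. The weights below are one known working set that I've annotated after the fact; in a trained or evolved network they would just be arbitrary-looking floats with no labels attached:

      # Tiny fixed-weight network for XOR: the numbers work, but they don't "explain" themselves.
      def step(x):
          return 1 if x > 0 else 0

      # Hidden layer: two units, each with weights (w1, w2, bias).
      HIDDEN = [(1, 1, -0.5),    # fires when at least one input is 1  (OR-ish)
                (1, 1, -1.5)]    # fires only when both inputs are 1   (AND-ish)
      # Output unit combines them: OR minus twice AND gives XOR.
      OUTPUT = (1, -2, -0.5)

      def forward(a, b):
          h = [step(w1 * a + w2 * b + bias) for w1, w2, bias in HIDDEN]
          w1, w2, bias = OUTPUT
          return step(w1 * h[0] + w2 * h[1] + bias)

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, "->", forward(a, b))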
  • "Evolution of Communication in Perfect and Imperfect Worlds "
    http://sunysb.edu/philosophy//faculty/pgrim/pgrim_publications.html [sunysb.edu]
    http://www.sunysb.edu/philosophy/faculty/pgrim/evolution.htm [sunysb.edu]
    "We extend previous work on cooperation to some related questions regarding the evolution of simple forms of communication. The evolution of cooperation within the iterated Prisoner's Dilemma has been shown to follow different patterns, with significantly different outcomes, depending on whether the features of the model a

  • Dup dup! (Score:3, Informative)

    by Culture20 ( 968837 ) on Monday October 26, 2009 @05:23PM (#29877637)
    Same story more than a year ago: http://hardware.slashdot.org/hardware/08/01/19/0258214.shtml [slashdot.org]

    And off-topic: $&^@%! Taco, what's up with the popups that sneak past Firefox's popup blocker? I've dutifully allowed advertising to continue, despite having that checkbox I could click to turn ads off for good behavior. Do I really need to turn on AdBlock and NoScript for /.? Really?
  • Ok, what did this study teach us that wasn't learned years ago in (for example) Boids [red3d.com] (1987), Core War [corewars.org] (1984), and Tierra [wikipedia.org] (1991)? I mean, it's cool having little bots running around a tabletop and all, but I was simulating the same behaviors on my '286 back in the mid 90's.
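
    For reference, the 1987 Boids model needs only three local rules per agent: cohesion, separation, and alignment. A compressed sketch, with positions and velocities as complex numbers purely for brevity and every constant chosen arbitrarily:

    # Compressed Boids sketch: cohesion, alignment, separation (constants are arbitrary).
    import random

    N = 30
    pos = [complex(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N)]
    vel = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

    def step():
        global pos, vel
        new_vel = []
        for i in range(N):
            others = [j for j in range(N) if j != i]
            center = sum(pos[j] for j in others) / len(others)
            avg_vel = sum(vel[j] for j in others) / len(others)
            cohesion = (center - pos[i]) * 0.01      # steer toward the flock's center
            alignment = (avg_vel - vel[i]) * 0.05    # match the neighbors' heading
            separation = sum(pos[i] - pos[j]         # push away from very close boids
                             for j in others if abs(pos[i] - pos[j]) < 5) * 0.1
            new_vel.append(vel[i] + cohesion + alignment + separation)
        vel = new_vel
        pos = [p + v for p, v in zip(pos, vel)]

    for _ in range(100):
        step()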

    • Were your robots' behaviors evolved or scripted? If evolved, did the robots lie about finding food when they really found poison, then move to the real food quietly while the other robots "ate" the poison? Although this isn't new (/. covered the same or a similar experiment last January), it's newer than '90s robot behavior.
      • I wrote a simulated world back in the '90s where this was possible. With just a handful of simple rules that can mutate from generation to generation, you can reproduce this behavior. In my game, the creature that could eat the most reproduced the most. Each generation mutates slightly. Eventually you get a mutation in a creature that warns away its peers, and then it gets all the food and makes lots of copies of itself. I don't have any idea whether I evolved such a creature... but it seems pretty likely.
      • Didn't bother with bots - I concentrated on the steak, not the sizzle, and stayed in software. Their being hardware robots rather than software agents doesn't really change the underlying behavior. My agents' behaviors were evolved; though they didn't evolve the same behaviors as those in the experiment, they did evolve unique behaviors of their own.

        Which is my point: they discovered specific new behaviors that arose because of the specific features of their environment - something long known to occur.

  • I'm surprised they didn't use a Windows symbol instead of that skull and crossbones.

    That apple looks like a familiar sticker one gets when buying a certain computer.
  • It slows down exponentially with time. No apocalypse there.
