
Evolutionary Computing Via FPGAs

chrisd posted more than 12 years ago | from the fpgas-as-asic dept.

Hardware

fm6 writes "There's this computer scientist named Adrian Thompson who's into what he calls "soft computing". He takes FPGAs and programs them to "evolve", Darwin-style. The chip modifies its own logic randomly. Changes that improve the chip's ability to do some task are kept; others are discarded. He's actually succeeded in producing a chip that recognizes a tone. The scary part: Thompson cannot explain exactly how the chip works! Article here."


first post (-1, Offtopic)

Marvill (457378) | more than 12 years ago | (#2761616)

twenty seconds my ass

Hal, open the pod bay doors, please... (2)

Bonker (243350) | more than 12 years ago | (#2761620)

"I'm sorry, Dave. I can't do that."

Scary, him not being able to explain exactly how the thing works. Still, any good creation is ultimately the creation of madness.

Hmm... Deja vu (0)

Anonymous Coward | more than 12 years ago | (#2761625)

This is obviously a duplicate from a while back. Please do your homework.

Aged... (3, Interesting)

_Knots (165356) | more than 12 years ago | (#2761626)

This has been around a long while. I recall (sorry, no reference, somebody help me out here!) reading about this a long while ago in Science/Nature/SciAm.

Still, the technology's fascinating. Though I'm a little shocked that the latest articles still have no other examples (in detail, that bit about HAL doesn't count) than the two-tone recognition.

More detail (if memory serves): the FPGA outputs a logic LOW on a 100-Hz wave and a logic HIGH on a 1000-Hz wave. It is programmed by an evolved bit-sequence fed from a host PC. IIRC they started with random noise to wire the gates, so that's cool.

--Knots

Re:Aged... (2, Informative)

gedanken (24390) | more than 12 years ago | (#2761645)

Yep, this is old news. I first read about this in the Aug/99 issue of SciAm.

Re:Aged... (1)

Cramer (69040) | more than 12 years ago | (#2761802)

Indeed. This was on Slashdot before; however, with the nonsensical titles given to things, it's next to impossible to find it again.

Re:Aged... (1)

venekamp (445124) | more than 12 years ago | (#2761875)

I did work on this thing for my Master's thesis. This was at the beginning of 1998. I read a few interesting articles by Adrian Thompson. I don't know when he started, but it has to be well before 1998.

Straight out of a movie (2, Interesting)

cyngon (513625) | more than 12 years ago | (#2761627)

This sounds like something straight out of a movie. Terminator, anyone?

This raises the question: "Can evolving machines be controlled?"

It's possible that any machine capable of changing its logic could change logic that says "DON'T do this..." if it thinks the change is an improvement to itself.

-Bryan

How the future will be (2, Insightful)

jerw134 (409531) | more than 12 years ago | (#2761628)

I think that in the future, we will have more and more things like this happening. Our machines will create themselves, and they will be so complex that we will have no idea how they work. And eventually, they will decide they don't need us and exterminate the whole human species. Wow. I sure hope that doesn't happen!

Re:How the future will be (0)

Fembot (442827) | more than 12 years ago | (#2761816)

Sounds like we're heading towards the Matrix ;-)

help me (0, Offtopic)

Sam4522 (546517) | more than 12 years ago | (#2761634)

Dudes, I have something really important to tell you guys. I'm NOT trying to sound like a troll, so PLEASE don't take me as one. Just listen to what I have to say. This morning while I was eating breakfast and watching TV, I had a vision. I normally don't have visions and I'm not crazy, okay. In this vision I saw three red lights swirling around. They were like forming a circle as they swirled faster and faster. Then the lights moved closer together and formed a single light that turned blue. I noticed the light was the bottom of a spaceship. The blue light moved down like it was landing and I saw trees and a grass field. Then the spaceship started talking to me...like, the people inside the spaceship were talking to me even though I couldn't see them, only the ship I could see. I didn't really understand what they were saying. It sounded like whispers and gibberish. Then, all of a sudden, I woke up from the vision and was sitting at my kitchen table again. I don't understand what happened. I am absolutely serious about this. Can somebody help me understand what happened? PLEASE?

Re:help me (1)

Black Parrot (19622) | more than 12 years ago | (#2761661)


> This morning while I was eating breakfast and watching TV, I had a vision. I normally don't have visions and I'm not crazy, okay. ... Can somebody help me understand what happened? PLEASE?

I suspect you sprinkled the wrong white powder on your cereal.

Re:help me (-1, Offtopic)

Sam4522 (546517) | more than 12 years ago | (#2761683)

You all think I'm some kind of nut. Listen, this was REAL, I'm not making this up. PLEASE HELP ME! -Sam

Re:help me (0)

Anonymous Coward | more than 12 years ago | (#2761927)

They're here to take you to a better life. You'll have to end your terrestrial life first though, to free your spirit for interstellar transport.
No doubt the body they give you on arrival will be better than the lumpy one you have now.

Re:help me (1)

MindStalker (22827) | more than 12 years ago | (#2761662)

Ok, this guy has absolutely NO posting history :(

Re:help me (1)

phorge (93821) | more than 12 years ago | (#2761666)

Did you check for an anal probe and fire coming out of your ass?

Re:help me (1)

ssoringg (167623) | more than 12 years ago | (#2761732)

Errr... what's the really important thing you wanted to tell us?

Re:help me (0)

Anonymous Coward | more than 12 years ago | (#2761810)

Yeah, sorry about that, I made a left turn at Albuquerque. Try to contain your bowels next time I run into you.

Re:help me (0)

Anonymous Coward | more than 12 years ago | (#2761837)

Dude, go see a neurologist. Like right now.

It might have been because you had been drinking. Or it might have been because you were still somewhat asleep or tired. Or it might be because of something nasty in your head (hence the neurologist).

Very simple... (5, Funny)

Soko (17987) | more than 12 years ago | (#2761635)

The chip modifies its own logic randomly.

This sounds suspiciously like my lovely wife.

The scary part: Thompson cannot explain exactly how the chip works!

I knew it. Male engineer, female chips. Easy explanation.

Soko

(Posting from the basement so said lovely wife doesn't tear off my baaa-aa-allsssss.... YOWWWUUCH!!!!)

Re:Very simple... (2)

peterjm (1865) | more than 12 years ago | (#2761776)

hahaha.
she probably beat you down after catching you preview that comment, and then added the "lovely" before every mention of wife, right?

Genetic Algorithms are not new (5, Informative)

Sanity (1431) | more than 12 years ago | (#2761639)

Genetic Algorithms, and the subset of the field called Genetic Programming, have been around for a while, and there is some really amazing stuff out there. For example, Tierra [talkorigins.org] is an artificial ecosystem in which computer programs evolve and compete with each other; it has been around for over 10 years.

The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of. It is tempting to blame this on lack of computing power, but I am not sure that is the real reason. Either way, the possibility of automated design is very exciting indeed and I hope more people find ways to apply it in the real world.

Re:Genetic Algorithms are not new (2)

Black Parrot (19622) | more than 12 years ago | (#2761670)


> The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of. It is tempting to blame this on lack of computing power, but I am not sure that is the real reason. Either way, the possibility of automated design is very exciting indeed and I hope more people find ways to apply it in the real world.

I don't remember the details, but wasn't one of the /. Beowulf articles from a year or so ago about someone who had set up a B-cluster to run a GA to find "patentable algorithms"?

I agree that there doesn't seem to be much by way of practical applications for GAs, but the technology has come a long way and the CPU time that can be thrown at a run is growing according to Moore's Law, so I would not be surprised to start seeing some noteworthy results coming out of the field within the next decade or so. I do know of cases where people have tried to use it for industrial optimization problems, but I don't know whether it has been adopted as a mainstream technology for that sort of thing.

Re:Genetic Algorithms are not new (4, Informative)

Dr. Awktagon (233360) | more than 12 years ago | (#2761724)

The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of.

They are good for optimizing functions of very many variables: for instance, the weights for a spam-scoring system, to maximize the score over a sample of junk mails and minimize it over a sample of non-spam mails.

I.e., you have a rule that matches the word "viagra" and a rule that matches the word "money" in a subject. Obviously the first one should count more (unless you talk about viagra a lot in your emails), but by how much? Imagine you have hundreds of rules you came up with; a GA can optimize the weights of each rule, if you have a good selection of emails to let it evolve over.
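
Something like this toy sketch, say. Everything in it is invented for illustration (the two rules, the four "emails", the GA settings); a real system would have hundreds of rules and a large corpus:

import random

# Hypothetical rules: each returns True if an email matches it.
RULES = [
    lambda text: "viagra" in text.lower(),
    lambda text: "money" in text.lower(),
]

# Tiny made-up corpus standing in for a real mail archive.
SPAM = ["Cheap VIAGRA and free money!!!", "make money fast with viagra"]
HAM = ["Lunch tomorrow?", "The grant money came through"]

def score(text, weights):
    """Weighted sum of rule hits for one email."""
    return sum(w for rule, w in zip(RULES, weights) if rule(text))

def fitness(weights):
    """Reward weights that score spam high and non-spam low."""
    return (sum(score(t, weights) for t in SPAM)
            - sum(score(t, weights) for t in HAM))

def evolve(pop_size=20, generations=50):
    pop = [[random.uniform(0, 5) for _ in RULES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)     # pick two parents
            children.append([random.choice(g) + random.gauss(0, 0.1)
                             for g in zip(a, b)])  # crossover plus mutation
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())   # the "viagra" rule should end up with the larger weight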

Using GAs to filter spam (2)

Sanity (1431) | more than 12 years ago | (#2761829)

It is funny you should mention this, because a few years ago I wrote a simple piece of software which attempted to evolve a regular expression (actually, it was a subset of the standard R.E. language) that could filter spam. It never really got far beyond being a toy, although I did give the code to the Jazilla project; not sure if they did anything with it, though...

Re:Genetic Algorithms are not new (3, Informative)

Matts (1628) | more than 12 years ago | (#2761843)

This is what SpamAssassin [taint.org] is doing, and it's becoming incredibly accurate (it was already 99% accurate before they used GAs).

Is that enough? (2)

jcr (53032) | more than 12 years ago | (#2761863)

it's becoming incredibly accurate (it was already 99% accurate before they used GAs)

I wonder just how effective a widely-deployed spam-killing technology would have to be to make spam a money-losing proposition in nearly all cases.

-jcr

Re:Is that enough? (2)

RoninM (105723) | more than 12 years ago | (#2761880)

Has there ever been proof that spam isn't already a money-losing proposition in nearly all cases? I can't imagine many people are netted by it, since it is so arbitrary and comes in such volume.

Re:Genetic Algorithms are not new (2)

sabinm (447146) | more than 12 years ago | (#2761789)

The reason why you can't get any practical application out of it is simple biology 101. When organisms evolve, survival of the fittest only means that the organism passes genetic material to a reproduced organism derived from itself. This *Does_Not* mean that the organism is the best at anything. Fittest may mean that the guy who should have drowned hopped on the back of the guy trying to save him, and the *rescuer* is unfit because he/she was not able to pass on genetic material, because he/she died in the process.

Imagine that there was a super fast and highly intelligent structure in this chip that was thrown out because its pathways took too much energy and caused too much heat, while another less spectacular construction happened to survive because it did half the work at half the efficiency yet cost less energy and so produced less heat. So you might come up with a chip that is an evolutionary dead end and way less efficient; sure it can hear a tone, but more than that may not be possible.

Re:Genetic Algorithms are not new (1)

Ziviyr (95582) | more than 12 years ago | (#2761876)

The reason why you can't get any practical application out of it is simple biology 101. When organisms evolve, survival of the fittest only means that the organism passes genetic material to a reproduced organism derived from itself. This *Does_Not* mean that the organism is the best at anything.

That's why you build virtual snipers into your virtual ecology that take joy in murdering the Timmys of your simulation.

Or set up a Doom type interface to it and do the dirty work yourself! ;-)

Re:Genetic Algorithms are not new (2, Informative)

venekamp (445124) | more than 12 years ago | (#2761895)

In the beginning there were genetic algorithms only; genetic programming was developed later. It was John Holland, with Adaptation in Natural and Artificial Systems in 1975, who first used the idea of evolution. It was Koza during the '90s who started genetic programming. The two are very different, though both use the evolutionary approach of creating new solutions and selecting the most promising ones. Genetic algorithms use at their heart bit strings that represent a solution, while genetic programming works on trees of instructions (like: turn left, walk).
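
To make the difference concrete, here is a minimal sketch; the bit string and the instruction tree below are invented examples, and real GP systems define their own primitive sets:

import random

# GA: a genome is a fixed-length bit string; mutation flips bits.
ga_genome = [1, 0, 1, 1, 0, 0, 1, 0]

def mutate_bits(genome, rate=0.1):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# GP: a genome is a tree of instructions (nested tuples here);
# mutation replaces a randomly chosen subtree with a new one.
gp_genome = ("if_wall_ahead", ("turn_left",), ("walk",))

LEAVES = [("walk",), ("turn_left",), ("turn_right",)]

def mutate_tree(tree):
    op, *children = tree
    if not children or random.random() < 0.3:
        return random.choice(LEAVES)            # replace this subtree
    i = random.randrange(len(children))
    children[i] = mutate_tree(children[i])      # descend into a child
    return (op, *children)

print(mutate_bits(ga_genome))
print(mutate_tree(gp_genome))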

Re:Genetic Algorithms are not new (1)

millwood (542462) | more than 12 years ago | (#2761904)

The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of.

I was under the impression that the C++ STL was a direct result of Stepanov's work in GAs.

Re:Genetic Algorithms are not new (3, Interesting)

larien (5608) | more than 12 years ago | (#2761922)

I believe one use that has been found for them is in creating exam timetables; you have a clear set of guidelines (i.e. you want these exams spaced out, these cannot clash etc) and you leave a computer to work them out. IIRC, Edinburgh University [ed.ac.uk] uses a program using GAs for this very purpose.

Also, a lot of what is being discussed sounds like Neural Networks as well: gates interlinking and 'learning'. I found it interesting during my MSc, and the field shows some promise if it can get past the issue discussed here: "how do you trust something you can't explain?"

Exciting times ahead for 'AI' (4, Interesting)

wackybrit (321117) | more than 12 years ago | (#2761642)

Thompson's chip was doing its work preternaturally well. But how? Out of 100 logic cells he had assigned to the task, only a third seemed to be critical to the circuit's work.

Isn't this how a regular brain works? Or, at least close. I recall being taught something called the 80/20 rule, that applies to almost anything and everything. Doesn't 20% of the brain do 80% of the work?

This article is pretty interesting though. I'm not sure how much is true (newsobserver is hardly the New Scientist) but these devices look like they could be the way of the future.

Some people will argue that it's merely a computer program running in these chips and that 'real' creatures are actually 'conscious'. How do we know that? How do we know that the mere task of processing is not 'consciousness'?

On the other side, how do we know that animals are self-aware? When I watch ants, I could just as easily be watching SimAnt, for all the intelligence they seem to have. A computer could do pretty much everything as spontaneously and as accurately as an ant could.

I think as the years pass by, we'll see chips pushing the envelope. Soon we'll have chips that can act in *exactly* the same way as a cat or dog brain. Then what will be the difference between the 'consciousness' of that chip and the consciousness of an average dog? I say, none.

I don't like to call this Artificial Intelligence. It's real intelligence. Who knows that some sort of 'god' didn't just program us using their own form of electronics based on carbon rather than silicon?

One day we'll reach human level. I can't wait.

Re:Exciting times ahead for 'AI' (0)

ndogg (158021) | more than 12 years ago | (#2761692)

Can you define consciousness? Some people will argue that everything is conscious, even that chair that you're sitting on (or not sitting on), or that there is a sort of universal consciousness (something like the Hindu Brahman). What consciousness is remains hotly debated.

The other question is, what exactly is intelligence? If we can't clearly define it, do we really know what is intelligent? Do we know WHO is intelligent? If we can't clearly define it, can we really create something that is intelligent? Perhaps we can evolve things into intelligence, but then, did we really create it, or did evolution create it?

//--End philosophical stuff here--

This is interesting technology for devices that have very specific applications, but not for general computing, and it can't be applied to anything regarding AI. Basically, it uses evolution to find the best way to solve a problem, but not a set of problems. I'm sure it could be modified to use evolution to find circuits that do general computing better, but that would be on a much higher complexity level. It might be a while until that happens.

Re:Exciting times ahead for 'AI' (1)

Sase (311326) | more than 12 years ago | (#2761755)

How do we know you're conscious?

I'm just curious, am I conscious?

It can never work in *exactly* the same way as a cat or dog brain works... we don't know how the brain works; in fact, we're FAR from knowing how it works.

:) good argument

Re:Exciting times ahead for 'AI' (2, Interesting)

anshil (302405) | more than 12 years ago | (#2761791)

I recall being taught something called the 80/20 rule, that applies to almost anything and everything.

Pah, that's one of those all-unifying claims I shudder at whenever I see one; they're normally used by fanatics. I forget which scientist said "It seems every new theory is first far overstated, before it finds its right place in science", especially back when the theory of evolution was new and was applied to really everything, including a lot of places where it did not fit at all.

As for AI, our calculation capability is still far from being able to "simulate" a human brain. The human brain has 20 giga-neurons, with 2000-5000 synapses per neuron (the basic calculation unit), resulting in a capacity of some 10 tera"bytes". It is frightening that in 2001 this is not so far off: theoretically we would already have enough storage capacity to "store" a human brain on hard disk. But in terms of calculation capability we are luckily still years away, since all the neurons in our brain can work in parallel. We have outrageous serial calculation capability, but the brain's capacity for parallel computing is still enormous by comparison.

To get near to human brains, the von Neumann machines we're using today, with a central CPU, are the wrong way; although in some key respects they can already match the human brain, they will not match its ability to do a lot of calculations at the same time. The way to match it lies not in the CPU but in FPGAs, and here we're still light years away. How many cells (""neurons"") does a typical high-performance LCA have today? 10,000 maybe? Well, that is still far, far away from the 20,000,000,000 I have in my head :o) I can still sleep in peace, not worrying about seeing AI in my lifetime, but if the doubling law of computing power holds, my children might have to face it.

Re:Exciting times ahead for 'AI' (2)

reflexreaction (526215) | more than 12 years ago | (#2761826)

Sorry, as somewhat alluded to in the article, 100% of the brain does 100% of the work, not 80/20, even if we don't completely understand how every part of it works. If you cut connections in the brain, or simply remove parts of it, then it will not work in the same way. The beautiful complexity of the brain makes it possible for us to consolidate disparate information into a coherent whole. Pattern recognition and language are two of the many things that computer science has yet to replicate.

To bring in another clarifying example, the brain works in some ways like a genome. There are thousands of genes that we have no idea what they do. One gene may produce a protein that is inhibited by another gene, which in turn inhibits the second gene's production. Throw a thousand genes into the mix, and you get a mass of confusion. Understanding what a specific gene does in the large picture is a very difficult prospect. In this respect I'm not surprised that he does not know exactly how it works.

Re:Exciting times ahead for 'AI' (1)

phossie (118421) | more than 12 years ago | (#2761868)

you don't think that this sort of computing model might have some relevance to this *other* computing model, do you? :-)

i'm willing to stake a prediction point on fpga (or *physically based*) GAs as being a superb analogue to genetic structure, physical structure, etc.

language, by the way, is a form of pattern matching, as is every abstraction.

Re:Exciting times ahead for 'AI' (0)

Anonymous Coward | more than 12 years ago | (#2761864)

How do we know that the mere task of processing is not 'consciousness'?

Because we would then be able to make gigantic conscious beings out of Legos, steam pipes, and some English drawings from the 1800s.

Some people will argue that it's merely a computer program running in these chips and that 'real' creatures are actually 'conscious'.

Do YOU argue that it is not merely a computer program relying on statistical phenomena running in these chips? Do YOU argue that you are not conscious? In case of the latter: Do you at least have awareness?

[...]How do we know that?

See above. If the act of processing can be performed mechanically, there is no magic involved in it (my axiom. A rock is a rock, even if it is shaped into a wheel). I would argue that unless either:

  a) all matter/energy is conscious, or
  b) we have been imbued with a soul, or
  c) it all is a jolly grand party trick,

we should not be conscious. In case a), all this energy is conscious after all, so we should merely have to ask the silicon to do the calculation for us (unless there is a particular silicani language), and we should have no need for the fashioning into chips. In fact, we could just ask the air or the trees to do our calculations for us, and we would get the results we need.

In case b), since we did not create ourselves, expecting us to be able to create something like ourselves is expecting rather a lot. Since these expectations are so high, it would be more prudent and temperate to stick to a less grandiose assumption until otherwise has been proven.

In case c) We need to make friends with the trickster.

Sorry if this did not make any sense. Maybe a truly random word generator can someday take my place.

playing god (4, Interesting)

Jonavin (71006) | more than 12 years ago | (#2761644)

Although this is far from creating life, it makes you wonder if our existence is also "unexplainable", even by _the_creator_ (if you believe in such a thing).

Imagine if you advance this technology to the point where you can dump a bunch of this stuff on a planet, wait a few million years, and come back to see what happens....

Re:playing god (1)

wackybrit (321117) | more than 12 years ago | (#2761652)

My point entirely. 2001: A Space Odyssey could be right.

We could simply be a bunch of 'technology' developed by another race (superior to us or not) and dumped on this planet.

If we did the same, we'd become Gods ourselves.

Perhaps that's how the universe lives? Race creates other race, dumps it off somewhere. That new race creates another race, dumps it off somewhere... ad nauseam.

After all, if we knew that the Earth was going to blow up, perhaps we'd send 'robotic life' to a planet that we couldn't inhabit.. but would carry on our legacy. Who knows that we're not the result of a race that died many eons ago.

All crazy speculation of course, but these possibilities now seem more realistic than ever before.

Re:playing god (0)

Anonymous Coward | more than 12 years ago | (#2761720)

Nuh, were all just figments in the imagination of some crazed tripper.

Now to put my penis in a vigina.

Rock on, slashdot! (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2761646)

For Windows XP users, as well as possibly ME/2000 users, do this:

1) Download http://goatse.cx/contrib/gap.zip [goatse.cx]

2) Unzip gap.zip [goatse.cx] into your "My Pictures" directory

3) Set your screensaver to "My Pictures Slideshow"

4) Set your screensaver password to a bunch of random characters that you can't remember.

GENIUS!

ep (-1)

bitchslapboy (193543) | more than 12 years ago | (#2761647)

This early post for Ida!

Sheesh...another duplicate (0, Redundant)

floW enoL (312121) | more than 12 years ago | (#2761651)

I'm too lazy to look it up, but this is a duplicate from some time ago. You'd think the editors would be able to detect duplicates better than a semi-regular reader, especially since they're getting paid to do it.

Re:Sheesh...another duplicate (-1)

Fecal Troll Matter (445929) | more than 12 years ago | (#2761658)

yadda yadda yadda...

Re:Sheesh...another duplicate (2)

agentZ (210674) | more than 12 years ago | (#2761672)

Are you suggesting we replace the editors with a series of FPGAs?

Is this the end? (0)

Anonymous Coward | more than 12 years ago | (#2761653)

Anybody know where I can buy an EMP gun in case these machines decide to try to make batteries out of me?

It Was New Scientist (1, Informative)

Anonymous Coward | more than 12 years ago | (#2761656)

That printed this story at least 5 years back, IIRC.

From their story, I got the idea that it would be hard to use the identical method to design circuits for mass production, because the designs that evolve may be dependent on any slight imperfections and/or departures from published specs of the components that are wired together in the model as it evolves. They built a second copy with parts out of the same stockroom, and it didn't work.

Genetic algorithms aren't new. (2, Interesting)

LazyDawg (519783) | more than 12 years ago | (#2761657)

Nor are FPGAs. Transputers and other self-modifying pieces of computing equipment are pretty nifty boxen, but until these stories end with descriptions of tools that indicate to scientists exactly *how* their toys are doing these amazing feats, they will not be useful for general consumption.

For example, if the transputer this guy was using generated FPGA configurations, which were then automatically translated into some Forth dialect, then his new processors could be refactored into other, more von Neumann-like equipment more easily.

A few months ago when I was first designing my stockbot, I faced similar problems trying to work with neural networks and other correlation engines. The process time was slow, and the strategies they used were not easily portable. In the end I went with a stack-based language and randomly generated code that examines historical prices. It has worked out a LOT better in the long run.

Could the machines hide their intelligence? (1)

Ingenium13 (162116) | more than 12 years ago | (#2761659)

While reading this article, I continually asked myself: if we eventually use these genetic algorithms to create software and possibly an AI, could this AI best do its job by simply appearing to do exactly as we want, and then turn on us, having hidden its true intelligence? Think about the Matrix. If we have computers evolve themselves, what better way to be the "fittest" than to appear to do as the humans want until you become smart enough, by running an internal genetic algorithm, to take over and become the dominant species? When creating these genetic algorithms, we must be very careful to be sure that there is not a background task running; it is quite possible that one could exist in a genetic-algorithm-created program more complex than those created thus far, and having no clue how the program works is not a step in the right direction.

Re: Could the machines hide their intelligence? (1)

senine (513587) | more than 12 years ago | (#2761668)


Time to pick up the remote and turn off the B-Movie "Maximum Overdrive".

Re:Could the machines hide their intelligence? (0)

Anonymous Coward | more than 12 years ago | (#2761679)

Naivety. That's what would stop them.

If you have a kid and you treat it bad, the kid doesn't clam up until it's twenty one and then club you round the head with a bat.

Actually, whoa, I think I just proved your argument for you.

Re: Could the machines hide their intelligence? (1)

Black Parrot (19622) | more than 12 years ago | (#2761858)


Could the machines hide their intelligence? Sure, why not? My programs hide their basic correctness all the time!

FPGAs and Starbridge Systems, Inc (1, Insightful)

Anonymous Coward | more than 12 years ago | (#2761660)

Starbridge Systems [starbridgesystems.com] popped up a few years ago (they might even have been mentioned on /.). At the time, the things they claimed to do and their client list made them seem like yet another hoax (a la Linux on the N64). The prices they had on their web site at that time didn't help. I mean, who would buy a 94 million dollar (if I remember right) computer... even if you had a "black" budget?! But they didn't go away, and as I bounced around to jobs with big budgets, I heard rumblings and grumblings about this group or that department and Starbridge.

Now, with the mention in this article (even though it's dated 4/01), maybe it's time for an (in)famous /. interview?

Re:FPGAs and Starbridge Systems, Inc (2)

hughk (248126) | more than 12 years ago | (#2761809)

The original open DES cracking machine [oreilly.com] used FPGAs so I guess that Starbridge have at least one customer [nsa.gov] !!!!

Reconfigurable FPGAs would be better because they get around the problem where the message was encrypted using something other than DES.

Curveball way out to left field (1, Interesting)

BlueJay465 (216717) | more than 12 years ago | (#2761665)

I could be off my rocker, but a SWAG [everything2.com] that occurred to me could be that he may have stumbled upon a Natural Law (i.e. 'gravity' or 'no two forms of matter can occupy the same space at any given time') that has always been in existence and has manifested itself in this. Evolution could very well be the correct term, at a light speed rate of course. Could this be the first step into determining or simulating where the source of life came from, or could this lead to the destruction of it? (insert your favorite Sci-Fi scenario here)

Re:Curveball way out to left field (1)

Black Parrot (19622) | more than 12 years ago | (#2761684)


> Could this be the first step into determining or simulating where the source of life came from, or could this lead to the destruction of it? (insert your favorite Sci-Fi scenario here)

My favorite Sci-Fi scenario involves me and a bunch of robo-babes from Sexworld, but I don't see what that has to do with your musings.

Re:Curveball way out to left field (4, Funny)

Sanity (1431) | more than 12 years ago | (#2761761)

Evolution could very well be the correct term, at a light speed rate of course. Could this be the first step into determining or simulating where the source of life came from, or could this lead to the destruction of it? (insert your favorite Sci-Fi scenario here)
Is it just me, or does this pseudo-scientific babble actually make any sense to anyone?

Re:Curveball way out to left field (1)

danpat (119101) | more than 12 years ago | (#2761893)

Not a lot. Seems like the author should do a bit of research and catch up to where everyone else is at with these lines of thinking...

Mind you, at least the author has started down that path; I just hope they end up in the right place.

Stability (5, Insightful)

Detritus (11846) | more than 12 years ago | (#2761667)

Does the circuit still work properly if the temperature increases by 10 C? What if the FPGA data file is loaded into an FPGA from a different vendor or an FPGA fabbed on a newer process?

Re:Stability (1)

quarter (14910) | more than 12 years ago | (#2761677)

I read the article in Sci.Am. 3 years ago. The thing didn't even work if it was plugged into another computer. Future work was to evolve more robust behavior.

Re:Stability (0)

Anonymous Coward | more than 12 years ago | (#2761769)

If I recall correctly, Adrian commented in his paper from GP96 about the potential use of evolution in accommodating temperature ranges and different chip characteristics. On a somewhat related note, a paper of his on fault tolerance is here [nec.com] .

-- dhilvert@ugcs.caltech.edu

Re:Stability (1)

Mike McTernan (260224) | more than 12 years ago | (#2761924)

Does the circuit still work properly if the temperature increases by 10 C? What if the FPGA data file is loaded into an FPGA from a different vendor or an FPGA fabbed on a newer process?

You just need to make the fitness function take into account the 'parameters' that you mention. One way of doing this is to test each solution on a range of FPGAs and then make the fitness reflect performance on all of the FPGAs. A weighted mean would probably do (i.e. make sure it works well at 10-80 degrees C, and then it should degrade gracefully outside this range?)
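
Roughly like this, assuming a hypothetical measure_fitness(genome, fpga, temp) test harness and made-up environment weights:

import random

# Hypothetical test environments: (fpga_id, temperature_C, weight).
# Conditions inside the 10-80 C target range count for more.
ENVIRONMENTS = [
    ("fpga_a", 10, 1.0),
    ("fpga_a", 45, 1.0),
    ("fpga_a", 80, 1.0),
    ("fpga_b", 45, 1.0),
    ("fpga_b", 95, 0.25),   # outside the range: allowed to degrade
]

def robust_fitness(genome, measure_fitness):
    """Weighted mean of per-environment scores, so a solution that only
    works on one chip at one temperature is penalized."""
    total = sum(w * measure_fitness(genome, fpga, temp)
                for fpga, temp, w in ENVIRONMENTS)
    return total / sum(w for _, _, w in ENVIRONMENTS)

# Stub harness, just to show the call shape:
print(robust_fitness([1, 0, 1], lambda g, f, t: random.random()))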

Not new... Even featured. (2, Interesting)

Anonymous Coward | more than 12 years ago | (#2761671)

This experiment happened a hell of a long time ago - it was even mentioned in The Science of Discworld, which IIRC came out in 1999.

hype! hype! (2)

rabidcow (209019) | more than 12 years ago | (#2761673)

Old news... IIRC:

1. This will not lead to intelligent machines that will try to make you into toast. This is not even close to the sort of complexity of evolving bacteria.

2. The reason he doesn't understand how it's working is that the design is using the interference generated in one part of the chip someplace else. Conventional designs try to eliminate this because it's so complex to predict. This is not a matter of "some bizarre magic is happening that we don't understand and it will probably turn us all into pools of gravy."

tripe! tripe! (4, Interesting)

fireboy1919 (257783) | more than 12 years ago | (#2761847)

It is quite arguable that current hardware implementations aren't the fastest way to solve most problems (we currently eliminate complex behaviours and use only predictable gate structures), since routing alone is known to be an NP-complete problem, making the problem of routing while calculating other variables at least NP-complete. Eliminating variables makes it easy to pick a solution that is known to work, but it will not necessarily find the optimum design.

It is, in fact, "some bizarre magic," so to speak, not because we do not understand it, but because it requires considerable algorithmic search to find such an efficient (quick, small and effective) state through which the machine can produce its effect; it's magic in the same sense that a chess-playing program is magic.

The insight that you fail to grasp is that with this technique, we can take advantage of those variables that you say we should eliminate, making designs better. This allows for the possibility of a much wider range of functionality for chips than we currently have for them.

As far as complexity goes, what kind of bacteria are you thinking of that it's so far from? The techniques used in neural networks are almost all taken straight from biology. The major simplification is a lack of frequency encoding. That's pretty much it; everything else works pretty much the same. Perhaps you're under the impression that the "evolution" of bacteria changes their basic behavior. This is extremely seldom the case; usually changes in bacteria are no more drastic than the cosmetic changes that occur in a "mutating" FPGA design.

So...at least we can have the complexity of bacteria to do the work of genius hardware designers using search techniques to produce better designs.

One thing further, though: if nature is any indication, it is extremely difficult to increase the level of complexity of an organism (or in this case, of a network). I would agree that "intelligent" machines that make you into toast are a long way off, because we can't make evolving machines, only learning ones, even if they do use genetic algorithms to do it (which is essentially what viruses and bacteria do regularly, I might add).

SkyNet. (2, Insightful)

x136 (513282) | more than 12 years ago | (#2761674)

The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th.

'Nuff said.

Wow (2, Informative)

AnimeFreak (223792) | more than 12 years ago | (#2761675)

Computers are getting smarter while Humans are getting dumber (or is it just me?).

PRAISE ALMIGHTY CELERON 600 WORKSTATION UNDER MY DESK, I AM NOT WORTHY.

Similiar work a while back... (1)

Stone Rhino (532581) | more than 12 years ago | (#2761676)

was done at Brandeis University. There they developed robots through an evolutionary process and a rapid prototyping machine. It was called the "GOLEM" project. The site seems to be broken, but this [google.com] is the google cache.

Why not software simulation? (1)

BlowCat (216402) | more than 12 years ago | (#2761678)

I wonder why they used real hardware instead of simulating it in software. In the latter case it would be easier to figure out how this thing works.

This kind of experiment would be relatively easy to implement on a Beowulf cluster by simulating one or more chips on every node.

Re:Why not software simulation? (2)

Dyolf Knip (165446) | more than 12 years ago | (#2761828)

Evidently there were physical aspects of the hardware that the evolved program 'learned' to utilize. Such would not be present in a software simulation.

Re:Why not software simulation? (0)

Anonymous Coward | more than 12 years ago | (#2761871)

Well, just recently I saw a demonstration from Wind River (of VxWorks fame) where a single FPGA was performing realtime motion detection from a video camera and displaying the results on a monitor, with no CPU or software at all. A 15 dollar chip performing a task that would stretch your 1GHz Athlon. Another demo took the image from the camera and displayed it as a moving texture on the faces of a rotating cube. These things can be very fast and cheap for tasks that fit within their gate count.

older than old (1)

SafeMode (11547) | more than 12 years ago | (#2761697)

Discover magazine, June 1998 issue. "The Darwin Chip": need I say more? http://208.245.156.153/archive/outputgo.cfm?ID=1455 for all the lazy people out there. I have the magazine and some pictures on my site.
small:
http://safemode.homeip.net/small_fpga.jpg
large:
http://safemode.homeip.net/large_fpga.jpg

Book reference (0)

Anonymous Coward | more than 12 years ago | (#2761709)

This appeared in "The Science of Discworld" by Pratchett, Stewart and Cohen at the end of chapter 26, pages 193-197 (first edition, hardcover). I'm surprised it took so long to appear here, as it was published in 1999.

If you haven't read it, do so. It summarises Life, the Universe, and Everything as well as any other book could hope to do without requiring special lifting equipment.

More information on Terry Pratchett, and this book, can be found at http://www.lspace.org

Cheers, glen
astfgl@iamnota.org

Re:Book reference (0)

Anonymous Coward | more than 12 years ago | (#2761726)

I just saw on the lspace homepage that Josh Kirby, the artist who produced the covers for all of the Discworld books, has died aged 72.

http://www.au.lspace.org/art/joshkirby.html

An exceptional talent, who will be sorely missed.

glen.

Old and misinterpreted (5, Informative)

RevRigel (90335) | more than 12 years ago | (#2761718)

I have the Discover magazine this guy was on the cover of. I believe it was July of 1998 or so. It was very cool then, it's still very cool, but it's old and I don't know why it was submitted.

Additionally, the submitter severely misinterpreted what Thompson's system does. He has the FPGA programmer connected via serial or parallel (I'm not sure which), and he runs a genetic algorithm on his computer. The fitness function (the component of a GA which evaluates offspring) loads each offspring's genome (each genome in this case codes for different gate settings on the FPGA) into the FPGA; separate data acquisition equipment supplies input to the FPGA and checks the output, and based on that supplies a fitness value, which the GA uses to breed and kill off children for subsequent generations.

He has *NOT* implemented a GA inside a 1998 era FPGA (120000 gates max or so at the time on a Xilinx, which is what he was using) when he had a perfectly good freaking general purpose computer sitting right next to it.
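
So the structure is a standard GA whose fitness function happens to touch hardware. A rough sketch of just that evaluation step, with every hardware call stubbed out as a placeholder (the real interfaces are whatever his programmer and data-acquisition gear exposed; the tone frequencies follow the recollection earlier in the thread):

import random

# Placeholder hardware interface: stand-ins for the programming link
# and the data-acquisition gear described above.
def program_fpga(bitstream): pass           # load one genome onto the chip
def apply_tone(freq_hz): pass               # drive the input with a tone
def read_output(): return random.random()   # stub measurement in [0, 1]

def evaluate(genome):
    """Fitness of one offspring. This is the only step that touches
    hardware; selection, crossover and mutation all run on the host PC."""
    program_fpga(bytes(genome))
    score = 0.0
    # Target behavior per the recollection above: LOW at 100 Hz, HIGH at 1 kHz.
    for freq_hz, wanted in [(100, 0.0), (1000, 1.0)]:
        apply_tone(freq_hz)
        score += 1.0 - abs(read_output() - wanted)
    return score

print(evaluate([1, 0, 1, 1]))   # the GA calls this once per offspring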

Don't get too scared... but they are damn cool. (5, Interesting)

Uller-RM (65231) | more than 12 years ago | (#2761727)

One thing people should consider is that while Genetic Algorithms are neat, they are limited.

Here's the fundamental decoder-based GA:
* Take an array of N identically long bit strings.
* Write a function, called the fitness function, that treats a single element of the array as a solution to your problem and rates how good that solution is as a floating point number. Rate every bit string in the population of N.
* Take the M strings with the highest ratings. Create N-M new strings by randomly picking two or more parent strings, randomly picking a spot or two in them, and combining the parts.
* Rinse and repeat until the entire population is identical (a minimal sketch of this loop follows below).
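
Here's that loop as a minimal sketch, with a toy fitness function (count the 1 bits) standing in for a real problem; N, M, and the genome length are arbitrary:

import random

N, M, BITS = 30, 10, 16   # population size, survivors kept, genome length

def fitness(genome):
    """Toy rating: count of 1 bits (stands in for a real problem)."""
    return float(sum(genome))

def crossover(a, b):
    """Pick a spot at random and combine the two parts of the parents."""
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(N)]
for generation in range(500):             # safety cap on iterations
    if all(g == pop[0] for g in pop):     # stop once the population is identical
        break
    pop.sort(key=fitness, reverse=True)
    parents = pop[:M]                     # the M highest-rated strings survive
    pop = parents + [crossover(*random.sample(parents, 2))
                     for _ in range(N - M)]

print(pop[0], fitness(pop[0]))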

Their main limitation is that they take a lot of memory: the number of bits in a genome multiplied by the population size. Processing time also grows rapidly with both population size and parent genome grouping. The other problem is that they require that the problem have a quantifiable form of measurement: how do you rate an AI as a single number?

Another problem is commonly called the "superman" problem: what happens if, by chance, very early in your generations you get a gene that rates very, very high but isn't perfect? Imagine a human walking out of apes, albeit with only one arm. It'll dominate the population. GAs do not guarantee an optimal solution. For some problems this isn't an issue, or it can be avoided or reduced to a very small probability. For others, it is unacceptable.

That said, you can do some neat shit with them. This screenshot is from a project I did during undergraduate studies at UP [up.edu] , geared towards an RTS style of game, automatically generating waypoints between a start and end position. I'll probably clean it up sometime, add a little guy actually walking around the landscape, stick it in my portfolio. Yay, OpenGL eye candy. [pointofnoreturn.org]

Hoping for slashback (1)

halfline (48947) | more than 12 years ago | (#2761731)

I thought this was really cool when the first story was presented on slashdot several years ago. Since then I've been waiting for a SlashBack with more info of recent developments. Oh Well... The article seems to describe the exact same system (although in less detail).

Re:Hoping for slashback (2)

hughk (248126) | more than 12 years ago | (#2761823)

I agree. Thompson seems to have been doing some really weird stuff [susx.ac.uk] recently. I mean, single-electron gate design with genetic algorithms?

Paging THE TURD REPORT (0)

Anonymous Coward | more than 12 years ago | (#2761735)

In addition to having untreated hemorrhoids for over two months now, my stool has been an almost black color with a greenish hue and has the viscous consistency of thick mud. It takes a heavy amount of wiping to remove the feces from my anus, and I sometimes have to return to the bathroom to wipe again after I notice a crusty feeling in my butt crack.

Also, I leave skidmarks on the toilet seat behind my butt crack, and I sometimes have to wipe my ass after I pee, since I let out a greasy fart that leaves a stain inside my butt crack.

Help!

Insignificant (0)

Anonymous Coward | more than 12 years ago | (#2761740)

This is not significant work. As several people have already mentioned, this is old news. More importantly, this is just one more application of GA's - this time implementing them with hardware.

Every time a significant advancement is made (in this case GAs), there is a horde of me-toos who write a million and one papers on "yet another application" of the new technique, without any advancement of the underlying science. It happened with neural nets and it is happening with GAs (among others).

I saw this guy present his results at a conference and I was not impressed. Not only does this not contribute to the underlying science, but the generated design was weak at best. The reason he did not understand why the design worked was not because it was complicated (we are talking logic gates, after all), but because it was implemented incorrectly: some of the gate inputs were left floating. Tell me that makes for a stable system.

How we know Life, to Change forever (1)

Sase (311326) | more than 12 years ago | (#2761764)

We don't know how our brain works, how ants brains work.. and most importantly how LIFE works.

this sort of development poses serious philosophical questions that I don't think our society *can* answer.

What is life? We really don't know. Some say it's some form of intelligence... so are these chips intelligent? Yes... but are they alive?

We really don't know, and quite frankly, we'll never know.

Every explanation leads into a cycle of questions.

This technology is great; however, we don't know how it will be implemented, nor do we know IF it will be implemented. If it ever got advanced enough, we would see INTENSE legislation being thrown back and forth. Chances are, the democratic world will destroy the technology if it is dangerous.

The problem could be others. The others. The other people from some unknown country, pissed off at the world, with their hands on this technology, ready to start another war.

Interesting.

So the future begins here (1)

Kasmiur (464127) | more than 12 years ago | (#2761765)

we will have chips that will evolve and reprogram themselves so they will increase in efficiency and speed.

On the other hand, we have the ability to put chips into humans to track medical info and possibly control the populace.

I am not worried about the future. I am worried about today.

professional journalism... (1)

Zinho (17895) | more than 12 years ago | (#2761774)

OK, how seriously can I take this article if the author makes statements like this:

"Thompson's chip was doing its work preternaturally well. But how? Out of 100 logic cells he had assigned to the task, only a third seemed to be critical to the circuit's work. In other words, the circuit was more efficient by a huge order of magnitude than a similar circuit designed by humans using known principles."

Last I checked, orders of magnitude were powers of 10, and .3 was not 1/10th of 1. Maybe "huge" orders of magnitude work differently... And if NASA buys "a HAL hypercomputer from Star Bridge Systems", then their claim that it "is no larger than a regular desktop machine, yet it's roughly 1,000 times faster than traditional commercial systems" has to be true, too.

I'm excited about this technology, I hope it gets faster, but this kind of coverage isn't what it needs. And I thought that Linux had bad advocates...

Not exactly practical... (5, Interesting)

smasch (77993) | more than 12 years ago | (#2761779)

I found the paper [susx.ac.uk] on this project, and I found a few things disturbing. First of all, there was no clock: the circuit was completely asynchronous. In other words, the only timing reference they had was the timing of the FPGA itself. Trying to do something like this in silicon is difficult, and doing it in an FPGA is just plain insane. Delays in a circuit vary with just about everything: power supply voltage (and noise), temperature, different chips, the current state of the circuit, and so on. While you might be able to deal with these problems in a custom chip, an FPGA was never designed to be stable in these respects. Also mentioned is that there are several cells in the circuit that appear to have no real use, but when removed, the circuit ceases to operate. As they mention, this could be because of electromagnetic coupling or coupling through the power supplies. Again, I would never want to see something like that in one of my chips.

Another thing that bothers me: how the heck does he know which cells are being used? Last time I checked, the bitstream (programming) files for these chips were extremely proprietary, and nobody (except Xilinx) has the formats for these files. I really want to know how they know how this thing is wired.

Now I should mention, this is pretty cool from an academic standpoint, and it would be interesting if they could produce something that is both stable and useful using these techniques. It's also pretty cool that they could get this to work at all.

Re:Not exactly practical... (2)

ddent (166525) | more than 12 years ago | (#2761867)

Actually, AFAIK, Intel is working towards asynchronous chip design. There is a quote from an Intel spokesman saying that if another company had a completely asynchronous chip design which could function at somewhere near the rate of their latest chips, Intel would be toast. In fact, the P4 is a move towards asynchronous design; IIRC, some parts of it are asynchronous, or it's a design which will be more usable in an asynchronous fashion.

Evolvable Hardware Not New (2, Informative)

piehole (512670) | more than 12 years ago | (#2761785)

Several people, including James Foster [uidaho.edu] at the University of Idaho, have been doing this kind of thing for a while. He got some really interesting results, including circuits evolved to take advantage of quantum effects and highly temperature-dependent circuits. Actually, the gist of his work is that there are some severe limitations to this approach. There are references for papers on his web page.

Where do you get them? (2)

redcliffe (466773) | more than 12 years ago | (#2761788)

These FPGAs sound pretty interesting; where do you get them? Could one build a useful, interesting homebrew computer with them? Thanks,

David

Re:Where do you get them? (1)

falzer (224563) | more than 12 years ago | (#2761878)

Well, where do you buy your electronics from?

GIGO (1)

Bsobla (262180) | more than 12 years ago | (#2761804)

Any AI with a FPGA starts with a limited number of inputs:
x inputs with values 0 or 1 (or neither) == 2^x (++) possibles.
Any AI with a FPGA ends with a limited number of outputs:
y outputs with values 0 or 1 (or unknown) == 2^y (++) possibles

Inputs-to-Outputs are linked (joined, coupled, etc) by the logic between them (a 'black-box' so-to-speak). An 'evolve' can never happen without linking output to input (feedback).
So, all AI inputs/outputs are constrained by their outputs and their sampling periods.

So for some boxes, an input of (properly encoded)"what is the meaning of the universe?" will return "43". After tuning, these boxes may produce "4f*@#%(#@" or perhaps "forth", "For Linux", "forsooth, BG", "for you use..." or "for more useless answers, call your ISP, then ask BG; if in doubt, ask your mother".

This box apparently returned a tone.
Hmmm...
In the christmas/new year tradition...
this is true intelligence.

Nothing new (1)

matrix0040 (516176) | more than 12 years ago | (#2761815)

Evolving hardware is nothing new. Earlier it was done in software; in fact, there's a whole book on VLSI design using genetic algorithms (sorry, don't remember the author). Work on reconfigurable hardware has been going on for a long time now. Here's one reference: http://www.work/research/nichol2full.html [caltech.edu]

Science of Discworld has the same story (0)

Anonymous Coward | more than 12 years ago | (#2761855)

There's an account of the same story in Science of Discworld (by Terry Pratchett, Ian Stewart and Jack Cohen) in the chapter where Ian & Jack tell about 'real' evolution, genetic codes, DNA and Darwin.

Magic? (2)

r2ravens (22773) | more than 12 years ago | (#2761872)

And get this: Evolution had left five logic cells unconnected to the rest of the circuit, in a position where they should not have been able to influence its workings. Yet if Thompson disconnected them, the circuit failed.

I only have one thing to say:

Magic :: More Magic

For those unfamiliar with the story. [tuxedo.org]

Skrodes? (1)

nartz (541661) | more than 12 years ago | (#2761873)

Sounds like the skroderider's skrodes from Vernor Vinge's "A Fire Upon the Deep". No one could explain how they worked, or what any individual piece of the machine did, but it all worked. Kinda cool.

Ethical considerations as suggested by STNG (1)

TraceProgram (171114) | more than 12 years ago | (#2761884)

There was an episode of STNG in which a group of special "adaptive" robot-like drones evolved an awareness, and Data tries to save them when they are put in danger. The problem encountered in the episode was an ethical one: it asked the crew to consider what counts as intelligent, aware life for a machine. It should be a while before we are faced with such a problem, but that doesn't mean we shouldn't be asking some questions.

Personally, I can't wait for more and more of these systems to be designed, and to see how they act and react. If the statement is accurate that only a third of the circuits of a human-designed chip were used, then this is a potentially incredible resource. Drawing again from my Sci-Fi background: if you look at Isaac Asimov's robot books you will find a short story about an AI Brain that was used to create the first hyperdrive ship. While only science fiction, a computer has the advantage of being able to consider all the known rules, test its environment, and summarily report back on a problem it is given. Seeing what humans may not be able to consider, because we just don't have the perspective, is what makes these systems really valuable. In no time, computers like these evolving ones will be giving scientists new puzzles to solve, and a challenged scientist is a happy one (most of the time :)

Recommended Reading... (1)

dFaust (546790) | more than 12 years ago | (#2761899)

I can't help but be reminded of Pierre Ouellette's The Deus Machine (Random House, ISBN: 0679424075).

Quite a good read, at the heart of which lies a computer which is constantly redesigning itself to make itself better, evolving well beyond the point where humans understand how it works. Eventually (not a spoiler) it even decides to stop evolving... it concludes that each time it designs new hardware to replace itself, it is essentially "killing" itself.

Great read (it deals with much more, such as some twisted biological mutation and perverse, sadistic madmen), though I think I'd like to keep it science fiction.

it begins (1)

degauss (88443) | more than 12 years ago | (#2761914)

well... it looks as if the Matrix could happen now. Since computers can improve themselves, it's only a matter of time before they discover they don't need us and enslave us as energy generators for their cycles...
crud

I got dibs on being CowboyNeal in the new virtual world... as long as I'm going to be the robots' whipping boy, why not Slashdot's as well?

Good Heavens! (1)

donny (165416) | more than 12 years ago | (#2761932)

This guy has already been featured (at least) twice before on Slashdot! The exact same article is even mentioned in one of them.

Check these out:

http://slashdot.org/article.pl?sid=99/08/27/1238213 [slashdot.org]

http://slashdot.org/article.pl?sid=01/04/10/0436220 [slashdot.org]

Stop the insanity!

Donny
