
Building a Silicon Brain

kdawson posted more than 7 years ago | from the million-neuromimes dept.

Science 236

prostoalex tips us to an article in MIT's Technology Review on a Stanford scientist's plan to replicate the processes inside the human brain with silicon. Quoting: "Kwabena Boahen, a neuroengineer at Stanford University, is planning the most ambitious neuromorphic project to date: creating a silicon model of the cortex. The first-generation design will be composed of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons. Groups of neurons can be set to have different electrical properties, mimicking different types of cells in the cortex. Engineers can also program specific connections between the cells to model the architecture in different parts of the cortex."
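
For scale, the board described above works out to 16 x 256 x 256 = 1,048,576 silicon neurons. A rough sketch of what "groups with different electrical properties, plus programmed connections" might look like if you mocked it up in software (every name and parameter below is illustrative, not taken from Boahen's actual design):

CHIPS, ROWS, COLS = 16, 256, 256
total_neurons = CHIPS * ROWS * COLS            # 1,048,576 silicon neurons

# Groups of neurons sharing electrical properties (values are placeholders).
groups = {
    "excitatory": {"threshold_mV": -55.0, "leak": 0.10},
    "inhibitory": {"threshold_mV": -60.0, "leak": 0.20},
}

# Programmed connections between cells: (source, target, weight),
# with each cell addressed as (chip, row, col).
connections = [((0, 10, 20), (1, 10, 20), 0.5)]

print(total_neurons)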


obligatory (5, Funny)

intthis (525681) | more than 7 years ago | (#17993524)

that's great, but will it run linux?

Re:obligatory (0)

Anonymous Coward | more than 7 years ago | (#17993664)

And if you connect it to the Internet, will the silicon get hooked on silicone? [google.com]


Re:obligatory (1)

erc (38443) | more than 7 years ago | (#17994012)

FreeBSD... ;)

Re:obligatory (0, Offtopic)

gbobeck (926553) | more than 7 years ago | (#17994110)

FreeBSD... ;)

Solaris... :-P

Or, for truly sick and evil people, UnixWare... >:-)

One Million Neurons ;) (1)

QuantumG (50515) | more than 7 years ago | (#17993538)

[pinky finger]

Bet you could train that to do some cool stuff... assuming it runs in real time, as advertised. And what kind of back-propagation algorithms are implemented?

Neat though.

Re:One Million Neurons ;) (3, Informative)

tehdaemon (753808) | more than 7 years ago | (#17993672)

As far as I know, brains do not use back-propagation at all. Each neuron changes its own weights based on things like the timing of inputs vs. output, and the various neurotransmitters present.

If all you want are more neural nets like we have been doing then sure - back-propagation algorithms matter. That does not seem to be the goal here though.

T
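
What the parent describes, weight changes driven by the relative timing of a neuron's inputs and its output, is usually modelled as spike-timing-dependent plasticity (STDP). A minimal sketch, with made-up constants:

import math

# Minimal STDP rule: if the input (pre) spike arrives just before the output
# (post) spike, strengthen the synapse; if it arrives just after, weaken it.
# Constants below are illustrative only.
A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # learning rates, time constant (ms)

def stdp_update(weight, t_pre, t_post):
    dt = t_post - t_pre                     # ms
    if dt > 0:    # pre before post -> potentiation
        weight += A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # pre after post -> depression
        weight -= A_MINUS * math.exp(dt / TAU)
    return max(0.0, weight)

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # strengthened
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # weakened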

Re:One Million Neurons ;) (1)

QuantumG (50515) | more than 7 years ago | (#17993720)

meh, back-propagation is a mathematical simplification of neurotransmitters. You really think these silicon neurons are anything other than mathematical simplifications of organic neurons?

Re:One Million Neurons ;) (3, Interesting)

tehdaemon (753808) | more than 7 years ago | (#17993806)

back-propagation is a mathematical simplification of neurotransmitters.

No. Correct me if I am wrong, but back-propagation works by comparing the output of the whole net to the desired output, and tweaking the weights one layer at a time back up the net. In real brains, neurotransmitters either do not travel up the chain more than one neuron, or they simply signal all neurons physically close, whether they are connected by synapses or not (like a hormone). Further, since real brains are recurrent networks (they have lots of internal feedback loops), 'back' doesn't mean much.

T
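
For contrast, here is the layer-by-layer weight tweaking the parent describes, in its most textbook form (one hidden layer, squared error). This is a generic illustration, not anything from the article:

import numpy as np

# Tiny two-layer network trained by backpropagation: compare the output to
# the desired output, then push the error back one layer at a time.
rng = np.random.default_rng(0)
x, target = rng.random((4, 1)), np.array([[1.0]])
W1, W2 = rng.random((3, 4)), rng.random((1, 3))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    h = sigmoid(W1 @ x)                 # forward pass
    y = sigmoid(W2 @ h)
    err = y - target                    # compare to desired output
    dW2 = (err * y * (1 - y)) @ h.T     # output-layer gradient
    dh = W2.T @ (err * y * (1 - y))     # error pushed back one layer
    dW1 = (dh * h * (1 - h)) @ x.T
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1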

Re:One Million Neurons ;) (2, Interesting)

Triynko (1062726) | more than 7 years ago | (#17993962)

Yeah, back-propagation has little to do with brain circuitry. After reading extensively on neurons and their chemical gates and wiring, it's pretty obvious that the basic neural networks that have been implemented look nothing at all like the brain.

The brain learns by weakening existing connections, not by adding new ones. It's logically and physiologically impossible for the brain to know in advance which connections to make in order to store something... it's more of a selection process. This is also why "instruction" methods of teaching fail. Knowledge has to be situated among what each individual has already learned. If it doesn't make sense to you, or you can't draw an analogy to something that has already been etched in your brain over time, it won't mean much to you. Also, brain cells do regenerate, despite the dogma that once they are gone they never come back. They do, but you have to kind of re-learn stuff... you can become a totally different person if you kill off and replace enough of them -- not necessarily a good idea, you might get confused. Then again, what do I know, haha.

Re:One Million Neurons ;) (1)

bloodredsun (826017) | more than 7 years ago | (#17994832)

The brain learns by weakening existing connections, not by adding new ones.
That is incorrect. An increased number of synaptic connections is a classic indicator of increased usage, such as is seen in the hippocampus of individuals who have undergone knowledge-based learning.

It's worse than that. (0)

Anonymous Coward | more than 7 years ago | (#17994026)

The neuroscience community is invested in models that don't describe very well what brains actually do. Let me restate that: the descriptions of observed phenomena are unnecessarily complicated and also incomplete.

This project is destined to be like training elephants to knit. It'll be impressive, but the outcome won't be something you want to wear.

Re:One Million Neurons ;) (1)

krishn_bhakt (1031542) | more than 7 years ago | (#17994594)

"Each neuron changes it's own weights based on things like timing of inputs vs output, and various neurotransmitters present." How do you think these are regulated? My hunch is that they do some kind of backprop.

Depends on What Consciousness Is (0, Flamebait)

reporter (666905) | more than 7 years ago | (#17993956)

Building a working brain from silicon circuits depends on one profound assumption: consciousness is a function of Newtonian physics alone. If this assumption holds, then you could just write a massive computer program that computes the Newtonian equations. Run the program on a multicore processor. The program would become sentient on its own. Attach some peripherals (e.g., a camera, a microphone, a heat sensor, and the like) to the multiprocessor to give sight and senses to the sentient artificial being.

Building a hardware version of that sentient computer program is unnecessarily expensive. A software model of the actual hardware should be sufficient to prove the validity of the idea.

However, some scientists believe that consciousness is not Newtonian. Rather, human consciousness is derived from quantum processes [quantumconsciousness.org].

Re:Depends on What Consciousness Is (3, Insightful)

QuantumG (50515) | more than 7 years ago | (#17994036)

I, like many other engineers, don't give a shit. We just want to solve problems to which there are no simple solutions and "AI" offers some approaches that work.

Leave the philosophy till after we have the science.

Re:Depends on What Consciousness Is (2, Insightful)

zestyping (928433) | more than 7 years ago | (#17994328)

Quantum physics can be mathematically modelled, just as Newtonian physics can. It may be counterintuitive, but it's not magic.

Re:Depends on What Consciousness Is (1)

ganhawk (703420) | more than 7 years ago | (#17994422)

I think the grandparent was trying to imply that consciousness might be a non-deterministic quantum process rather than a deterministic mechanical process. While I don't believe that our brain is a non-deterministic computer, Roger Penrose has written quite a few amusing books on the subject.

Re:Depends on What Consciousness Is (5, Insightful)

Venik (915777) | more than 7 years ago | (#17994472)

How can you build a software model of a process you don't understand? The best hope is to build a hardware approximation of a human brain and hope that, somehow, the same processes start occurring, quantum or otherwise. And if that doesn't work, then you'll have to do some real science.

Re:Depends on What Consciousness Is (1)

zCyl (14362) | more than 7 years ago | (#17994740)

How can you build a software model of a process you don't understand?

The same way it happened the first time. Evolve it.

Re:Depends on What Consciousness Is (1)

pakar (813627) | more than 7 years ago | (#17994476)

Hardware, i.e. specialized chips, is not that expensive... ever heard of FPGAs?

Re:Depends on What Consciousness Is (1)

Adam Wysokinski (782303) | more than 7 years ago | (#17994496)

A software model of the actual hardware should be sufficient to prove the validity of the idea.
Good point, especially since in the real brain there are structures that would be very difficult to simulate in hardware, like the influence of neuromodulators (e.g. substance P and other tachykinins) and hormones on neurons. Most engineers seem to forget that the brain is a system composed of neurons, glial cells and substances circulating in cerebrospinal fluid and blood, all interacting with each other.

Re:Depends on What Consciousness Is (5, Informative)

kestasjk (933987) | more than 7 years ago | (#17994816)

Read "How Brains Think" by William H. Calvin; he's a neurologist and the book goes into lots of detail about how brains think (dur), how they evolved, and the possibility of AI.
He's an expert in the field and you can feel his bitter dislike of "quantum consciousness" proponents through his writing. He writes that it's just saying "we don't know how X works, and we don't know how Y works, but if we say that Y depends upon X then we have one problem instead of two".

Consciousness is built on the interactions of neurons. We understand how neurons work and interact at a low level (from studying the ~50-neuron brains of snails, etc.), and we understand at a high level which regions of the brain do what, but we don't understand the "middle ground".

It's as if we understand the transistor, and logic gates, and we can recognize which part of a chip is the ALU and which is the cache, but we can't recognize an adder circuit or microinstruction translator for what it is.

Quantum physics is certainly involved in the action of transistors but it doesn't explain how they combine to process data.

(On a similar note, I saw a documentary in which one crackpot explained away "spontaneous human combustion" with an unknown quantum particle.)

Re:One Million Neurons ;) (2, Funny)

skeftomai (1057866) | more than 7 years ago | (#17994104)

Would this thing do parallel processing?

so... (4, Funny)

President_Camacho (1063384) | more than 7 years ago | (#17993562)

prostoalex tips us to an article in MIT's Technology Review on a Stanford scientist's plan to replicate the processes inside the human brain with silicon.

So how long until we get AI that's addicted to World of Warcraft?

Re:so... (0)

Anonymous Coward | more than 7 years ago | (#17993638)

I think it'll be a long time until something like this is a 100% perfect replica.

Re:so... (0)

Anonymous Coward | more than 7 years ago | (#17994194)

Only the "Sword of A Thousand Truths" can defeat this Bot

WTF?? (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#17993604)

Kwabena Boahen

Say what?

Re:WTF?? (0, Offtopic)

DigiShaman (671371) | more than 7 years ago | (#17993636)

Obe Wan Kenobi

Re:WTF?? (0)

Anonymous Coward | more than 7 years ago | (#17994238)

Looking at this thread, I'd say that they have these chips hooked into the Internet already. Wow.

Another Challenge (1)

tsarmallon (887588) | more than 7 years ago | (#17993620)

will be mimicking the actual communication between the neurons. One problem that springs to mind is that many neurons will behave differently when presented with different concentrations of the same neurotransmitter. This will be difficult to represent with an 'on-off' electrical switch. I think the idea is great, though. Systems biology and model neural circuits will become excellent model systems for biologists.
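
The graded, concentration-dependent response the parent is worried about is often approximated with a Hill-type dose-response curve rather than a hard switch. A sketch with illustrative constants:

# Graded response to neurotransmitter concentration (Hill equation), compared
# with the simple on/off switch the parent mentions. Constants are illustrative.
def graded_response(conc, ec50=1.0, n=2.0):
    return conc**n / (ec50**n + conc**n)    # fraction of receptors activated

def on_off_response(conc, threshold=1.0):
    return 1.0 if conc >= threshold else 0.0

for c in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(c, round(graded_response(c), 3), on_off_response(c))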

The reverse seems more interesting. (3, Interesting)

Kadin2048 (468275) | more than 7 years ago | (#17993708)

One thing you don't hear much about is what progress, if any, is being made in interfacing electronic systems with biological ones, and growing biological circuits. Perhaps our understanding of biological computation and storage simply isn't complete enough to make such a system practical, even if we were able to somehow interface a clump of neurons to the outside world electronically, but it certainly seems like the data storage capacity of biological systems is far greater (per mass/volume) than anything devised artificially. Although, I suppose it's impossible to equate, since it's not clear how 'compressed' information is when it's encoded by the mammalian brain as memories.

Re:The reverse seems more interesting. (2, Interesting)

QuantumG (50515) | more than 7 years ago | (#17993820)

I thought Interface [wikipedia.org] was a remotely interesting read... at least the technological aspects; the commentary on media-dominated elections was just depressing. They extract some neural tissue from a subject, grow a bunch of neurons, interface them to a chip with a wireless transmitter, then reinsert them into the brain. Then, with some training, the chip can replace functions of the brain destroyed by stroke or cancer or whatever. The data dump of the communication between the neurons and the chip is the really interesting part... you could conceivably learn a lot about subjective experience if you had a human subject and a lot of data. Much more than, say, an EEG or an MRI, since you could record data during normal interaction with the world, 24 hours a day.

Re:Another Challenge (1)

wframe9109 (899486) | more than 7 years ago | (#17993860)

Exactly, and there are many properties of neurons, neural transmission and the nervous system that we simply don't understand yet. Heck, one of the fundamental principles of neuroscience was refuted last year (that an action potential must be all or none).

Re:Another Challenge (2, Insightful)

NixieBunny (859050) | more than 7 years ago | (#17994042)

An interesting aspect of the brain is that it may be possible to build circuitry that mimics its behavior without understanding that behavior. There are many complex systems (collections of simple parts) that exhibit coherent behavior you just wouldn't expect. Swarms of locusts are one example. The insect robots that learn how to walk every time you turn on their power are another.

2^20 neurons? That's wayyyy too many (3, Funny)

Eternal Vigilance (573501) | more than 7 years ago | (#17993624)

...to accurately model most American thought processes.

Gotta go - American Idol's back on.


Dave, my mind is going. I can feel it...

Re:2^20 neurons? That's wayyyy too many (1)

cosmocain (1060326) | more than 7 years ago | (#17994412)

...to accurately model most American thought processes.


If only this were a typical American problem. Stupidity is really worldwide. Try locking it up in some nation-state, anyone?

Only one mibiNeuron? (0)

Anonymous Coward | more than 7 years ago | (#17993652)

16*256*256 = 1048576 hardware neurons.

Maybe that project would have made sense in the 1970s, but today this can be simulated in software at breakneck speed.

Re:Only one mibiNeuron? (3, Insightful)

tehdaemon (753808) | more than 7 years ago | (#17993716)

Are you sure about that? FTA:

"We can currently do small simulations of hundreds to thousands of neurons, but it would be great to be able to scale that up," he says.

A 2.0 GHz dual-core CPU running 2^20 neurons in the net at 100 Hz gets about 40 clock cycles per neuron per update... Somebody check my math, please.

T
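
Working through that estimate with the poster's own assumptions (2 cores at 2.0 GHz, 2^20 neurons, 100 updates per second):

# Rough budget: cycles available per neuron per update, under the parent's
# assumptions. Ignores memory bandwidth, which in practice dominates.
cycles_per_second = 2 * 2.0e9            # dual core at 2.0 GHz
updates_per_second = 2**20 * 100         # 2^20 neurons updated at 100 Hz
print(cycles_per_second / updates_per_second)   # ~38 cycles per neuron-update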

BOINC (3, Insightful)

Gary W. Longsine (124661) | more than 7 years ago | (#17993804)

I would think a BOINC project might produce enough muscle to get a really big brain going. Imagine a BOINC [berkeley.edu] cluster of...

;-)

Re:Only one mibiNeuron? (1)

joke_dst (832055) | more than 7 years ago | (#17994264)

40 clock cycles wouldn't do you much good. A single neuron can have thousands (or millions?) of connections to other neurons.

They are not as dumb as the fake neurons we all played with in AI class at college...

Re:Only one mibiNeuron? (2, Insightful)

wall0159 (881759) | more than 7 years ago | (#17994272)


The calculations involve adjusting the weights of the connections between neurons, which in a densely connected net scale roughly with the square of the number of neurons. This is because each neuron typically has connections to many other neurons.

So, your math might be right, but your assumptions are wrong. :-)

Clarifying - 40 cycles is NOT enough. (2, Informative)

tehdaemon (753808) | more than 7 years ago | (#17994394)

Two out of the three replies to my comment thought that I meant 40 cycles was enough per neuron. I guess I was not clear enough.

40 cycles is nowhere near enough. 40 inputs for a real neuron is small, and 40 cycles would barely let you sum the inputs. To heck with adjusting weights, you can't even run the thing in real time. The AC I was replying to said that this could be simulated in software at breakneck speed. He is wrong.

T

Re:Only one mibiNeuron? (3, Informative)

naoursla (99850) | more than 7 years ago | (#17994686)

Now add a bunch of connections between all of those neurons. As you approach fully connecting the network, the time complexity to compute one time-step approaches O(N^2) where N is the number of neurons.

2^20 * 2^20 == 2^40. Ignore memory cache constraints for a moment and say each update takes 1 clock cycle. Since we are dual core we can get 2 updates per cycle. Each clock cycle takes 500 ps. 2^40 * 500 ps / 2 means each complete brain update takes about 275 s on your computer.
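
The same estimate, spelled out with the poster's assumptions (fully connected net, one cycle per synapse update, 2 GHz dual core):

# Fully connected net: 2^20 neurons -> 2^40 synapse updates per time step.
synapse_updates = 2**40
seconds_per_cycle = 500e-12              # 2 GHz clock
cores = 2
print(synapse_updates * seconds_per_cycle / cores)   # ~275 s per time step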

Re:Only one mibiNeuron? (0)

Anonymous Coward | more than 7 years ago | (#17993798)


OK, you're smarter than the guys at MIT.

Re:Only one mibiNeuron? (0, Redundant)

MillionthMonkey (240664) | more than 7 years ago | (#17993892)

1048576 neurons is enough for anybody.

Go to Hollywood (3, Funny)

syousef (465911) | more than 7 years ago | (#17993654)

Lots of silicon in Hollywood... oh, you said BRAINS, not BREASTS.

simulations? (1)

Takichi (1053302) | more than 7 years ago | (#17993678)

Does anyone know what they mean by simulations? What are they trying to do?

Better uses of silicon (1, Offtopic)

dotancohen (1015143) | more than 7 years ago | (#17993682)

Silicon tits were better....

Hardly something new... (5, Interesting)

Anonymous Coward | more than 7 years ago | (#17993726)

This is hardly something new. Intel had a chip a number of years ago, called ETANN that was a pure-analog neural network implementation. Another cool aspect of this chip was that the weight values were stored in EEPROM-like cells (but analog) so the training of the chip would not be erased if it lost power.

But the whole technology of neural networks almost pre-dates the von Neumann architecture. Early analog neural networks were constructed in the late 1940s.

Not only are these simulations nothing new, but they are in everyday products. One of the most common examples is the misfire detection mechanism in Ford vehicle engine controllers. Misfire detection in spark ignition engines depends on so many variables that neural networks often perform better than hard-coded logic (although not always; just like their wetware counterparts, they can be "temperamental").

There are several other real-world neural network applications (autofocusing of cameras for example).

Ahh the hidden magic of embedded systems...

Re:Hardly something new... (1)

dacut (243842) | more than 7 years ago | (#17993982)

This goes beyond neural networks. This is actually simulating the behavior of physical neurons, but in silicon. Physical nerve cells have extremely nonlinear behavior in various regimes (hyperpolarization, depolarization, etc.). To what extent is this necessary for the complex behavior exhibited by animals? Frankly, we don't know. This research will hopefully answer some questions (and raise a host of others in the process).

Neural networks are a simplification of the actual electrical response of a neuron. Or, to put it another way, we "dumb down" neural networks to the point where we can grasp how the math involved in backpropagation, etc., works out. This isn't a criticism -- in fact, it's a useful abstraction -- but we've only scratched the surface of neural networks in this manner.

Boahen is a graduate of Carver Mead's lab at Caltech. Carver spent a lot of time trying to replicate the behavior of neurons, the cochlea, and the retina in silicon. If you're an electrical engineer, take a look at his (rather readable) book, Analog VLSI and Neural Systems [amazon.com].
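
To make the contrast concrete, a leaky integrate-and-fire neuron is about the simplest model that still spikes, and it already behaves quite differently from a weighted-sum-plus-sigmoid unit. A sketch with made-up constants (not anything from Boahen's chips):

# Leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
# integrates input current, and emits a spike when it crosses threshold.
# All constants are illustrative.
V_REST, V_THRESH, V_RESET, TAU_M, DT = -65.0, -50.0, -65.0, 20.0, 0.1  # mV, ms

def simulate(input_current, steps=1000):
    v, spikes = V_REST, []
    for t in range(steps):
        dv = (-(v - V_REST) + input_current) / TAU_M
        v += dv * DT
        if v >= V_THRESH:          # threshold crossed: emit a spike and reset
            spikes.append(t * DT)
            v = V_RESET
    return spikes

print(len(simulate(20.0)))   # stronger input drive -> more spikes
print(len(simulate(30.0)))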

Re:Hardly something new... (0)

Anonymous Coward | more than 7 years ago | (#17994178)

Well, perhaps my post was an oversimplification, but commercial neural networks have gotten far more advanced as densities have improved (especially thanks to modern DSPs). The simplifications were driven more by technical limitations -- and I suspect we still have a long, long way to go before approaching the densities achieved by the brain.

I do happen to have Carver's book -- I wonder about some of the early experiments done in the 1940s, though. I remember reading about some very early experiments involving much more precise electrical models than the simple backprop used today (alas, I can't seem to find them).

This is interesting work but hardly a new concept; I would say this is simply a refinement. We are still very much limited by interconnect densities more than anything else. As you increase the layers in the network, the physical routing becomes a very complex problem.

That's one reason why the brain has that unique look about it.

Re:Hardly something new... (1)

rm999 (775449) | more than 7 years ago | (#17994224)

I think you missed the point. While I agree this is not revolutionary, it is different in a few ways:
-It's a neuroscience project more than a machine learning project (simulating the brain, not a function to be learned)
-It's trying to mimic the *hardware* of the brain; it's not software written for a general purpose CPU
-It's probably more powerful

I frankly think this project is stupid, because it's the connections in the brain that make intelligence, not the neurons. We don't understand the connections and how they work. But I guess we'll see if it works.

Re:Hardly something new... (1)

timeOday (582209) | more than 7 years ago | (#17994286)

I agree. There is a lot of fuss over neuroscience right now, as if it is the solution to AI. I think not, just as birds were, if anything, a misdirection in inventing the airplane. It would have been natural, 150 years ago, to assume that the very first artificial intelligence would be a model of the brain. That didn't happen, and still shows no sign of happening. Many seem to assume that computer science is not "fundamental" science, but neuroscience is. Why? To me it is obvious that neuroscience owes far more to computer science than the other way around - and that is what "fundamental" really means. From a functional standpoint, the brain is just one implementation of a computer. Finally, just what is the point of custom silicon for this project?

Re:Hardly something new... (0)

Anonymous Coward | more than 7 years ago | (#17994642)

ETANN was hardly a software simulation. Although the behavior isn't identical, it is an analog implementation, not a software simulation. Google around for the ETANN chip. Too bad it seems dead.

Hello, world? (1)

melatonin (443194) | more than 7 years ago | (#17993764)

What the heck do you put in the boot ROM for this kind of thing?

Re:Hello, world? (5, Funny)

Anonymous Coward | more than 7 years ago | (#17993826)

A soul...

Re:Hello, world? (1)

d474 (695126) | more than 7 years ago | (#17993890)

The scary part will be when the computer outputs, "Hello World," but it wasn't programmed to...

Yikes.

Re:Hello, world? (1)

Andrew Kismet (955764) | more than 7 years ago | (#17994590)

Shh. Hollywood might hear you.

Re:Hello, world? (1)

pakar (813627) | more than 7 years ago | (#17994636)

No no... the scary part would be if it says "Please don't kill me" when you are about to turn it off!

Re:Hello, world? (0)

Anonymous Coward | more than 7 years ago | (#17993928)

you don't turn it off.

Re:Hello, world? (2, Funny)

Patrik_AKA_RedX (624423) | more than 7 years ago | (#17994100)

Same thing that's already in a human brain at birth:
void SuckAtNipple();
void CryForAttention();
void Shit();

Re:Hello, world? (0)

Anonymous Coward | more than 7 years ago | (#17994154)

Don't you mean:
Nutrition SuckAtNipple();
Socket CryForAttention();
Shit Shit();

Re:Hello, world? (0)

Anonymous Coward | more than 7 years ago | (#17994702)

Hell, my girlfriend says that's still all I know how to do.

Just imagine..... (1)

rune2 (547599) | more than 7 years ago | (#17993776)

SLI on that puppy! (obligatory "Beowulf cluster of these" comment)

Re:Just imagine..... (1)

jpardey (569633) | more than 7 years ago | (#17993862)

Meanwhile, at Beowulf Art School, 4 students are working on an animation project, each of them drawing 1/4 of each frame.

Most ambitious? Most ambitious???? (1)

jdoeii (468503) | more than 7 years ago | (#17993784)

This is the most ambitious??? What about Markram & IBM [forbes.com]? They must be just fooling around with that Blue Gene (actually I do think they are fooling around, but that's beside the point). What about Izhikevich [nsi.edu]? He simulated just a puny 100 billion neurons. That's *nothing* compared to this "most ambitious" million.

Re:Most ambitious? Most ambitious???? (1)

tehdaemon (753808) | more than 7 years ago | (#17993936)

From your second link:

One second of simulation took 50 days on a beowulf cluster of 27 processors (3GHz each).

The chips proposed would probably be able to run faster than real-time. Far fewer neurons at far faster speeds. Does that help answer your question?

T

Re:Most ambitious? Most ambitious???? (1)

wanax (46819) | more than 7 years ago | (#17994058)

Would they, though? Izhikevich has taken a lot of time to try to get neurons down to a 'reasonable' computational size, using a bunch of tricks from dynamical systems. This system may approach those dynamics, but it wasn't clear from the article. Even then, that still isn't a general neuronal model. 'Regular' pyramidal cells often receive input from ~10-20k other cells, and there's no general description of which have an 'active' dendritic tree (i.e., one with areas that can spike towards the soma). There are plenty of other neurons, such as pyramidal neurons in the hippocampus or Purkinje cells in the cerebellum, that we KNOW have active dendritic trees and perform some pretty complex processing. And with a passive system, there's no reason for special processors; GPUs can do the computations just as well as any specialized chip (I know it isn't published yet, but check out things like http://cns.bu.edu/~elddm/ [bu.edu] for examples of neural networks on GPUs).

Not in this lifetime (2, Interesting)

wframe9109 (899486) | more than 7 years ago | (#17993814)

The study of the brain is one of the youngest sciences in terms of what we know... But from my experience, the people in this field realize that even rough virtualization of the brain won't happen for a long, long time. Why these people are so optimistic is beyond me.

But maybe I'll eat my words. Doubtful.

Re:Not in this lifetime (1)

rolfwind (528248) | more than 7 years ago | (#17994032)

Since the experts know so little, maybe we shouldn't put so much weight on their words?

What'll be new? (4, Informative)

wanax (46819) | more than 7 years ago | (#17993916)

I have to wonder what the purpose is. You can model simplified 'point' neurons, and various aggregates that can be drawn from them (e.g., McLoughlin's PDEs), or you can run a simplified temporal dynamic (e.g., Grossberg's 3D LAMINART) and easily include 200k+ neurons in the model to capture a broad range of function. For those who would like to run more detailed models of individual neuronal dynamics, there is Markram's project simulating a cortical column with compartmental models, or what Izhikevich is doing with delayed dynamic models.

Although this setup may be able to run ~1 million neurons in total, it would seem that with 16 chips of 256x256 each, the level of interaction would be limited. The article gives no indication whether these are the more complicated (and realistic) compartmental models of neurons that can sustain realistic individual neuronal dynamics (the kind that Izhikevich, Markram and McLoughlin, for example, have spent a lot of time trying to simplify), or whether this is just running point-style neurons a bit faster than is traditional. And I have to wonder: if these chips can't do compartmental models, why not just run this on a GPU?

I checked out this guy's webpage, and he seems smart, but this project is years away from contributing. I wonder, especially with the Poggio paper yesterday, when the best work being done at MIT in neuro/AI right now is probably in the Torralba lab, whether Slashdot editors might want to find some people to vet the science submissions just a tad.
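
For the curious, the Izhikevich "simple model" mentioned in this thread reduces a spiking neuron to two coupled equations plus a reset rule. A minimal sketch using the commonly quoted regular-spiking parameters (the time step and input current here are illustrative):

# Izhikevich's two-variable spiking neuron model (regular-spiking parameters).
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0
dt, I = 0.5, 10.0           # ms time step, constant input current
spikes = []
for step in range(2000):    # one second of simulated time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:           # spike: reset membrane, bump recovery variable
        spikes.append(step * dt)
        v, u = c, u + d
print(len(spikes))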

Ohh.. (2, Funny)

snsr (917423) | more than 7 years ago | (#17993938)

August 29, two thousand seven.

Article is confusing. (1)

Jartan (219704) | more than 7 years ago | (#17993952)

I was under the impression that neurons use neurotransmitters to communicate information between two cells, but this article implies electrical signals do that. It would be nice to read something on this subject that explains which mechanism transmits which information.

Re:Article is confusing. (1)

Breakfast Pants (323698) | more than 7 years ago | (#17994164)

Neurotransmitters trigger ion flows, which do create electrical signals.

The way I remember it from biology class (0)

Anonymous Coward | more than 7 years ago | (#17994202)

The way I remember this working is that ion exchange is what causes the nerve signal to propagate through a single neuron, and the communication between neurons happens via neurotransmitters being released into the gap between the neurons.

So, when you slam your hand in a door, a signal travels as an electrical impulse the whole length of the nerve from your hand to the spinal column, then it crosses the gap to your spine as a cloud of neurotransmitters, then shoots up your spine as an electrical impulse to the brain, where it goes through the same process over and over again, until it triggers a response somewhere in your brain that makes you say a bad word.

Much more information here:
http://en.wikipedia.org/wiki/Neuron [wikipedia.org]
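
A toy version of that two-stage picture (electrical integration inside each cell, a chemical hand-off between cells), with purely illustrative numbers:

# Two-neuron chain: neuron A integrates its input electrically; when it fires,
# a synapse (standing in for the neurotransmitter step) delivers a delayed,
# weighted kick to neuron B.
THRESHOLD, WEIGHT, DELAY = 1.0, 0.6, 3    # arbitrary units, time steps

def run(input_drive, steps=50):
    vA = vB = 0.0
    pending = []                           # (arrival_time, amount) in transit
    fired_B = []
    for t in range(steps):
        vA += input_drive
        if vA >= THRESHOLD:                # A spikes: release "transmitter"
            pending.append((t + DELAY, WEIGHT))
            vA = 0.0
        vB += sum(amt for arr, amt in pending if arr == t)
        pending = [(arr, amt) for arr, amt in pending if arr > t]
        if vB >= THRESHOLD:
            fired_B.append(t)
            vB = 0.0
    return fired_B

print(run(0.3))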

*cough* (1)

SeaDour (704727) | more than 7 years ago | (#17993970)

I, for one, welcome our new silicon-brain overlords.

I can see it now... (1)

katchins (180997) | more than 7 years ago | (#17993974)

when there is a computer error...

HAL's shutdown from http://www.imdb.com/title/tt0062622/quotes [imdb.com]

HAL: I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a... fraid.

Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you'd like to hear it I can sing it for you.

Dave Bowman: Yes, I'd like to hear it, HAL. Sing it for me.

HAL: It's called "Daisy."

[sings while slowing down]
HAL: Daisy, Daisy, give me your answer do. I'm half crazy all for the love of you. It won't be a stylish marriage, I can't afford a carriage. But you'll look sweet upon the seat of a bicycle built for two.

[fade to blue screen of death]

yay, one million neurons (1)

adrianmonk (890071) | more than 7 years ago | (#17994014)

About the only thing impressive about 1 million neurons is that it is slightly more than the square root of the number of neurons in the human brain.

Wake me up after the exponential growth has been going on a little while longer and they have made up the five or so orders of magnitude they need to make it worthy of the term "brain".

Re:yay, one million neurons (1)

Dersaidin (954402) | more than 7 years ago | (#17994614)

If this experiment shows interesting results, then I'm sure someone will go ahead and build one with more neurons. If not, then I don't see how the number of neurons will affect anything.

Re:yay, one million neurons (1)

drgonzo59 (747139) | more than 7 years ago | (#17994632)

You might have a while longer to sleep, then. Just having the same number of neurons as the brain doesn't mean you'll have a brain. It is like saying that as long as we have the four DNA nucleotides (A, C, T, G) and all the amino acids, we can just throw them together and get a biological organism.


The brain does not start as a blank slate; it is already pre-programmed to do many things, and it is that wiring of neurons and their initial states that needs to be decoded.


In addition to that, every cell in the brain, just like any other living cell, is so complicated that we cannot even simulate one cell very well. It is believed that there is more to neurons than pure on/off output switching once they reach a certain potential. Neurotransmitter concentrations, as well as other chemicals around the neurons, play a role in their behavior. Now imagine simulating that for hundreds of billions of cells.


By the way, the most amazing thing about the human brain is not just its capacity for thought, emotions, imagination and other such stuff, but also the _efficiency_ of it all. The brain runs at a steady temperature of 37°C and consumes less than 100 W of power. Compare that to your computer's CPU, which would heat up to 100°C doing nothing more than adding numbers.
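
Taking the poster's own figures at face value (under 100 W, on the order of 10^11 neurons), the per-neuron power budget is striking:

# Back-of-the-envelope energy budget, using the parent's own numbers.
brain_power_watts = 100.0            # the parent's generous upper bound
neurons = 1e11                       # order-of-magnitude count for a human brain
print(brain_power_watts / neurons)   # ~1e-9 W, about a nanowatt per neuron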
 

same old (1)

cong06 (1000177) | more than 7 years ago | (#17994018)

But it's just going to be a massive computer... a large processor, etc. Or did I miss something?

Re:same old (0)

Anonymous Coward | more than 7 years ago | (#17994240)

yes, you did

ahem (0)

Anonymous Coward | more than 7 years ago | (#17994038)

I for one welcome our new silicoid masters.

With their superior brain power, they may be able to devise a new way to say, "I for one welcome our new X masters."

Naturally Intelligent Systems (5, Interesting)

TheCouchPotatoFamine (628797) | more than 7 years ago | (#17994066)

For those interested in this field, may I suggest a book, Naturally Intelligent Systems? It's slightly older, but it explains a wide gamut of neural networks without a single equation, and manages to be funny and engaging at the same time. It is one of the three books that changed my life (by its content and ideas alone - I'm not otherwise into AI). Highly recommended: Naturally Intelligent Systems on Amazon [amazon.com]

Re:Naturally Intelligent Systems (0)

Anonymous Coward | more than 7 years ago | (#17994338)

I'm curious, would you care to share the other two as well? :)

Cylons anybody? (0)

Anonymous Coward | more than 7 years ago | (#17994088)

First invented to help their masters.
Then they killed their masters.
The war between humans and Cylons began.


This doesn't make sense to me... (1)

skelly33 (891182) | more than 7 years ago | (#17994172)

Why would you experiment with neural logic in hardware when software is infinitely scalable and programmable, and arguably more valuable in the research of neural networks? Of course software is a degree slower in response time, but speed is not of the essence for researching the "how" of neural nets.

I would think that in the hardware world, generally you would want a working software model and then duplicate it with the more expensive hardware for performance. The same principle applies when ASIC engineers design in the less expensive, disposable FPGA format and, when they get something working, eventually migrate the design to ASIC technology for increased performance.

It doesn't seem like there is discovery value in the hardware when the discovery should have been made in advance through software and at dramatically reduced cost. I have a feeling this guy's just trying to make headlines for himself...

Re:This doesn't make sense to me... (2, Insightful)

TheCouchPotatoFamine (628797) | more than 7 years ago | (#17994222)

The simple reason is that software cannot compute every iteration in parallel. Imagine light beams, for instance: if you were to "sum" the intensity of several beams at a single photodiode, it would occur simultaneously as a single operation. Software requires, regardless of the number of possible processors within reasonable (read: current technological) limits, an iterative approach, such that during every stage of calculation each neurode (neuron + node = neurode) has to be calculated in order, drastically slowing the network down. Since electrical currents can be summed simultaneously (in an analog circuit, which this undoubtedly is at some stage), it allows for the same type of "instantaneous" calculation that your brain currently enjoys. That's why it's so important to do it in hardware, and why optical techniques, not electrical ones, ultimately hold far more promise. It's all in the book I recommended a post or two above...
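
The difference the parent is pointing at, in miniature: updating each neurode's inputs one at a time versus letting a parallel substrate do them all at once (numpy here is only a stand-in for analog summation):

import time
import numpy as np

# Toy comparison: per-neurode Python loop vs one vectorized operation.
N = 1000
weights = np.random.rand(N, N)
activity = np.random.rand(N)

t0 = time.perf_counter()
looped = [sum(weights[i, j] * activity[j] for j in range(N)) for i in range(N)]
t1 = time.perf_counter()
vectorized = weights @ activity            # all summations done "at once"
t2 = time.perf_counter()
print(f"loop: {t1 - t0:.3f} s   vectorized: {t2 - t1:.5f} s")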

Re:This doesn't make sense to me... (1)

AndOne (815855) | more than 7 years ago | (#17994682)

1) They're not researching neural networks in the classic sense of AI research. They're not trying to come up with an approximate model. Rather they're trying to just design a neuronal test bed and connect it similarly to the brain and see what happens. No software models to speak of really.

2) Writing software to model these accurately is actually much harder than just doing it in hardware due to the massively parallel nature of the computations and the neural connections. They aren't just creating layers and doing back propagation like you would in a more standard NN.

3) Sometimes it's just better to do something the hard and right way than it is to try to build things up in stages. Further, it's not like they won't design the chips (or haven't already) in software layout tools and simulate the hell out of them there.

Not going to work.. (0)

takochan (470955) | more than 7 years ago | (#17994324)

The brain is not an electrically based computing system; it is a quantum-based computing system. That is how the 'connection' between the physical world and the 'thought/mind' world is made.

So any artificial silicon 'brain' will have to behave appropriately (i.e. quantum-mechanically) for such a 'simulation' (or any 'thought'-based computation) to work, or at least to yield any meaningful results.

Re:Not going to work.. (0)

Anonymous Coward | more than 7 years ago | (#17994350)

Bzzzz, wrong, but thanks for playing the "I don't understand it so I'll describe it with other stuff I don't understand" game. Next question, "Is thunder actually gods bowling?".

not the only game in town (1)

TheCouchPotatoFamine (628797) | more than 7 years ago | (#17994354)

Certainly it has some characteristics like that, but to say the only possible usable system is constrained to that design is to miss the point...

We aren't trying to duplicate the human mind any more than a car is trying to duplicate a horse, and there are several variations on 'intelligent' that don't even come close to the exact way a human mind works. Perhaps you should meditate on what it means to be 'useful'...

Re:Not going to work.. (0)

Anonymous Coward | more than 7 years ago | (#17994402)

Applying quantum theory to the brain to understand consciousness is speculative at best. Very, very few cognitive scientists and philosophers of mind think that this is a good approach. See Dennett's 'Consciousness Explained' for more grounded work, or his 'Sweet Dreams' for something more recent. You could also see Chalmers' 'The Conscious Mind' for a dualist approach. Even the more strictly neuroscientific people working on the mind, like Baars and Churchland, don't bother with accounts like these. Don't post left-field declarative shit like "it IS a quantum based computing system" without any background, or at least without explaining that it is far from the consensus.

Why? (2, Funny)

Quiet_Desperation (858215) | more than 7 years ago | (#17994466)

Look at the rubbish the human brain generates. Ideology. Irrationality. Depression. Religion. Politics. Reality TV.

You really want processors that need weekly visits from an Eliza program and iZoloft patches?

"Sorry, Bob. I can't run those projections now. The supercomputing cluster is in a funk over the American Idol results."

Y'all think AI is going to be so great and a bag of chips, too.

Wrong way (0)

Anonymous Coward | more than 7 years ago | (#17994468)

The right way, of course, is to create a software program that mimics the inner workings of the brain.

Re:Wrong way (1)

Datamonstar (845886) | more than 7 years ago | (#17994546)

I hope they don't try to mimic yours!

More than just modeling the brain (4, Insightful)

AndOne (815855) | more than 7 years ago | (#17994608)

Having been a fan of neuromorphic engineering for several years now (note: I'm not an active researcher, but I pretend some days :) ), one of the major advantages of neuromorphic designs isn't necessarily their ability to model biological systems but the fact that the devices are extremely low power. When modeling neurons in silicon (at least back in the day of Carver Mead's work on cochlea and retina stuff, and I doubt it's changed too much, but I could be wrong), the transistors would run in subthreshold mode (basically leakage currents, so OFF), since those current curves modeled the expected neural response curves. One of Boahen's stated goals (at least on his website when he was at Penn) was to reduce power consumption and improve processing power for problem solving via these techniques. His lab has been in Scientific American a couple of times in the last few years for work on accurately modeling neuronal spiking in hardware, too. I have the articles, but not at hand, so I can't cite them at the moment; they were fun reads.

So in summary, it's more than just modeling the brain. It's about letting biology inspire us to make better and more efficient computing systems.
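
Roughly, the subthreshold trick works because below threshold a MOSFET's drain current grows exponentially with gate voltage, much as ion-channel activation grows exponentially with membrane voltage, so tiny "off" currents are enough to mimic neural response curves. A back-of-the-envelope sketch with typical textbook values (not Boahen's actual device parameters):

import math

# Subthreshold ("weak inversion") MOSFET current: exponential in gate voltage,
# ignoring drain-voltage effects. Constants are illustrative.
I0 = 1e-15           # leakage prefactor, amps
n, Vt = 1.5, 0.025   # slope factor, thermal voltage at room temperature (V)

def subthreshold_current(vgs):
    return I0 * math.exp(vgs / (n * Vt))

for vgs in (0.20, 0.25, 0.30, 0.35):
    print(f"Vgs={vgs:.2f} V  Id={subthreshold_current(vgs):.2e} A")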

silica pathways (1)

Frogular (961545) | more than 7 years ago | (#17994644)

Let's hope this model isn't affected by the radiation around Ragnar Anchorage.