
Supercomputer Simulates Human Visual System

Soulskill posted more than 6 years ago | from the i-can-see-clearly-now dept.

Supercomputing

An anonymous reader writes "What cool things can be done with the 100,000+ cores of the first petaflop supercomputer, the Roadrunner, that were impossible to do before? Because our brain is massively parallel, with a relatively small amount of communication over long distances, and is made of unreliable, imprecise components, it's quite easy to simulate large chunks of it on supercomputers. The Roadrunner has been up only for about a week, and researchers from Los Alamos National Lab are already reporting inaugural simulations of the human visual system, aiming to produce a machine that can see and interpret as well as a human. After examining the results, the researchers 'believe they can study in real time the entire human visual cortex.' How long until we can simulate the entire brain?"


I AM A JEW (-1, Troll)

twiiter (1307333) | more than 6 years ago | (#23785065)

Therefore you are under my control hahaha!

Just one word... (2, Funny)

KGIII (973947) | more than 6 years ago | (#23785089)

Impessive.

Re:Just one word... (0)

Anonymous Coward | more than 6 years ago | (#23785309)

Not really. It just called me ugly.

Re:Just one word... (5, Funny)

SupplyMission (1005737) | more than 6 years ago | (#23785369)

One word? That makes your spelling error rate 100%.

Re:Just one word... (5, Funny)

KGIII (973947) | more than 6 years ago | (#23785443)

That's only 10% lower than my math error rate.

Re:Just one word... (0)

Anonymous Coward | more than 6 years ago | (#23785513)

-giggling-

Re:Just one word... (0, Redundant)

nuzak (959558) | more than 6 years ago | (#23786031)

I'm impessed!

What cool things can be done? (1, Offtopic)

smitty_one_each (243267) | more than 6 years ago | (#23785091)

If it can help me find my socks...

troll (-1, Troll)

Anonymous Coward | more than 6 years ago | (#23785103)

trolly

Re:troll (0)

Anonymous Coward | more than 6 years ago | (#23786117)

library

I suck at remembering faces (1)

oldspewey (1303305) | more than 6 years ago | (#23785139)

Just the way I'm wired I guess, but I've always been somewhat handicapped when it comes to remembering and recognizing faces. I've been told I could never become a politician because of this (no great loss I suppose).

How long until I can plug a computer chip into the back of my head to rectify this?

Re:I suck at remembering faces (1)

Verteiron (224042) | more than 6 years ago | (#23785401)

25 to 30 years, I'd guess.

Re:I suck at remembering faces (1)

jamiesan (715069) | more than 6 years ago | (#23785419)

Thou shalt not make a computer in the likeness of a human mind! Watch out, the Butlerian jihadists will get you.

Re:I suck at remembering faces (2, Informative)

klasikahl (627381) | more than 6 years ago | (#23785655)

You likely suffer from mild prosopagnosia.

New goal... (5, Insightful)

dahitokiri (1113461) | more than 6 years ago | (#23785141)

Perhaps the goal should be to make the visual system BETTER than ours?

Re:New goal... (0, Troll)

twiiter (1307333) | more than 6 years ago | (#23785155)

This will aid our production of super human for Jew World Domination!

Oh, this is a fine how do you do (0)

Anonymous Coward | more than 6 years ago | (#23785487)

Now twitter's enemies are using sock puppets too. I'm convinced it's all one guy. One set of sock puppets pretends to hate him, the other sock puppets support him, it's all just a pathetic ruse to get attention.

Just remember, if you see someone dissing on twitter, that is still twitter, trying to drum up more attention. Ignore him and maybe he'll get bored and go away.

And that's all I'll say on the matter because I don't want to get any closer to this one man cluster fuck.

Re:Oh, this is a fine how do you do (1)

gardyloo (512791) | more than 6 years ago | (#23785735)

Just remember, if you see someone dissing on twitter, that is still twitter, trying to drum up more attention. Ignore him and maybe he'll get bored and go away.
Mod parent up!

      Heeeyyyy.... wait.....

Re:New goal... (5, Interesting)

spun (1352) | more than 6 years ago | (#23785307)

Something like a Mantis Shrimp? [wikipedia.org] Some species can detect circularly polarized light; each stalk-mounted eye, on its own, has trinocular vision; they have up to sixteen different types of photoreceptors (not counting the many separate color filters they also have) to our four; and the information is transmitted from the retina in parallel, not serially down a single optic nerve like ours.

These are also the little dudes who can strike with the force of a .22 caliber bullet, fast enough to cause cavitation and sonoluminescence.

Go Super Shrimp!

Re:New goal... (4, Insightful)

CodeBuster (516420) | more than 6 years ago | (#23786183)

You do realize that such an ocular system, which undoubtedly works well for the limited needs of the shrimp, may have accompanying disadvantages for complex land-based life forms such as humans. The human vision system, while not optimized for certain specialized uses such as the aforementioned shrimp's, is nevertheless a very decent general-purpose system that has served our species well for eons. It is likely that our current system of vision, especially when compared to the possible trade-offs for increased capabilities (less general intelligence as more of the brain and nervous system is devoted to complex autonomous image processing, for example), is fairly close to optimal given the other constraints of our bodies.

Besides, for those situations where a particular aptitude is useful but not always desirable, night vision for example, human intelligence has allowed us to construct external enhancement devices that we can turn on or off at will. Animals which have developed night vision naturally as part of a nocturnal lifestyle cannot turn that feature on or off and thus are at a disadvantage during the daytime, whereas humans are more generally adaptable. It is fairly clear that innate intelligence is among the very best, if not the best, of the natural abilities that have developed under evolutionary pressure. How else to explain why humans have dominated the earth and essentially escaped the natural system that once controlled them?

Re:New goal... (5, Funny)

spun (1352) | more than 6 years ago | (#23786343)

Dude, calm down. I wasn't dissing humanity by mentioning that mantis shrimp have better vision, okay?

"Hew-mans! Hew-mans! Hew-mans! we're number one! we're number one!"

Feel better now?

Re:New goal... (2, Informative)

Illserve (56215) | more than 6 years ago | (#23786197)

and the information is transmitted from the retina in parallel, not serially down a single optic nerve like ours.

Nope, not true. Practically everything our brain does is parallel, and this is definitely true of the optic nerve.

It's certainly a major bottleneck in the system; a lot of compression gets done by the retina before the signal is transmitted, but that's because the optic nerve is long and has to move with the eyeball.

Yes, I think any mantis shrimp capable of self-reflection would consider the human eye an upgrade (except for the fact that it's too big for the little buggers to swim with).

Re:New goal... (4, Funny)

spun (1352) | more than 6 years ago | (#23786437)

Christ on a fucking pogo stick, another one? What's with people who can't admit that maybe, just maybe, humans aren't the best at everything?

Mantis shrimp don't have a blind spot, because their eyes aren't like the stupid human eyes where the optic nerve attaches to the front! Nyah nyah nyah!

Here's the quote I was referring to:

The visual information leaving the retina seems to be processed into numerous parallel data streams leading into the central nervous system, greatly reducing the analytical requirements at higher levels.
As far as I know, there is only a single data stream per eye in human vision. It may be transmitted in parallel, but there is only one image created for each eye. Not so for the vastly superior mantis shrimp. We have trinocular vision in each eye, so suck it, monkey boy!

I wouldn't, I mean, a mantis shrimp would never consider trading my, I mean his superior eyes for your puny human ones!

Re:New goal... (3, Informative)

mikael (484) | more than 6 years ago | (#23786257)

It's amazing the variation in mammalian vision - some animals still have four different color receptors (the normal red, green and blue, plus an extra one which sees into the ultraviolet range of the electromagnetic spectrum). Insects are able to see into the UV range as well as being able to detect the polarization of sunlight.

Liveleak has a video of the Snapping shrimp [liveleak.com]

Obligatory.... (0)

Anonymous Coward | more than 6 years ago | (#23785149)

But can it run linux?

Re:Obligatory.... (0, Offtopic)

justdrew (706141) | more than 6 years ago | (#23785237)

No, the obligatory one is: imagine a Beowulf cluster of Roadrunners!

Re:Obligatory.... (1)

lawn.ninja (1125909) | more than 6 years ago | (#23785729)

Now I know why they included that line in the LANL press release. To Kill a Slashdotter's Linux Jokes. Which also just so happens to be the title of my next novel.

Quote from article.

"Roadrunner was built using commercially available hardware, including aspects of commercial game console technologies. Roadrunner has a unique hybrid design comprised of nodes containing two AMD OpteronTM dual-core processors plus four PowerXCell 8iTM processors used as computational accelerators. The accelerators are a special IBM-developed variant of the Cell processors used in the Sony PlayStation® 3. Roadrunner uses a Linux operating system. The project's total cost is approximately $120 million."

Interesting pictures, but... (3, Funny)

RingDev (879105) | more than 6 years ago | (#23785157)

Who the hell left colored drop lights laying all over the server room!?

-Rick

Re:Interesting pictures, but... (1)

ckthorp (1255134) | more than 6 years ago | (#23785715)

Those colored-light "high tech" photos need to go away. They were cool for about 5 years. The sooner the photographers realize they're passé, the better.

Ghost in the supercomputer (2, Interesting)

Citizen of Earth (569446) | more than 6 years ago | (#23785183)

How long until we can simulate the entire brain?

And when this simulation claims to be conscious, what do we make of that?

Re:Ghost in the supercomputer (1)

AragornSonOfArathorn (454526) | more than 6 years ago | (#23785213)

How long until we can simulate the entire brain?

And when this simulation claims to be conscious, what do we make of that?

Ask it how it feels.

Re:Ghost in the supercomputer (2, Funny)

sgt scrub (869860) | more than 6 years ago | (#23785605)

Ask it if it likes boobies.

Re:Ghost in the supercomputer (1)

Arethan (223197) | more than 6 years ago | (#23785853)

Ask it what it feels like when you shut off various parts of it. >:)

Re:Ghost in the supercomputer (5, Funny)

owlnation (858981) | more than 6 years ago | (#23785229)

And when this simulation claims to be conscious, what do we make of that?
Simple. We make whatever it tells us to make. Or else.

Re:Ghost in the supercomputer (1)

Zekasu (1059298) | more than 6 years ago | (#23785285)

We ask it if it's really conscious and see if consciousness is really just in the brain.

Re:Ghost in the supercomputer (1)

RockoTDF (1042780) | more than 6 years ago | (#23785821)

Don't worry too much about that one. Consciousness is far more complex than being able to emulate human cognition. Currently, beta waves in the brain are a hot topic, and a computer emulating the human brain wouldn't necessarily have "computer waves", since it would be a digital system pretending to be analog.

Re:Ghost in the supercomputer (1)

RockoTDF (1042780) | more than 6 years ago | (#23785957)

I meant gamma waves, sorry

I'll take 2 (0)

Anonymous Coward | more than 6 years ago | (#23785191)

IBM takes credit right?

How Many Years? (1)

Zekasu (1059298) | more than 6 years ago | (#23785193)

This is actually pretty neat. Imagine having to lug around IBM's Roadrunner on your back in order to see!

But with (most) sarcasm aside, the applications for this could be useful. In the distant day when supercomputers become the size of a penny, this could replace people's vision, or even possibly add eyes in the back of one's head. (Although, I'm not sure I would welcome something "imprecise" which may be grone to plitches.)

Also, Skynet has to see somehow, right?

Re:How Many Years? (1)

owlnation (858981) | more than 6 years ago | (#23785281)

this could replace people's vision, or even possibly add eyes in the back of one's head.
Hmmm... funny, when I think of where I could possibly have an extra pair of eyes, the back of my head isn't the first place I think of...

Re:How Many Years? (1)

neokushan (932374) | more than 6 years ago | (#23785395)

I thought the purpose of the experiment was to see if the computer could recognise and interpret the images, as opposed to just being able to generate them?

Oh, for a second there.... (2, Funny)

Anonymous Coward | more than 6 years ago | (#23785209)

I thought that supermodels stimulate the human visual system.

Re:Oh, for a second there.... (0)

Anonymous Coward | more than 6 years ago | (#23785769)

No, they stimulate the reproductive system.

Running Linux in my head (1)

suck_burners_rice (1258684) | more than 6 years ago | (#23785233)

That's nothing. I'm already running a port of Linux in my brain, and cross-compiling for it on a program I wrote called VMbrain, which is able to run the same code as my brain.

Re:Running Linux in my head (1)

wikes82 (940042) | more than 6 years ago | (#23785381)

I've been running DOS for years.. I need to upgrade, so I can multitask

Re:Running Linux in my head (1)

4D6963 (933028) | more than 6 years ago | (#23785559)

That's nothing. I'm already running a port of Linux in my brain, and cross-compiling for it on a program I wrote called VMbrain, which is able to run the same code as my brain.

So, can you kill yourself by running `rm -rf /` or does it only turn you into a 'vegetable'? Oh and more importantly, when you want to communicate with another computing device, do you plug your 'cable' in or is it the other way around?

All things considered I don't think I want to hear the answer to this question.

The Last Step For Ubiquitous Robotics? (5, Interesting)

TheLazySci-FiAuthor (1089561) | more than 6 years ago | (#23785241)

Visual object recognition systems have been a thorn in the side of robotics since the beginning. The other annoyance of battery power will likely be solved by the nanowire battery, leaving 'sight' as the real final technological step for our lovely robots.

Extrapolating further, a human-quality object recognition system will yield results which we cannot currently imagine (let's avoid some big-brother robot talk for a second, however).

For example: I was looking at some old WWII photographs of troops getting on a boat - thousands of faces in these very high-quality photographs. To myself, I thought, 'Self, if all historical photographs could be placed in view of a recognition system, perhaps it could be discovered, interestingly, where certain ancestors of ours appear.'

Throw in a dash of human-style creativity and reasoning and I'm certain some truly nifty revelations are to be found in our mountains of visual documentation currently languishing in countless vast archives.

Re:The Last Step For Ubiquitous Robotics? (2, Funny)

notgm (1069012) | more than 6 years ago | (#23785509)

output:

Why does Christopher Lambert show up in all of these historical pictures?

But can it (1)

joeflies (529536) | more than 6 years ago | (#23785271)

get 60fps in Crysis?

Re:But can it (1)

doyoulikeworms (1094003) | more than 6 years ago | (#23785999)

It can render it at 60fps and drool at the graphics at the same time.

The Singularity is Near (4, Insightful)

Richard.Tao (1150683) | more than 6 years ago | (#23785273)

It's nice to see progress is being made. It's scary how accurate Ray Kurzweil's predictions seem to be; he said that by early 2010 we'll have simulated a human brain (he's a technological analyst and the author of "The Singularity is Near"). Today's desktops are faster than the supercomputers of the '90s. I can't wait till I'm able to get a laptop smarter than me in every way (cue joke about how stupid I am); that'll be a cool time to live in. Seems it's only a matter of decades away. Probably 20 years.

Re:The Singularity is Near (5, Insightful)

4D6963 (933028) | more than 6 years ago | (#23785439)

It's nice to see progress is being made. It's scary how accurate Ray Kurzweil's predictions seem to be; he said that by early 2010 we'll have simulated a human brain (he's a technological analyst and the author of "The Singularity is Near"). Today's desktops are faster than the supercomputers of the '90s. I can't wait till I'm able to get a laptop smarter than me in every way (cue joke about how stupid I am); that'll be a cool time to live in. Seems it's only a matter of decades away. Probably 20 years.

OMG a super computer! It's so powerful it can probably pop up a consciousness of its own!

Sarcasm aside, computer power and strong AI are two very distinct problems. Computer power is all about scaling up so you can do more in less time; that doesn't let you do anything new, only the same things faster. Strong AI is all about algorithms, and nobody can tell if such algorithms exist. And anyone who talks about human-like strong AI is a crackpot (Kurzweil is a crackpot to me for his wacky predictions), as we have yet to see even a bug-like strong AI, and if it were just a problem of power we'd already have something working in that field.

Re:The Singularity is Near (1)

Richard.Tao (1150683) | more than 6 years ago | (#23785721)

I wasn't saying that. Yes, raw power doesn't equate to random sentience popping out of thin air. But if you can simulate a neuron, or a group of neurons, or a region of neurons... or, hey, trillions of neurons, then you can simulate the human brain, provided you have the neurons down right and organized in a correct manner. Which we are learning more about every day. If computational power is increasing at an exponential rate, and shows that this will be possible in a number of years, then it seems like a reasonable assumption.

Re:The Singularity is Near (1)

mevets (322601) | more than 6 years ago | (#23785899)

No. This idea rests on way too much faith about how a brain works. Just because I have all the ingredients in a pot doesn't mean beef bourguignon will result from applying heat. I have to have real knowledge of how it works. Even if you could magically generate every possible substance that came from these ingredients, you would still need a way to select which one is beef bourguignon.
The kicker is, even when you do this, the one that generated it isn't necessarily a chef. It might make a mess of salmon almondine.
AI as a 'simulated human brain' is snake oil. Wasn't there an article earlier about naivety vs. skepticism?

Re:The Singularity is Near (1)

Richard.Tao (1150683) | more than 6 years ago | (#23785975)

I think I'm going to cry.

Re:The Singularity is Near (1)

4D6963 (933028) | more than 6 years ago | (#23786439)

I wasn't saying that. Yes, raw power doesn't equate to random sentience popping out of thin air. But if you can simulate a neuron, or a group of neurons, or a region of neurons... or, hey, trillions of neurons, then you can simulate the human brain, provided you have the neurons down right and organized in a correct manner. Which we are learning more about every day. If computational power is increasing at an exponential rate, and shows that this will be possible in a number of years, then it seems like a reasonable assumption.

Yeah, because all of us non-brain scientists here on Slashdot know that the brain is just a bunch of neurons connected together and nothing else.

May I highlight the recently discovered role of astrocytes [wikipedia.org] in the brain?

Re:The Singularity is Near (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23786087)

Strong AI is all about algorithms, and nobody can tell if such algorithms exist.

There are around six billion instances of such algorithms in production today. We know they exist.

And anyone who talks about human-like strong AI is a crackpot

Okay, you're a crackpot.

Re:The Singularity is Near (1)

4D6963 (933028) | more than 6 years ago | (#23786399)

There are around six billion instances of such algorithms in production today. We know they exist.

I've got some news for you: the A in 'AI' stands for 'Artificial' [wiktionary.org].

Burn.

Re:The Singularity is Near (2, Interesting)

alexborges (313924) | more than 6 years ago | (#23786213)

Has it occurred to you to actually read Kurzweil? Why do you think it's positive to label someone a "crackpot" when he is looking into some possibilities for our evolution?

More on that: how in the hell are we to keep evolving if not through technology? We won't evolve "naturally", I think that's well established, not anymore. Our social system (for ALL of us) has not, erm... evolved to be a good evolutionary system that rewards the best.

The only way "up" is through a technological singularity. I don't think it's inevitable, though; I think it's necessary, desirable.

Re:The Singularity is Near (0)

Anonymous Coward | more than 6 years ago | (#23785527)

that'll be a cool time to live in.
How? At that point, human labor will almost certainly have become unnecessary, but at the same time the social environment will still be one of haves and have-nots, not one of plenty, because the other resources are limited. When the rich don't need you anymore, what are they going to do?

On an aside, if we're going to have the power to simulate a brain, I truly hope that self-awareness emerges from the simulation, because the only thing worse than a simulated person would be a machine with more power than us but no consciousness and thus no way to have a bad conscience.

Re:The Singularity is Near (1)

BronsCon (927697) | more than 6 years ago | (#23786169)

When the rich (few) don't need the poor (many) anymore, you can expect the few to be faced with having to defend themselves from the many. I don't know about you, but I can get pretty fierce.

Re:The Singularity is Near (0)

Anonymous Coward | more than 6 years ago | (#23786225)

Assuming that they will let it come to an open confrontation, how fierce do you think you can be against an army of artificial soldiers which are smarter than you?

Re:The Singularity is Near (1)

BronsCon (927697) | more than 6 years ago | (#23786301)

How many do you think I'll let get built before I start fighting?

Re:The Singularity is Near (0)

Anonymous Coward | more than 6 years ago | (#23786441)

Enough. You'll probably build them, too.

The Singularity is degrading my self-esteem (1)

azzuth (1177007) | more than 6 years ago | (#23785771)

I can't wait till my wife AND my computer can call me a dumbass simultaneously

Too Optimistic (2, Interesting)

raftpeople (844215) | more than 6 years ago | (#23786091)

Based on reasonable extrapolations of the rate of hardware advance, we won't be able to simulate a human brain in real time until sometime in the 2020s.

However, that is based on the old (and now clearly incorrect) assumption that neurons are the only kind of brain matter that is important. It is now clear that glial cells play an important role in coordinating cognition, and there are 10 times as many glial cells as there are neurons. That sets our simulation back a few years.

I think Ray Kurzweil is way, way too optimistic regarding the rate of progress.
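
For what it's worth, here's a toy extrapolation of that timeline (a minimal Python sketch; the required compute figure and the doubling time are assumptions to argue about, not established numbers):

    import math

    # Toy Moore's-law-style extrapolation; both constants below are debatable assumptions.
    current_flops = 1e15    # Roadrunner, roughly 1 petaflop (2008)
    required_flops = 1e18   # illustrative target; published estimates span several orders of magnitude
    doubling_years = 1.5    # assumed doubling time for top supercomputer performance

    years = math.log2(required_flops / current_flops) * doubling_years
    print(f"Years to reach {required_flops:.0e} flop/s at this pace: {years:.0f}")  # ~15

    # 2008 plus ~15 years lands in the early-to-mid 2020s, consistent with the estimate
    # above, but only if those assumptions hold, and only counting raw arithmetic,
    # not the extra bookkeeping for glial cells just mentioned.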

Good luck with that. (1)

4D6963 (933028) | more than 6 years ago | (#23785277)

After examining the results, the researchers 'believe they can study in real time the entire human visual cortex.'

I'll believe it when I see it. With my own eyes, that is.

How long until we can simulate the entire brain?

How does 'never' sound? But more seriously, you'd need to have an intricate understanding of its inner workings, besides the fact that it involves creating a strong AI, whose feasibility even in the distant future falls within the realm of wild speculation.

Re:Good luck with that. (1)

mpeskett (1221084) | more than 6 years ago | (#23785859)

I suppose there's always the point at which pure brute computing power is able to simulate down to the atomic level and build up from there.

Intelligence has come about that way once already; I don't see why it shouldn't happen again (although it would be a lot quicker to have some abstraction and algorithms instead of waiting for intelligent life to evolve within a simulated universe).

Vatanen's Peak (2, Insightful)

mangu (126918) | more than 6 years ago | (#23786095)

How does 'never' sound?

It's funny that if you claim a mountain is impossible to climb, they'll name it after you [wikipedia.org]. But try going up that same mountain in ten minutes [youtube.com]. Will they rename it after you? No way...

It's true that we don't know how the human brain works yet, because we don't have all the needed tools to study it today. A caveman would never be able to understand the workings of a watch; you cannot study a watch with stone tools. But each time a supercomputer beats a record, we get a better tool to study the inner workings of the human brain.

....'cause there's people waiting for this! (1)

Kid Zero (4866) | more than 6 years ago | (#23785303)

"How long until we can simulate the entire brain?"

Lord knows we've got a planetful of nitwits to help out somehow. Just build a massive mesh network and several of these and we could raise the World IQ by a couple of points, at least!

Yeah, I'm bored.

Let's link thousands of these simulations together (3, Funny)

Daimanta (1140543) | more than 6 years ago | (#23785329)

And we should call it Skeyenet.

Re:Let's link thousands of these simulations toget (1)

squiggly12 (1298191) | more than 6 years ago | (#23785607)

What are you doing Dave?..... Dave?

The hardware is apparently there (4, Interesting)

overtly_demure (1024363) | more than 6 years ago | (#23785359)

There are roughly 10^15 synapses in a human brain. If you place 10 GB of RAM (10^10 bytes) on a 64-bit multicore computer and simulate neuronal activation levels with a one-byte value, it would take 100,000 such computers (10^10 * 10^5 = 10^15) to pretend they have roughly the synaptic simulation power of a human brain. It is apparently now feasible, at least in principle.

We are ignoring for the moment how the neural network simulators work, how they communicate amongst themselves, how they are partitioned, what sensor inputs they receive, how they are trained (that's a tough one), etc. This will turn out to be extraordinarily difficult unless some very clever people mimic nature in very clever ways.

Well, at least the hardware is there.
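
A quick back-of-envelope check of that arithmetic (a minimal Python sketch; the synapse count and per-node RAM are the assumptions used above, not measured figures):

    # Storage-only estimate, as in the post above.
    SYNAPSES = 10**15          # rough estimate for synapses in a human brain
    BYTES_PER_SYNAPSE = 1      # one byte per activation level
    RAM_PER_NODE = 10 * 10**9  # 10 GB of RAM per node

    nodes_needed = SYNAPSES * BYTES_PER_SYNAPSE / RAM_PER_NODE
    print(f"Nodes needed just to hold synapse state: {nodes_needed:,.0f}")  # 100,000

    # Note this only counts storage; it says nothing about update rate,
    # connection topology, or training.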

Re:The hardware is apparently there (0)

Anonymous Coward | more than 6 years ago | (#23785539)

neuronal activation levels
...can't be emulated in 8 bits. Signals are modulated in all available dimensions, including amplitude, frequency, and rate of change. 8 bits is far too small to emulate the state of a neuron.

On the other hand, the signaling speed of a neuron is rather slow (milliseconds.)

Re:The hardware is apparently there (2, Interesting)

overtly_demure (1024363) | more than 6 years ago | (#23785683)

You are mistaken. Most neurons emit a variable frequency of relatively stereotypical voltage spikes, and it is not a crippling first approximation to assume that all of them do. The minimum interval is about 1 ms. In any case, bump the RAM up to 20 GB and simulate the frequencies in 16 bits. A factor-of-two error in RAM is just monetary cost; it is not insurmountable.

The 1 ms minimum re-activation interval is interesting, because given enough CPU cores per RAM bank, the speed of the computer may surpass that of the biological brain.
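
As a rough sanity check on the timing claim, here's a toy throughput estimate (assuming, pessimistically, that every synapse is updated once per 1 ms step; real spiking-network simulators are usually event-driven and only touch synapses whose neurons actually fire):

    # Dense-update throughput under the thread's assumptions.
    SYNAPSES = 10**15     # synapse count assumed upthread
    TIMESTEP_S = 1e-3     # 1 ms minimum re-activation interval
    OPS_PER_UPDATE = 1    # assume a single operation per synapse per step (optimistic)

    updates_per_second = SYNAPSES / TIMESTEP_S * OPS_PER_UPDATE
    print(f"Synapse updates needed per second: {updates_per_second:.1e}")  # 1.0e+18

    # A 1 petaflop machine does about 1e15 operations per second, so a dense
    # every-synapse update at 1 ms resolution is still roughly 1000x out of reach;
    # exploiting low average firing rates narrows the gap considerably.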

Famous last (credible) words. (0)

Anonymous Coward | more than 6 years ago | (#23785365)

it's quite easy to simulate large chunks of [our brain] on supercomputers

Why supercomputers? (2, Interesting)

nurb432 (527695) | more than 6 years ago | (#23785473)

Why not just set up another 'distributed' project where we all donate cycles and simulate the brain?

Should be enough of us out here, I would think.

If it's built by humans... (1)

salparadyse (723684) | more than 6 years ago | (#23785479)

...and then of course I have this terrible pain in the (diodes/insert name of groovy new technology) all down my left hand side.

When it boots up (0)

Anonymous Coward | more than 6 years ago | (#23785551)

does it go "Beep-Beep?"

Brain == Google? (1)

Ndymium (1282596) | more than 6 years ago | (#23785577)

Because our brain is massively parallel, with a relatively small amount of communication over long distances, and is made of unreliable, imprecise components...
Wouldn't that be... Google [wikipedia.org]?

made by aliens (1)

sgt scrub (869860) | more than 6 years ago | (#23785583)

Unless it naturally focuses on boobies when a woman enters the room, it was made by aliens.

Re:made by aliens (1)

Spatial (1235392) | more than 6 years ago | (#23785713)

I knew there was something odd about those gay guys...

i think it's sexy. (1)

unspokenchaos (1295553) | more than 6 years ago | (#23785609)

Running on Linux... awesome.

Simulate is the operative word (2, Informative)

HuguesT (84078) | more than 6 years ago | (#23785625)

From TFA it's not very clear what this simulation achieved. It was code that already existed, and as far as I understand it, it was used to validate some simulation models of low-level biological vision.

However, this simulation did not necessarily achieve computer vision in the usual sense, i.e. shape recognition, image segmentation, 3D vision, etc. That is the more cognitive aspect of the visual process, which at present requires a much higher level of understanding of vision than we possess.

FYI, the whole brain has already been simulated; see the work of Dr. Izhikevich [nsi.edu]. It took several months to simulate about 1 second of brain activity.

However, that experiment did not simulate thought, just vast numbers of simulated neurons firing together. The simulated brain exhibited large-scale electrical behaviours of the type seen in EEG plots, but that is about it.

This experiment sounds very similar. I'm not all that excited yet.

What it is?? (1)

adityamalik (997063) | more than 6 years ago | (#23785643)

I read TFA, and it doesn't really talk too much about exactly what the simulation achieves or could be used for. They talk of 'danger recognition' - surely that's not possible without a simulation of the rest of the brain as well? To make the logical connections between what the eyes see and what the implication is in terms of 'danger'? And they mention cars that could drive themselves. Can someone in the know explain if this is really a big deal - or just a really really great picture matching program?

"interpretation" at what level? (4, Interesting)

electric joy boy (772605) | more than 6 years ago | (#23785645)

"aiming to produce a machine that can see and interpret as well as a human."

First I want to say that this whole level of brain modeling is really cool. However, there are, of course, different levels of "interpretation." I don't think that this computer will be able to achieve a human level of interpretation simply by modeling the visual cortex.

  1. Perception: at one level you could argue (not very effectively) that interpretation just means perception; that's an eyeball/optic nerve/visual cortex thing. E.g., you can perceive a face.
  2. Recognition/categorization: recognizing visual forms involves the visual cortex/occipital lobe. E.g., you can recognize whether that face is familiar.
  3. Interpretation: this involves assigning meaning to a stimulus, which involves many more parts of the brain than the visual cortex. It's obviously tied to memory, which is closely tied, physiologically, to emotion. It also involves higher-order thinking since, when most humans interpret a real-world stimulus, there are multiple overlapping and networked associations that must be processed into a meaningful whole. E.g., you can recognize how threatening that face is, why it is threatening or not (and in what substantive domains it is or is not threatening), and even what you should do about it.

Even "interpretation" at the second level above (which it seems the Roadrunner might be able to model) requires a lot more, for humans, than just the visual cortex.

In other words, if we were to call into existence a floating occipital lobe connected to a couple of eyes that had never been attached to the rest of a brain, we would never be able to achieve recognition/categorization, let alone interpretation. If I'm wrong, maybe some of you hardcore neuroscience types can help me out?

Vision vs. Perception (1)

gmuslera (3436) | more than 6 years ago | (#23785659)

Seeing like a human may or may not be easy... but interpreting like a human could be a bit more complicated. It could recognize patterns, detect movement, and do other things we take for granted without thinking much about them, OK, but things are a bit more complex than that. Because the brain is not that fast at processing visual information, we somewhat anticipate the future [nytimes.com] in our perceptions. That is the basis of most optical illusions.
Could it be useful to simulate such things, our limitations included? Would that computer be fooled by "normal" optical illusions? In some sense, I think yes, it would be useful. If it doesn't perceive the world the same way we do, high-level communication with humans could be harder or involve a lot of misunderstanding.

Simulating the Human Brain? (1)

sehlat (180760) | more than 6 years ago | (#23785677)

Been done for a very long time. See Politician [wikipedia.org].

Niagara Falls (1)

camperdave (969942) | more than 6 years ago | (#23785687)

Back in times of yore, when the Beach Boys were young, and Get Smart was a TV show, they used to say that a computer powerful enough to simulate the human brain would require all the electricity generated by Niagara Falls to power it, and all the water going over the Falls to cool it. So far, as our understanding of the brain's complexity grows, that estimate still remains.

Re:Niagara Falls (1)

mpeskett (1221084) | more than 6 years ago | (#23785919)

So when does the work start on building this thing?

Fuck the touristy stuff around the waterfall, I want an AI damnit!

Interpret as well as a human? (1)

MillenneumMan (932804) | more than 6 years ago | (#23785727)

I wonder if their system would be able to interpret captchas successfully?

If it's so easy to simulate (1)

Buelldozer (713671) | more than 6 years ago | (#23785753)

Per the summary, then why does it take 100,000+ cores and the world's first petaflop supercomputer to do it?

What happens when you input porn? (1)

ksd1337 (1029386) | more than 6 years ago | (#23785765)

I think if you input porn into this system, it will either hang or melt like a Slashdotted server.

Machine Consciousness (2)

RockoTDF (1042780) | more than 6 years ago | (#23785933)

Machine consciousness is not something that will likely happen in our lifetime. We don't even know exactly what it is in humans, much less in a machine. Neuroscience is further ahead on consciousness issues than computer science, and even neuroscientists haven't turned up a great deal yet. Computer scientists and physicists haven't got a clue about this, and sometimes their drivel about consciousness and human cognition is just embarrassing.

Re:Machine Consciousness (1)

KasperMeerts (1305097) | more than 6 years ago | (#23786071)

Still, I hope they never become conscious. Not for us, but for them. Switching them off would practically be murder.

Re:Machine Consciousness (0)

Anonymous Coward | more than 6 years ago | (#23786185)

Why would switching it off be murder? Wouldn't it just be like putting it to sleep?

We kill billions of conscious animals for food; why would killing an artificial intelligence be any worse?

Don't hold your breath (4, Interesting)

videoBuff (1043512) | more than 6 years ago | (#23786125)

Human vision and associated perception has confounded AI folks right from the beginning.

After examining the results, the researchers 'believe they can study in real time the entire human visual cortex.' How long until we can simulate the entire brain?"

There are researchers who believe that humans use their whole brain to "see." If that is true, the claims of these researchers are highly premature with respect to vision. Everything from stored patterns to extrapolation is used to determine what we see. Even familiarity is used in perception; that is why there is this urban myth that "foreign" people all look the same. If one were to ask those foreigners, they would say all the indigenous people look totally different.

brain simulation? (0)

Anonymous Coward | more than 6 years ago | (#23786177)

Why are we talking about simulating a brain?
Sure, it would be a super-fantastic milestone in the world of computing.
But a brain is not perfect; it's flawed.

Roadrunner runs the web server too... (1)

CodeBuster (516420) | more than 6 years ago | (#23786255)

How else to explain why they have not already been Slashdotted?

These programs run without meaningful data (3, Informative)

Anonymous Coward | more than 6 years ago | (#23786473)

I admit I didn't RTFA, but that sort of report cropping up in different places is really quite misleading in principle. While it may be true that the processing power exists to simulate networks on the scale of small parts of the brain in real time, the biological data to work on simply _does not exist_. The situation is somewhat better for the retina than for other parts of the nervous system, but seriously: nobody knows the topology of the neural networks in our brain to the level of detail required for simulations that would somehow reflect the real-world situation. Think about it: a neuron is small, just several micrometers in diameter, and it can form appendages several centimeters long (within the brain) that connect it to several thousand other neurons. The technology to map that kind of structure simply does not exist. It _is_ being developed, but there is nowhere near enough data to justify calling the programs these computers run "simulations of the human brain".