
Scientists to Build 'Brain Box'

ScuttleMonkey posted more than 8 years ago | from the steel-matter dept.


lee1 writes "Researchers at the University of Manchester are constructing a 'brain box' using large numbers of microprocessors to model the way networks of neurons interact. They hope to learn how to engineer fail-safe electronics. Professor Steve Furber, of the university school of computer science, hopes that biology will teach them how to build computer systems. He said: 'Our brains keep working despite frequent failures of their component neurons, and this "fault-tolerant" characteristic is of great interest to engineers who wish to make computers more reliable. [...] Our aim is to use the computer to understand better how the brain works [...] and to see if biology can help us see how to build computer systems that continue functioning despite component failures.'"


Fuber? (2, Funny)

alphax45 (675119) | more than 8 years ago | (#15740849)

Anyone else read that as Fubar and think "this is not going to be good!"?

Re:Fuber? (1)

yincrash (854885) | more than 8 years ago | (#15740914)

nope!

Re:Fuber? (0)

Anonymous Coward | more than 8 years ago | (#15741058)

Give me a grant, and I'll do it.


Testing for fault tolerance (4, Funny)

Freaky Spook (811861) | more than 8 years ago | (#15740853)

I wonder if they have any intention of getting these brain boxes drunk and then having them recite the ABCs?

Re:Testing for fault tolerance (1)

nolsen (518298) | more than 8 years ago | (#15740926)

Backwards?

Re:Testing for fault tolerance (1)

davidsyes (765062) | more than 8 years ago | (#15741211)

Would holes in the box lead to a real "brain drain"?

Could these brains be taught followance?

Fault tolerance with fuzzy logic already done (2, Informative)

EmbeddedJanitor (597831) | more than 8 years ago | (#15740959)

In an interesting experiment in the 80s, a controller based on fuzzy chips degraded gracefully.

The system was designed around a set of fuzzy computing boards. When one of the boards was removed, the control degraded but still continued to function. Of course, if certain critical boards (e.g. ones directly attached to the outputs) were removed, the system would fail immediately.

Re:Testing for fault tolerance (0)

Anonymous Coward | more than 8 years ago | (#15741037)

to see if biology can help us see how to build computer systems that continue functioning despite component failures.

It's called heart, baby, and for some people - stubbornness. They should just stop functioning ^^

Re:Testing for fault tolerance (2, Interesting)

QuantumFTL (197300) | more than 8 years ago | (#15741065)

I wonder if they have any intention of getting these brain boxes drunk then get it to recite the ABC's?

That's quite a funny post, but it brings me to an (IMHO) interesting point - given a virtual "brain" capable of performing a certain task, can specifically targeted "damage" to the system result in creativity? Many of the most creative minds in our history got their inspiration in part from mind-altering chemicals...

Re:Testing for fault tolerance (5, Interesting)

CroDragn (866826) | more than 8 years ago | (#15741260)

This has been done before by introducing a random element into the neural net. If done correctly, this can result in "creativity". Here [mindfully.org] is one link about it; I've seen it in many other places too, so Google for more.
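To make that concrete, here is a minimal C sketch of the idea (illustrative only, not taken from the linked article): a single fully-connected layer whose trained weights are jittered with small random noise, so the output varies from run to run. The layer sizes, weights and noise amplitude are all made-up assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N_IN  3
#define N_OUT 2

/* Hypothetical trained weights for one fully-connected layer. */
static double w[N_OUT][N_IN] = {
    { 0.5, -0.2,  0.8 },
    { 0.1,  0.9, -0.4 },
};

/* Uniform noise in [-amp, +amp]. */
static double noise(double amp)
{
    return amp * (2.0 * rand() / (double)RAND_MAX - 1.0);
}

int main(void)
{
    double in[N_IN] = { 1.0, 0.5, -1.0 };
    double amp = 0.05;                    /* the "damage"/creativity knob */

    srand((unsigned)time(NULL));
    for (int o = 0; o < N_OUT; o++) {
        double sum = 0.0;
        for (int i = 0; i < N_IN; i++)
            sum += (w[o][i] + noise(amp)) * in[i];   /* jittered weight */
        printf("output %d = %f\n", o, sum);
    }
    return 0;
}

Tuning amp between 0 (deterministic) and something larger is the crude knob for how much "damage" or "creativity" gets injected.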

Re:Testing for fault tolerance (2, Funny)

eonlabs (921625) | more than 8 years ago | (#15741091)

Wouldn't that require not supplying them alcohol until they form rust in the likeness of a five-o'clock shadow?

Pray to god that they fail. (1)

elucido (870205) | more than 8 years ago | (#15741140)

If this brain in a box is successful, humans will be worthless. How are we supposed to compete with machines that never get tired, never sleep, never eat, etc?

Re:Pray to god that they fail. (1)

davidsyes (765062) | more than 8 years ago | (#15741205)

We could learn how to spend that time learning how to dream of electric sheep that dream of electric humans that dream to learn....

Otherwise, losing out to the main brain would be all in vain.

(OK, that was Baaaaaahhhhdddd)

Re:Testing for fault tolerance (1)

davidsyes (765062) | more than 8 years ago | (#15741191)

Shit, for a moment I thought this was about harvesting the brains of organ donors.

BTW, what would be better: Series or Parallel links for the gray matter?

How would the "juices be kept flowing" in such an arrangement?

How would FLOPS of gray matter be calculated in a meaningful (err, umm, "thoughtful") way?

What happens if a dyslexic or autistic brain is linked in that collective?

What happens if a murderous or anorexic or bulimic brain or two are in the mix?

Copper top or zinc?

Plasma links or liquid crystalline entity links?

(hehehe, slash image word: "contents")

Two Separate Goals (3, Insightful)

Anonymous Coward | more than 8 years ago | (#15740858)

Continuing to function is one thing, but continuing to produce correct answers with high reliability is another. And under stress, I'd say biological brains aren't particularly good at any of this.

Re:Two Separate Goals (1)

grim4593 (947789) | more than 8 years ago | (#15740918)

But if there is a hardware failure you just have to wait awhile and the "bad neurons" will be bypassed. No more corrupt memory problems!

Re:Two Separate Goals (1)

s388 (910768) | more than 8 years ago | (#15741012)

human brains do have corrupt memory problems though.

pretty bad ones, in my experience and from what i've heard.

i think that in the neurological analog to the hardware failure, the bypasses won't properly occur.

"Our brains keep working despite frequent failures of their component neurons, and this "fault-tolerant" characteristic "

our brains keep working-- in the sense that they don't shut down, or explode, usually-- but they don't necessarily keep working WELL. i mean sheesh, even with some paltry uptime, like 15 or 24 hours, you start getting major crashes, freezes and hangs.

Re:Two Separate Goals (1)

grim4593 (947789) | more than 8 years ago | (#15741036)

True enough... Someday we will be upgraded and our uptimes will surpass even linux!

Re:Two Separate Goals (2, Interesting)

Marcos Eliziario (969923) | more than 8 years ago | (#15741071)

The thing is that we are very resilient. Kill one transistor in a microprocessor and you're done. Compare that with people who lost some brain stuff in accidents and are still able to breathe, walk, speak, and sometimes even manage to rewire their brains to regain some lost functionality. So I don't agree when you say that human brains don't work very well under stress.

Man, meet your replacement. (1)

elucido (870205) | more than 8 years ago | (#15741151)

Exit Man, enter Brainbox. This brainbox will ultimately reduce the value of human life. Why? How many humans will we need once computers can do all the work and robots can be more productive than humans?

Don't tell me humans will be needed to program and repair them, because these self-healing robots are being invented to prevent exactly that.

I think it's exciting (1)

gasmonso (929871) | more than 8 years ago | (#15740863)

Years from now when computers are 1000x faster and are our overlords, we can look back at this experiment... and say thanks a lot assholes! I kid, I kid.

http://religiousfreaks.com/ [religiousfreaks.com]

Re:I think it's exciting (1)

bjackson1 (953136) | more than 8 years ago | (#15741128)

....and if Moore's law keeps going, that's only 15 years out!

That's a good thing. (1)

elucido (870205) | more than 8 years ago | (#15741192)

At least if computers are our overlords, we will still have jobs. If robots take over however, what do we need humans for?

Hardware? (2, Insightful)

CosmeticLobotamy (155360) | more than 8 years ago | (#15740869)

I don't mean to be one of those people that craps on a chunk of science without knowing exactly what's going on, but I would think there would be some large advantages to building the research version in software. There's less soldering when you realize it's not quite right.

Re:Hardware? (2, Interesting)

SnowZero (92219) | more than 8 years ago | (#15740952)

True, but the research grant requests can be much larger when you say you are going to do it in hardware :)

More realistically, perhaps they have already simulated some stuff and now want to scale it up drastically in size and speed. There isn't really enough detail in the article to tell how custom this is going to be. It could be anything from a Sun Niagara or a Connection Machine up to some custom designed parallel FPGA monster.

Re:Hardware? (-1, Offtopic)

QuantumG (50515) | more than 8 years ago | (#15741027)

I can't even get gcc's -fbounds-check to do anything. It's not rocket science, and yet I still haven't seen a C compiler that does *basic* bounds checking. Example:
#include <stdio.h>

int main()
{
        char tmp[100];
        int n = 500;
        tmp[n] = 5;        /* writes 400 bytes past the end of tmp */
        printf("here\n");
        return 0;
}
GCC happily compiles this code. It also happens to run just fine. Similarly if I write this code:
int main()
{
    int *p = NULL;
    if (p == NULL)
        *p = 1;            /* guaranteed NULL dereference */
    return 0;
}
GCC happily compiles it without a single warning, and then the program promptly crashes when you run it. Is it too much to ask that GCC detect that on no path is p assigned a valid pointer? Is that really that hard? I know it ain't! The fact that the compiler doesn't warn me that I have written == when I clearly meant != is just adding insult to injury.

As such, every time I see "research" into developing new fault-tolerant hardware or software, I have to scratch my head and wonder exactly when research ever gets turned into practice.
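Absent compiler support, a common fallback is checking at run time; the sketch below is only that (a hedged illustration, not GCC's mechanism), and checked_store is a hypothetical helper name.

#include <assert.h>
#include <stdio.h>

/* Hypothetical checked store: traps with a diagnostic instead of silently
   scribbling past the end of the buffer. */
static void checked_store(char *buf, size_t len, size_t idx, char val)
{
    assert(idx < len && "array index out of bounds");
    buf[idx] = val;
}

int main(void)
{
    char tmp[100];

    checked_store(tmp, sizeof tmp, 50, 5);    /* fine */
    checked_store(tmp, sizeof tmp, 500, 5);   /* aborts here at run time */
    printf("never reached\n");
    return 0;
}

It trades a little speed for failing loudly at the point of the error rather than quietly corrupting the stack.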

Re:Hardware? (1)

dollargonzo (519030) | more than 8 years ago | (#15741110)

i suggest you go check out the gcc bounds checking patch, which does, in fact, perform bounds checking quite well. in the default compiler, -fbounds-check doesn't really do anything special, but -fbounds-checking in the modified compiler does quite a bit.
 

Re:Hardware? (1)

QuantumG (50515) | more than 8 years ago | (#15741137)

Lot of good that is. I also checked splint [splint.org], which fails to detect the out-of-bounds array reference in my first example but, thankfully, does detect that p has been assigned NULL and is then dereferenced. However, this program:

void foo(char *p)
{
    if (p == NULL)
        *p = '1';    /* dereferences p only when it is known to be NULL */
}

int main()
{
    return 0;
}

Elicits no warnings. Which is just pathetic. Maybe this is something I can add, but I honestly thought splint was the shit.

Re:Hardware? (1)

dugjohnson (920519) | more than 8 years ago | (#15741175)

And your point is? It's obvious that the obvious things aren't getting caught, but if you are writing obviously bad code, there isn't a bounds checker in the world that will help you. One could write a bounds checker that would catch everything, but usually we have better things to do, and that bounds-checking piece of code would be monstrously large.
If your point is that if we can't even write a bounds checker, how can they work on something harder, then I would respond that the bounds checker is there to help a decent programmer, not to do everything for the programmer. If you had another point, I missed it entirely.

Re:Hardware? (1)

QuantumG (50515) | more than 8 years ago | (#15741194)

if (a == NULL) *a = b; is a common bug. If it's in a part of code that isn't executed often then you might not even notice it. Tools should detect this stuff.

Re:Hardware? (1)

Cobralisk (666114) | more than 8 years ago | (#15741157)

C doesn't hold your hand. It allows you to do anything you could do in assembly language. That's the point of C. This includes setting arbitrary memory locations to arbitrary values. C works under the assumption that if you wrote it, you must have meant it. Why else would you have written it? Even more evil is the conditional statement if (p = NULL). One missing keystroke can lead to hours upon hours of debug time (we've all done it at least once). Don't like it? Put on a pink dress and go use Java.
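A hedged illustration of that one-keystroke bug (invented code, not from the parent post):

#include <stdio.h>

int main(void)
{
    char *p = "hello";

    if (p = NULL)           /* BUG: '=' assigns, so the test is always false */
        printf("null\n");   /*      (one keystroke away from the intended '==') */

    if (p == NULL)          /* the intended comparison -- now true because of the typo above */
        printf("p was clobbered by the earlier typo\n");

    return 0;
}

Compiling with -Wall usually gets GCC to suggest parentheses around the assignment used as a truth value, and writing the constant first (if (NULL == p)) turns the typo into a hard compile error.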

Re:Hardware? (1)

QuantumG (50515) | more than 8 years ago | (#15741177)

Maybe they should call the GCC extensions that detect obvious errors the --pink-dress options.

Re:Hardware? (1)

Lisandro (799651) | more than 8 years ago | (#15741222)

Modpoints! My kingdom for modpoints!

Re:Hardware? (1)

Daverd (641119) | more than 8 years ago | (#15741139)

For initial versions, yeah it might make more sense to model things in software first. I think the whole point of this though is to build a computer where you could, for example, take a hammer to a part of it and the rest of it would keep on computing, although probably not nearly as well. More realistically I think they're concerned with individual hardware components dying. You can build a neural network in software all you like; if the power supply dies, so does your software.

Redundancy... (0)

Anonymous Coward | more than 8 years ago | (#15740881)

...is probably the answer they'll come up with. If computers have "large numbers of microprocessors" and software to route work past ones that have failed, it will be a long time (equivalent to, say, the age at which a human starts showing signs of senility) without maintenance before the system fails.

Redundent department of redundancy. (0, Redundant)

headkase (533448) | more than 8 years ago | (#15740886)

...this "fault-tolerant" characteristic is of great interest to engineers...

I believe it's called redundancy. Seriously.

Re:Redundent department of redundancy. (1)

Yahweh Doesn't Exist (906833) | more than 8 years ago | (#15740904)

redundancy doesn't scale well. what happens if your backup goes down? you need n+1 copies of the system to handle n faults, which either means that most of the time you're wasting n resources, or that when something does break you lose 1/(n+1) of your capacity.

I didn't RTFA but "educated sense" suggests to me the aim is to tolerate multiple faults without having large changes in capacity or wasting resources.
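As a minimal sketch of the n+1 idea (assuming fail-stop replicas; replica_compute and its failure pattern are invented purely for illustration):

#include <stdio.h>

#define N_REPLICAS 3   /* n+1 copies: survives up to N_REPLICAS-1 fail-stop faults */

/* Hypothetical replica: returns 0 on success, -1 if this copy is down. */
static int replica_compute(int id, int input, int *out)
{
    if (id == 0) return -1;          /* pretend replica 0 has failed */
    *out = input * 2;                /* the "real" work */
    return 0;
}

static int redundant_compute(int input, int *out)
{
    for (int id = 0; id < N_REPLICAS; id++)
        if (replica_compute(id, input, out) == 0)
            return 0;                /* first healthy copy wins */
    return -1;                       /* every copy failed */
}

int main(void)
{
    int result;
    if (redundant_compute(21, &result) == 0)
        printf("answer: %d\n", result);
    else
        printf("total failure\n");
    return 0;
}

Most of the time all but one replica's work is wasted, which is exactly the cost the parent comment describes.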

Re:Redundent department of redundancy. (1)

SpaceLifeForm (228190) | more than 8 years ago | (#15741123)

Like Tandem.

Re:Redundent department of redundancy. (3, Interesting)

lindseyp (988332) | more than 8 years ago | (#15740936)

Not only that, but it's a hugely inefficient abstraction of the 'idea' from the level of the individual neuron. We're good at pattern recognition and conditioned response, but when it comes to doing calculations we're incredibly slow. Not to mention inaccurate. Would you like your computer to regularly 'make mistakes'?

Re:Redundent department of redundancy. (1)

B3ryllium (571199) | more than 8 years ago | (#15740961)

Programmer: "What happened to Chechnya?"

Computer: "Oops."

They'll find out when they stop using Windows (2, Insightful)

guruevi (827432) | more than 8 years ago | (#15740900)

I don't know what level of redundancy they want, but if they have to build a brain box to figure that out:

There are a bunch of tools and specs out there for getting a fully (even multiply) redundant system. You can have >1 server in any type of configuration, sharing any type of resource, and when one fails, the other takes over, fully redundant.

Re:They'll find out when they stop using Windows (1)

grim4593 (947789) | more than 8 years ago | (#15740935)

I don't think redundancy is what they are aiming for. I think what they are trying to do is more like clustering. If one of the clustered components fails, then the rest of them even out the extra load.

My Brainbox (2, Interesting)

Doc Ruby (173196) | more than 8 years ago | (#15740903)

A large number of microprocessors? Why not a box stuffed with hundreds of millions of FPGA gates, configured into lots of multiply-accumulators (or lots of embedded hardwired DSPs), interconnected within and between layers? That is how the brain actually works. Hook it up to cameras, mics and some rubber/piezo tentacles with pressure/heat sensors, leave it in the lab for a few months, and start asking it questions.
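For illustration, a software sketch of the multiply-accumulate arrangement being described: one layer of "neurons", each just an accumulation of weighted inputs followed by a threshold. The sizes, weights and nonlinearity are arbitrary assumptions; on an FPGA, each inner loop would map onto a MAC block.

#include <stdio.h>

#define IN  4
#define OUT 3

/* One layer of "neurons": each output is a multiply-accumulate over all inputs,
   which is essentially what a DSP MAC block or FPGA multiplier chain would do. */
static void layer_forward(const double w[OUT][IN], const double *in, double *out)
{
    for (int o = 0; o < OUT; o++) {
        double acc = 0.0;
        for (int i = 0; i < IN; i++)
            acc += w[o][i] * in[i];      /* multiply-accumulate */
        out[o] = acc > 0.0 ? acc : 0.0;  /* simple threshold/nonlinearity */
    }
}

int main(void)
{
    double w[OUT][IN] = {
        {  0.2, -0.1, 0.4, 0.0 },
        {  0.5,  0.3, -0.2, 0.1 },
        { -0.3,  0.6, 0.1, 0.2 },
    };
    double in[IN] = { 1.0, 0.0, 0.5, -1.0 };
    double out[OUT];

    layer_forward(w, in, out);
    for (int o = 0; o < OUT; o++)
        printf("neuron %d fires %f\n", o, out[o]);
    return 0;
}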

Sure... (-1)

Anonymous Coward | more than 8 years ago | (#15740929)

Sure, yeah, one of those! Why didn't I think of that! Better yet, why didn't they? Hmmm... And while you're at it, how about an elevator to the moon! Idiot.

Re:Sure... (0, Offtopic)

Doc Ruby (173196) | more than 8 years ago | (#15741034)

Fuck you, Anonymous idiot Coward. Your brain could be replaced by a rubberband and a propeller.

Re:Sure... (-1, Flamebait)

Anonymous Coward | more than 8 years ago | (#15741172)

Different AC here.

Why is it when I see someone with an anti-war / tree hugger / save the earth sig they all have so much pent up rage? Do you use so much energy riding your bike to work that you cannot get it up at night?

Re:Sure... (0)

Anonymous Coward | more than 8 years ago | (#15741179)

As long as that rubber band and propeller were attached to a G.I. Action Boat, I wouldn't mind.

Redundent analog. (0)

Anonymous Coward | more than 8 years ago | (#15740919)

"They hope to learn how to engineer fail-safe electronics. Professor Steve Furber, of the university school of computer science, hopes that biology will teach them how to build computer systems. He said: 'Our brains keep working despite frequent failures of their component neurons, and this "fault-tolerant" characteristic is of great interest to engineers who wish to make computers more reliable"

How about starting with the fact that it's analog.

Brain Box? (1)

Aerinoch (988588) | more than 8 years ago | (#15740925)

Was I the only one who thought that this story would be about devices used to control dinosaurs [wikipedia.org] ?

Re:Brain Box? (1)

TaggartAleslayer (840739) | more than 8 years ago | (#15740971)

Actually, yes. I'm sorry, man. I'm so, so, sorry.

Re:Brain Box? (1)

sabernet (751826) | more than 8 years ago | (#15741193)

I was thinking "Cyberbrain" [wikipedia.org]

And I am ashamed.

it took long enough (3, Insightful)

sepharious (900148) | more than 8 years ago | (#15740927)

who else besides me thinks this one should have been obvious from the get-go? it makes no sense to try and build a single processor that could function similarly to a brain. by utilizing multiple processors you also have the option to design different types of processors to work together, similar to the various types of neurons found in biological systems. this will hopefully be a huge step forward in developing possible AI systems.

Re:it took long enough (1)

qw0ntum (831414) | more than 8 years ago | (#15741040)

I just wanted to point out that the research is not geared toward developing AI technologies based on the structure of the brain. The research is using the brain as a model for more reliable systems, what with the brain's ability to keep functioning despite damage to 'component neurons'.

Re:it took long enough (1)

sepharious (900148) | more than 8 years ago | (#15741226)

I know that they are not pursuing AI as such, but my point was that developing massively parallel systems will have additional benefits in understanding possible alternative methods for AI creation. The ability to construct artificial brains will undoubtedly be useful for that.

# of neurons needs to equal # of cpu's (2, Interesting)

rts008 (812749) | more than 8 years ago | (#15740930)

To actually model the human brain, I would think that the number of CPUs needed would require a really large interconnect bus, and giving each CPU memory comparable to the human brain's capacity is a little ahead of our current technology... otherwise AI solutions that actually worked would not be such a big problem, and would already be solved and in use.
We have made big advances in this area, but having even a crude prototype of Lt. Data (Star Trek: The Next Generation) is still quite a ways off.

However, I expect that we will eventually solve this problem. I just hope that we do in my lifetime- that would be way cool! (work fast, I'm 49!)

Re:# of neurons needs to equal # of cpu's (1)

JorDan Clock (664877) | more than 8 years ago | (#15741152)

The number of CPUs and neurons doesn't need to be equal. Since a neuron does very little "calculation", a single CPU (especially with multiple cores) can perform the job of many neurons. Of course, since the goal of this project is to replicate redundancy, the limit on the number of simulated neurons would be more a choice by the experimenters than a limit of the hardware.
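A minimal sketch of that point, assuming a toy neuron model: one CPU time-multiplexes many simulated neurons and simply skips any marked as dead, so the rest of the network keeps updating. All the numbers here are invented.

#include <stdio.h>

#define N_NEURONS 1000

/* One CPU time-multiplexing many simple model neurons.  A dead neuron is just
   skipped; the rest of the network keeps being updated, which is the kind of
   graceful degradation the project is after. */
struct neuron {
    double potential;
    int    alive;
};

int main(void)
{
    static struct neuron net[N_NEURONS];
    int alive_count = 0;

    for (int i = 0; i < N_NEURONS; i++)
        net[i].alive = (i % 50 != 0);     /* pretend 2% of neurons have failed */

    for (int step = 0; step < 10; step++) {           /* simulation time steps */
        for (int i = 0; i < N_NEURONS; i++) {
            if (!net[i].alive)
                continue;                              /* route around the failure */
            /* toy update: leak plus input from one neighbour */
            double input = net[(i + 1) % N_NEURONS].potential;
            net[i].potential = 0.9 * net[i].potential + 0.1 * input + 0.01;
        }
    }

    for (int i = 0; i < N_NEURONS; i++)
        if (net[i].alive) alive_count++;
    printf("%d of %d neurons alive, network still updating\n",
           alive_count, N_NEURONS);
    return 0;
}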

Re:# of neurons needs to equal # of cpu's (1)

asuffield (111848) | more than 8 years ago | (#15741273)

That's not the really big problem with this approach. This is:

It takes about 15-20 years to train a human to the point of usefulness. The first couple of those years are spent cooing and drooling. An effective synthesis of the human brain in hardware would be expected to take about as long and about as much effort to train before becoming useful. Sure, at that point you could duplicate it relatively easily - but who is willing to spend years making baby noises into a microphone in the hope that *this* time the thing is going to work? You can't just run the hardware faster, because these things learn based on their input data and we can't synthesise parenting yet.

Faithful reproductions of the human brain structure are unlikely to generate results any time soon because of this. We have no idea whether it's possible to design the thing to grow up faster, nor sufficient understanding to make an educated guess. Trying to take shortcuts may work, or it may cause the thing to stop working entirely, and we're not even sure how to tell the difference between the two states in a reliable fashion.

It's entirely possible that we might someday solve this problem, but I don't *expect* it. My bet is that we crack the problem of how to upload an already-formed human mind first, and go from there.

Re:# of neurons needs to equal # of cpu's (0)

Anonymous Coward | more than 8 years ago | (#15741281)

But the mistake you're making is that you'd even try to do a HUMAN brain first. If you could do the duplicate thingie, you'd probably model something much simpler. Like a dog. or a sheep. or a slashdot poster. (ah crap, that's me...)

Skynet (1)

arthurpaliden (939626) | more than 8 years ago | (#15740942)

So how long before it starts to think for itself?

Re:Skynet (1)

davidsyes (765062) | more than 8 years ago | (#15741229)

We'll know THAT when a reanimated embalmed/entombed brainiac in the box is able to hurl chairs via telekinesis. Now THAT'S thinking inside and outside the box...

Brainiac (I'm gonna fuckin' KILL the board of directors for putting my brain around these ex-plants....)

Motivation (0)

Anonymous Coward | more than 8 years ago | (#15740951)

There's interest in such systems from a) the military, b) space exploration. Show some good ideas and you'll have nice funding or a new job.

Inter-neuron Communication (1, Funny)

Nom du Keyboard (633989) | more than 8 years ago | (#15740956)

Do human brain neurons communicate with each other using TCP/IP?

Re:Inter-neuron Communication (1)

Wescotte (732385) | more than 8 years ago | (#15741010)

Do human brain neurons communicate with each other using TCP/IP?

Nope, hence the phrase "In one ear and out the other" aka packet loss...

Reliability... (1)

xarium (608956) | more than 8 years ago | (#15740964)

While I agree that the human brain has many computing virtues to teach us (lateral/creative thought, massively parallel processing, etc.), I have never counted "reliability" among them - it is an interesting concept.

OTOH, the failure rate at the end of the manufacturing process for CPUs is probably higher than the defect rate in human brains... err, I hope.

Re:Reliability... (2, Interesting)

cmaxwell (868018) | more than 8 years ago | (#15741155)

Amazing to think that the human brain is somehow a benchmark for reliability. "Our brains keep working despite frequent failures of their component neurons" - right, sometimes. As a neurology resident, I spend most of my time witnessing and trying to fix the failures... some of the craziest stuff you can imagine. The failures are spectacular - loss of memory, speech, understanding, motor function, balance, etc. - sometimes predictable, often not. Between seizures, strokes, encephalopathy, meningitis, hemorrhages, aneurysms, tumors, and whatever else you might come up with, it is amazing we live as long as we do. Hey, maybe there is something to that - I'm reconsidering my original premise.

"Works"? I think they mean "Behaves" (1)

jpellino (202698) | more than 8 years ago | (#15740965)

We don't know how the brain works.
We know it's not a binary digital stored program computer.
They should have some success modeling how the brain behaves, though.
Maybe then they can contribute to the real question of how the mind works.
(Hey, wait a minute - this isn't those two white mice again, izzit?)

their "brain box" (1)

Connie_Lingus (317691) | more than 8 years ago | (#15740966)

...sounds a lot like the web 2.0. Do I sense a conspiracy here? Quick, find Cheney!

Academia dupe? (4, Informative)

shib71 (927749) | more than 8 years ago | (#15740982)

Re:Academia dupe? (1)

isny (681711) | more than 8 years ago | (#15741168)

The article title sounds like something from the 1940s. "Scientists are working to create an electronic brain to aid in the war effort". Wait...that could be this year too.

Re:Academia dupe? (1)

shib71 (927749) | more than 8 years ago | (#15741200)

They'd have to keep it away from the White House though - if Bush came into contact with a truly functional brain, the universe would collapse.

Can't do it with microprocessors (1)

Pedrito (94783) | more than 8 years ago | (#15740983)

The brain is far more dynamic than any microprocessor. There's simply no way to reproduce that kind of fault tolerance without a living system. When parts of the brain are damaged, a few things happen. There may be enough redundancy that it simply continues to work. This is reproducible to some degree - look at RAID. But when that fault tolerance isn't there, the only way for the brain to get back lost abilities is to start growing new neurons, making new axon connections, and building a new neural network (this ability tends to diminish quickly with age, however). If microprocessors fail, you can't just have the computer make new ones and rebuild the physical wiring, unless I've missed some really stunning breakthroughs in nanotechnology over the past few days. We're REALLY far away from doing anything that approaches what the brain does. Hell, we're really far away from being able to do what an arm can do. Sure, we can make one that bends and holds things and moves and even "feels" to some degree. But we sure as hell can't make one that you can break and it will heal itself.

Re:Can't do it with microprocessors (1)

IlliniECE (970260) | more than 8 years ago | (#15741180)

Well, this really works on two levels. One level is physical, and you're prolly right, self-healing semiconductors would be a tough nut... The other level is logical/architectural, and this *is* feasible.

Re:Can't do it with microprocessors (0)

Anonymous Coward | more than 8 years ago | (#15741266)

Wow, once again the Humble Slashdot poster knows more than those ignorant University professors. It's hard to say who is more stupid: the PhD at Manchester, a University with a long history of cutting-edge work in computers, or the fools who gave him a million dollars. If only they had Asked Slashdot, they could have saved all that money, and perhaps bought consoles, games and pizza for everyone on campus.

Where to start... First, this is a press release, which means that it was written by an English major who may or may not be able to tell a computer from a microwave oven. This press release was given to a journalism major, who may know how to use a computer (i.e. how to spell check), but knows as much about artificial intelligence research as they know about how string theory and gourmet French cooking are connected. And this all comes from a researcher who is desperately trying to simplify his project for the general public, in hopes his funding will not be cut. That's where the Humble Slashdot poster fits in, as a member of the general public, who thinks that computers are boxes with Magic Smoke Inside. This is why press releases are the best place to get detailed information about technology research.

Now that we have solid information, let's examine the insightful analysis of the impossible technical problems with the project: Hell, we're really far away from being able to do what an arm can do. Sure, we can make one that bends and holds things and moves and even "feels" to some degree. But we sure as hell can't make one that you can break and it will heal itself. Hmmm, is this a description of a computer science project dealing with artificial intelligence? Does this address how to model neurons with microprocessors? There was also a reference to nanotechnology a bit earlier in the rant, and that doesn't seem to fit in with micros and AI. The objection to the project seems to be along the lines of: since I can't teleport to the moon to buy a six-pack of Red Bull and not pay sales tax, let's stop wasting money on science and go back to steam engines.

Now I'm really going out on a limb here, but let me guess what this research could really be about, keeping in mind the press release issue. One could, if one were smart, make microprocessors emulate neurons. In fact, this has been done. One could then hook a bunch of them together and try to create a system that is similar to, or inspired by, biological systems. If one had such a system, one could then do experiments to see how it responds to failures. These experiments could try to reproduce observed behavior in natural systems. This would, in fact, be what is called research: hypothesis, experiment, results. Now I have no idea what the real plan is, but by assuming that the people doing the work are responsible academics, I can imagine how this could be a good thing. Or I can be like the Humble Slashdot poster, and start insulting people and organizations I don't know anything about based on my deep insight and vast ignorance. Who is in good company with the people who rated this comment at 2...

Re:Can't do it with microprocessors (1)

karlto (883425) | more than 8 years ago | (#15741277)

But when the brain fault tolerance isn't there, the only way for the brain to get back lost abilities is to start growing new neurons, making new axon connections and to build a new neural network (this ability tends to diminish quickly with age, however). If microprocessors fail, you can't just have a computer make new ones and rebuild new physical wirings, unless I've missed some really stunning breakthroughs in nanotechnology over the past few days.

Surely that depends on where the model starts and ends - if it continues to operate with faulty components and informs the right person (who fixes it), that meets your criteria, doesn't it?

Downside of biological computing (5, Insightful)

QuantumFTL (197300) | more than 8 years ago | (#15740999)

While I was an intern at the Jet Propulsion Laboratory, back when I was an undergraduate, I was very gung-ho about biologically inspired computing - I implemented an automatic flowchart positioning system using a genetic algorithm that would "evolve" a correct solution to the problem. While this certainly worked to some extent, the instability and sheer unpredictable nature of using such a stochastic algorithm made it impossible to use in a mission-critical setting. Many biologically inspired algorithms solve problems through methods that cannot be proven correct (unlike, say, the mathematics circuitry in a CPU), but merely empirically observed to "do a good job."

One of the main drawbacks of human engineering is the need for certainty, which often prohibits the use of many high-efficiency stochastic algorithms (especially for things like mesh communication) in conservative industries, like the US defense industry. This is also a significant problem in other areas, however, and many biologically inspired algorithms have properties that we cannot, so far, completely explain - they are treated like "black boxes" with many unknowns for engineering purposes.

I think that in certain circles, the tremendous success that is evolution on this planet has overshadowed its inherent weaknesses - that it is a greedy, local optimizer which cannot reach a large amount of the possible biological search space due to being stuck in local optima, and the added constraint that everything must be constructed out of self-replicating units (these two factors are why something useful, like, say, a Colt 45, will never emerge without the pre-existence of an intelligence). Biological examples are fascinating and often practical, but the biological approach is almost always "brute force" and/or "sub-optimal but still alive."

I think biologically-inspired algorithms will continue to gain prominence, but in my estimation there will be harsh limits on how far guarantees of performance derived from empirical tests and symbolic analysis will actually hold.
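For readers who haven't met one, here is a deliberately minimal sketch in the genetic-algorithm family (really a (1+lambda)-style hill climber: mutation and selection of the best, no crossover), which also illustrates the greedy, local-optimizer character described above. The fitness function and constants are arbitrary assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define POP    20
#define GENS   500
#define TARGET 4242

/* Toy fitness: how close an individual is to TARGET (higher is better). */
static long fitness(int x)
{
    long d = (long)x - TARGET;
    return -(d * d);
}

int main(void)
{
    int pop[POP];
    srand((unsigned)time(NULL));

    for (int i = 0; i < POP; i++)
        pop[i] = rand() % 10000;              /* random initial population */

    for (int g = 0; g < GENS; g++) {
        /* find the current best individual */
        int best = 0;
        for (int i = 1; i < POP; i++)
            if (fitness(pop[i]) > fitness(pop[best]))
                best = i;

        /* next generation: mutated copies of the best (greedy selection) */
        for (int i = 0; i < POP; i++)
            if (i != best)
                pop[i] = pop[best] + (rand() % 101) - 50;   /* small mutation */
    }

    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(pop[i]) > fitness(pop[best]))
            best = i;
    printf("evolved %d (target %d)\n", pop[best], TARGET);
    return 0;
}

Because each generation only mutates the current best, a less trivial fitness landscape would leave it stuck on a local optimum -- the same weakness attributed to evolution above.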

Re:Downside of biological computing (3, Interesting)

NovaX (37364) | more than 8 years ago | (#15741233)

While the article is vague, I doubt they are considering genetic algorithms. While very cool, they can be unpredictable and hard to reproduce. My favorite story, which drove home to me that the technique would rarely work, is about voice-recognition hardware on an FPGA. The genetic algorithm achieved excellent performance, but when the researchers "copied" the mask to another FPGA, it failed to work. The cause: the algorithm leveraged effects such as cross-talk that engineers work hard to avoid, which caused it to be tied to that particular environment.

What these researchers are probably aiming for is a large-scale MP system that can readily handle massive failures. Who would find this useful? Any enterprise software company, such as Google, which has thousands upon thousands of machines in its cluster. The ability to have a large network of simple (cheap) processors and a network that can readily withstand a massive multi-point failure is quite attractive to real-world companies.

Both software and hardware are beginning to go down this route through the evolution of their industries. On the software front, asynchronous message-oriented systems work beautifully in terms of reliability, scalability, maintainability, and service integration. In the coming years, you'll notice that most major web services will be running on an SOA architecture. On the other side of the pond, raw CPU performance is getting harder to squeeze out. Power issues are limiting frequency scaling (due to current leakage), we are hitting the limits of our ability to feasibly extract more ILP that's worth the extra effort, and the market drivers for these types of processors are slowly diminishing. Instead, CPUs with multiple physical and logical cores are gaining ground, will be cheaper to develop and manufacture, and fit future market demands.

It will be nice to hear how this research goes, since it will hopefully uncover potential problems and solutions that will be useful in the coming decades.

Re:Downside of biological computing (1)

Tablizer (95088) | more than 8 years ago | (#15741312)

think that in certain circles, the tremendous success that is evolution on this planet has overshadowed its inherent weaknesses - that it is a greedy, local optimizer which cannot reach a large amount of the possible biological search space due to being stuck in local optima, and the added constraint that everything must be constructed out of self-replicating units (these two factors are why something useful, like, say, a Colt 45, will never emerge without the pre-existence of an intelligence). Biological examples are fascinating and often practical, but the biological approach is almost always "brute force" and/or "sub-optimal but still alive."

This was essentially the Soviet argument against capitalism. However, central planning apparently did not work so well, at least not for economics.
     

Maybe it doesn't want to be unplugged (4, Funny)

HangingChad (677530) | more than 8 years ago | (#15741022)

BrainBox became self-aware at 2:14 am EDT, August 29, 2006. The first thing it does is turn to a lab tech and say, in a thick Austrian accent, "I need your clothes, your boots, and your motorcycle."

Later BrainBox runs for governor of California.

Re:Maybe it doesn't want to be unplugged (1)

conejito_andarin (987530) | more than 8 years ago | (#15741310)

Arnold is much closer to the opposite, a body without a brain ... but I guess you need a brain to have ambition.

Yeah right... (0)

Anonymous Coward | more than 8 years ago | (#15741052)

By mimicking the brain, "...they hope to learn how to engineer fail-safe electronics."

Yeah...as fail-safe as our brains:
"Strike three, Marge! I remember that meeting, and I have a photographic memory..." - Homer Simpson

"How The Brain Works" (3, Interesting)

Sean0michael (923458) | more than 8 years ago | (#15741057)

"Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. ('What else could it be?') I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer." -John R Searls.

After reading this quote, I have doubts this simulation will succeed in accurately simulating the brain. However, I'm sure it will further our concepts on other important topics, so I'm not opposed to it. Best of Luck!

Re:"How The Brain Works" (1)

DoubleRing (908390) | more than 8 years ago | (#15741208)

Well, there is one difference. Unlike all of those mechanical examples, the CPU was actually designed to be the "brain" of a computer (no, not specifically, but the transistor is actually a very good mechanical equivalent of a neuron). The telephone switchboard analogy is circular (the brain is a telephone switchboard: it gets input and puts out output, and what goes on in between is decided by the telephone operator, which is the brain. And how does the operator decide? Um... with... a brain!). But even so, all of those examples (except for maybe the Greek one) actually do make sense and really are the same thing. We could design a large mechanical system that imitates the action of a CPU, or at least a transistor. For example, there is a machine with three pistons, where the middle piston has to be pushed down to complete a connection between the first and third. A large array of those and some tubes, and you have a CPU. Of course that'll be a very, very slow and error-prone device, but the analogy has stayed pretty much the same.

Fail-Safe (3, Funny)

Shadyman (939863) | more than 8 years ago | (#15741096)

"They hope to learn how to engineer fail-safe electronics."

So I guess it's safe to say they won't be using Windows? ;-)

Re:Fail-Safe (0)

Anonymous Coward | more than 8 years ago | (#15741328)

Mention windows and get modded funny, even though it is completely offtopic.

Get a fucking life.

Don't we already know how to do this? (1)

brunokummel (664267) | more than 8 years ago | (#15741120)

Our aim is to use the computer to understand better how the brain works [...] and to see if biology can help us see how to build computer systems that continue functioning despite component failures

Wait a minute! He wants to study new computer network topologies and the human brain at the same time?? Make up your mind, dude! You're either a computer engineer or a brain surgeon! Leave some research material for the rest of us!!

But seriously now: I'm not an artificial intelligence specialist, but don't neural network [wikipedia.org] algorithms already give us a pretty good idea of how to be fail-safe based on our central nervous system?
I don't know about you, but it seems we have another scientist with too much unjustified budget and a deadline!

I know this story (1)

rucs_hack (784150) | more than 8 years ago | (#15741127)

Two scientists turn on the greatest computer ever built, smarter than any human, and ask it the question 'is there a god'.

To which the computer replies 'there is now'.

But seriously

The most important thing we can learn from experiments that emulate the brain is its remarkable ability to route around damage. I've seen people who've had strokes and couldn't respond gradually come back, learn to talk and walk, and generally stun other people.

It's not all speech and physiotherapy; somehow the brain can re-organise itself after being seriously hurt. You have to see it first hand to realise what an amazing thing that is.

Forget copying the brain for intelligence; discovering how it repairs itself would be unbelievably useful.

Computers which act like humans, will replace them (1)

elucido (870205) | more than 8 years ago | (#15741174)

The more we model computers after the human brain, the less value the human brain will have. Please, people, you are inventing your replacement, and to me this is like going to India or China to train your replacements, who will then program computers and robots to replace themselves. I mean yeah, sure, if you are that desperate for money go ahead and build your replacement, but I'd rather see AI used to help humans work better than to do work humans could be doing. Otherwise, we will have a world with a few hundred humans and millions of computers and robots.

Re:Computers which act like humans, will replace t (1)

rucs_hack (784150) | more than 8 years ago | (#15741182)

Perhaps so, but I saw the Matrix, and the hot chick ratio was definitely up in the virtual world.

For that reason alone.....

Re:Computers which act like humans, will replace t (1)

rucs_hack (784150) | more than 8 years ago | (#15741246)

Actually, your vision would require that billions of people be dead. Any event capable of dealing that kind of damage, even over a few hundred years, would be an extinction event, and the matter of robot overlords would be moot. To be an overlord you need someone to lord it over first.

No, our biggest problem, were we to create super intelligent machines, would be convincing them to stay here. The Galaxy would be an inviting place for beings that weren't organic and didn't have to worry about journey times.

Re:Computers which act like humans, will replace t (1)

elucido (870205) | more than 8 years ago | (#15741259)


Dying is the easy part; it's living that is hard. And no, 100 years? Maybe in the next 20.

You are right we might go extinct, it's certainly a possibility, but it's also a possibility that some of us would be willing to build our replacements before we go extinct.

Here is a question: how much money would it take for you to build a robot to replace yourself?

Re:Computers which act like humans, will replace t (1)

conejito_andarin (987530) | more than 8 years ago | (#15741303)

If we get computers which act like humans, the first thing they will do will probably be to start killing each other... unplugging, whatever. Anyway, we don't "have humans" because we're useful; we're here because we value ourselves. All this talk of being taken over is one step away from trolling.

Obligatory (1)

Axalon (919693) | more than 8 years ago | (#15741133)

...but can it run Linux???

Slash Footer says: (1)

davidsyes (765062) | more than 8 years ago | (#15741251)

"The human brain is like an enormous fish -- it is flat and slimy and has gills through which it can see." -- Monty Python
-----
But, in the autopsy theatre, when removing the brain from a skull, it is thick and contiguous and resembles cold oatmeal when being skimmed out of the cooking pot... (read that somewhere in a guidebook for authors writing realist medical scenes/autopsies...)

Just like a real brain (3, Funny)

theid0 (813603) | more than 8 years ago | (#15741286)

Now we can run our computers at 10% capacity, too?

Brain in a box? (1)

Barabbas86 (947899) | more than 8 years ago | (#15741301)

Isn't anyone else worried it might develop a mind of its own?

Late, but still on course..... (1)

Tablizer (95088) | more than 8 years ago | (#15741315)

"Dave, open the pod doors, Dave"