Scientists to Build 'Brain Box'
lee1 writes "Researchers at the University of Manchester are constructing
a 'brain box' using large numbers of microprocessors to model the way networks of neurons interact. They hope to learn how to engineer fail-safe electronics. Professor Steve Furber, of the university's school of computer science, hopes that biology will teach them how to build computer systems. He said: 'Our brains keep working despite frequent failures of their component neurons, and this "fault-tolerant" characteristic is of great interest to engineers who wish to make computers more reliable. [...] Our aim is to use the computer to understand better how the brain works [...] and to see if biology can help us see how to build computer systems that continue functioning despite component failures.'"
Fuber? (Score:2, Funny)
Re:Fuber? (Score:1)
Teramac, by Hewlett-Packard (Score:3, Informative)
This sort of thing [highly parallelizable, highly fault-tolerant computing] was done more than a decade ago, at Hewlett-Packard, in the old Teramac group.
Background here [hp.com], here [kinetic.org], here [byu.edu], etc.
Testing for fault tolerance (Score:5, Funny)
Re:Testing for fault tolerance (Score:1)
Re:Testing for fault tolerance (Score:2)
Could these brains be taught followance?
Fault tolerance with fuzzy logic already done (Score:3, Informative)
The system was designed around a set of fuzzy computing boards. When one of the boards was removed, the control degraded but still continued to function. Of course, if certain critical boards (e.g. ones directly attached to the outputs) were removed, the system would fail immediately.
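A toy sketch of that degradation behaviour (hypothetical code, not the actual board system described above): several redundant boards each contribute a control estimate, removing some of them merely coarsens the fused output, but losing the single critical output stage is fatal.

```python
def control_output(board_estimates, output_stage_ok=True):
    """Fuse whatever board estimates remain into one control value."""
    if not output_stage_ok or not board_estimates:
        # the critical path is gone: immediate failure, as described above
        raise RuntimeError("critical path lost: no control output")
    return sum(board_estimates) / len(board_estimates)

full = [0.52, 0.49, 0.51, 0.48]   # four boards agree closely
degraded = full[:2]               # two boards removed: coarser estimate

print(control_output(full))       # ~0.5, nominal
print(control_output(degraded))   # degraded but still functioning
```

The point of the sketch is only the shape of the failure mode: averaging degrades gracefully with board count, while the guard clause models the single point of failure.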
Re:Testing for fault tolerance (Score:3, Interesting)
That's quite a funny post, but it brings me to an (IMHO) interesting point: given a virtual "brain" capable of performing a certain task, can specifically targeted "damage" to the system result in creativity? Many of the most creative minds in our history got their inspiration in part from mind-altering chemicals...
Re:Testing for fault tolerance (Score:5, Interesting)
Re:Testing for fault tolerance (Score:2)
From the article linked by the GP:
The mechanism you describe is being used in Thales's system. Someone still has to train the critic networks, though.
Re:Testing for fault tolerance (Score:2, Funny)
Re:Testing for fault tolerance (Score:2)
Pray to god that they fail. (Score:2)
Re:Pray to god that they fail. (Score:2)
Otherwise, losing out to the main brain would be all in vain.
(OK, that was Baaaaaahhhhdddd)
Re:Pray to god that they fail. (Score:3, Interesting)
The parent post to this one really hit on a profound reality. As we render human beings obsolete, as we are progressively doing, we face a horrid reality.
The real issue of the 21st century is: will we build a world where human beings serve the industrialists' machines, or will we build a world where the industrialists' machines serve human beings? All jokes about serving humans come to mind. This decision will be made. If it is made by ignorance, human beings will serve the industrialists' machines. If it is
Re:Testing for fault tolerance (Score:2)
BTW, what would be better: Series or Parallel links for the gray matter?
How would the "juices be kept flowing" in such an arrangement?
How would FLOPS of gray matter be calculated in a meaningful (err, umm, "thoughtful") way?
What happens if a dyslexic or autistic brain is linked in that collective?
What happens if a murderous or anorexic or bulimic brain or two are in the mix?
Copper top or zinc?
Plasma links or liquid crystalline?
Two Separate Goals (Score:3, Insightful)
Re:Two Separate Goals (Score:1)
Re:Two Separate Goals (Score:2, Interesting)
Man, meet your replacement. (Score:2)
Don't tell me humans will be needed to program and repair them; these self-healing robots are being invented precisely to prevent that.
I think it's exciting (Score:2)
Years from now when computers are 1000x faster and are our overlords, we can look back at this experiment... and say thanks a lot assholes! I kid, I kid.
That's a good thing. (Score:2)
http://religiousfreaks.com/ [religiousfreaks.com]
Be realistic (Score:2)
Hardware? (Score:3, Insightful)
Re:Hardware? (Score:3, Interesting)
More realistically, perhaps they have already simulated some stuff and now want to scale it up drastically in size and speed. There isn't really enough detail in the article to tell how custom this is going to be. It could be anything from a Sun Niagara or a Connection Machine up to some custom designed parallel FPGA monster.
Re:Hardware? (Score:2)
Re:Hardware? (Score:2)
Re:Hardware? (Score:2)
void foo(char *p)
{
    if (p == NULL)
        *p = '1';   /* deliberate write through a null pointer */
}

int main()
{
    foo(NULL);      /* crashes here */
    return 0;
}
Re:Hardware? (Score:2)
Right language? (Score:2)
Borland Delphi, for instance, offers a compiler switch to activate bounds checking or "range checking" as the Delphi online help calls it. Activating range checking will catch the first of your examples, and there is a convenient checkbox in the project settings to do it.
Admittedly, there is no mechanism in Delphi that will catch your second example. But then again, most problems can be solved without pointers. In
Re:Right language? (Score:2)
Re:Hardware? (Score:2)
Re:Hardware? (Score:2)
Re:Hardware? (Score:2)
Redundent department of redundancy. (Score:1, Redundant)
I believe it's called redundancy. Seriously.
Re:Redundent department of redundancy. (Score:2)
I didn't RTFA but "educated sense" suggests to me the aim is to tolerate multiple faults without having large changes in capacity or wasting resources.
Re:Redundent department of redundancy. (Score:2)
resources unimportant for mission critical systems (Score:2)
Whether horsepower is going unused is not important for mission-critical systems. If you're running an Oracle database that manages data that the life of your company (or soldiers in the field) depends on, the thing that matters is if you lose data integrity. You'll assign a dozen redundant servers if it minimizes the chance that a hardware failure will mean downtime.
In military applications, you want to maintain operation of a computer through extreme duress. If a projectile punctures the hull of a tank
Re:Redundent department of redundancy. (Score:3, Interesting)
Re:Redundent department of redundancy. (Score:2)
Computer: "Oops."
They'll find out when they stop using Windows (Score:3, Insightful)
There are a bunch of tools and specs out there for building a fully (multiply) redundant system. You can have more than one server in any type of configuration, sharing any type of resource, and when one fails, the other takes over, fully redundant.
Re:They'll find out when they stop using Windows (Score:1)
My Brainbox (Score:2, Interesting)
Re:My Brainbox (Score:2)
50% Interesting
50% Overrated
Maybe I'm giving the TrollMod brain too much credit.
Hmm... (Score:2)
Re:Hmm... (Score:2)
We had tried all kinds of rules-based and curve/data-fitting algorithms to calibrate the camera's colorspaces between input targets and output devices. Then we just let it feed back between the targets and devices, storing de/convolution kernels when the data converged stably. We talked about calibrating to all kinds of sensors/media, but we moved on
Re:Listen up Idiot... (Score:2)
Re:Listen up Idiot... (Score:2)
I think posting as AC has made you kinda obtuse.
Brain Box? (Score:1)
Re:Brain Box? (Score:1)
Re:Brain Box? (Score:2)
And I am ashamed.
it took long enough (Score:3, Insightful)
Re:it took long enough (Score:2)
# of neurons needs to equal # of cpu's (Score:3, Interesting)
We have made big advances in this area, but having even a crude prototype of Lt. Data (Star Trek: The Next Generation) is still quite a ways off.
However, I expect that we will eventually solve this problem. I just hope that we do it in my lifetime; that would be way cool! (Work fast, I'm 49!)
Re:# of neurons needs to equal # of cpu's (Score:2)
Re:# of neurons needs to equal # of cpu's (Score:2)
It takes about 15-20 years to train a human to the point of usefulness. The first couple of those years are spent cooing and drooling. An effective synthesis of the human brain in hardware would be expected to take about as long and about as much effort to train before becoming useful. Sure, at that point you could duplicate it relatively easily - but who is willing to spend years making baby noises into a microphone in the hope that *this* time
Re:# of neurons needs to equal # of cpu's (Score:2)
The interrupting part is the most complicated aspect. It requires having all possible options available at all times and ready to
Re:# of neurons needs to equal # of cpu's (Score:2)
*slumps* Yeah, I know. I have no life.
Skynet (Score:1)
Re:Skynet (Score:2)
Brainiac (I'm gonna fuckin' KILL the board of directors for putting my brain around these ex-plants....)
Inter-neuron Communication (Score:2, Funny)
Re:Inter-neuron Communication (Score:1)
Nope, hence the phrase "In one ear and out the other" aka packet loss...
Re:Inter-neuron Communication (Score:2)
Yes. And NetBEUI when high on dope.
Reliability... (Score:1)
While I agree that the human brain has many computing virtues to teach us (lateral/creative thought, massively parallel processing, etc.), I have never counted "reliability" among them, so it is an interesting concept.
OTOH, the failure rate at the end of the manufacturing process for CPUs is probably higher than the defect rate in human brains... err, I hope.
Re:Reliability... (Score:2, Interesting)
"Works"? I think they mean "Behaves" (Score:2)
We know it's not a binary digital stored program computer.
They should have some success modeling how the brain behaves, though.
Maybe then they can contribute to the real question of how the mind works.
(Hey, wait a minute - this isn't those two white mice again, izzit?)
their "brain box" (Score:1)
Academia dupe? (Score:4, Informative)
Since when is this a new idea? I heard about people doing stuff like this years ago.
http://neuralnets.web.cern.ch/NeuralNets/nnwInHepHard.html [web.cern.ch]
http://www.particle.kth.se/~lindsey/elba2html/section3_5.html [particle.kth.se]
http://www.cs.ucl.ac.uk/staff/D.Gorse/research/pRAM.html [ucl.ac.uk]
http://www.kcl.ac.uk/neuronet/about/roadmap/hardware.html [kcl.ac.uk]
Re:Academia dupe? (Score:2)
Can't do it with microprocessors (Score:2)
Downside of biological computing (Score:5, Insightful)
One of the main drawbacks in human engineering is the need for certainty, which often prohibits the use of high-efficiency stochastic algorithms (especially for things like mesh communication) in conservative industries such as the US defense industry. This is a significant problem in other areas as well: many biologically inspired algorithms have properties that we cannot, so far, completely explain, so for engineering purposes they are treated as "black boxes" with many unknowns.
I think that in certain circles, the tremendous success of evolution on this planet has overshadowed its inherent weaknesses: it is a greedy, local optimizer that cannot reach much of the possible biological search space because it gets stuck in local optima, with the added constraint that everything must be constructed out of self-replicating units (these two factors are why something useful, like, say, a Colt 45, will never emerge without the pre-existence of an intelligence). Biological examples are fascinating and often practical, but the biological approach is almost always "brute force" and/or "sub-optimal but still alive."
I think biologically inspired algorithms will continue to gain prominence, but in my estimation there will be harsh limits on how far guarantees of performance from empirical tests and symbolic analysis will actually hold.
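The "greedy, local optimizer" weakness is easy to demonstrate with a toy hill climber (a hypothetical sketch, with an invented two-peak objective): it converges to whichever peak is nearest its starting point and never escapes to the global optimum.

```python
import math

def f(x):
    # local peak near x = -1 (height ~1), global peak near x = 2 (height ~3)
    return math.exp(-(x + 1) ** 2) + 3 * math.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.01, iters=10_000):
    """Greedy local search: move to a higher neighbour until none exists."""
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break  # no uphill neighbour: stuck on the nearest peak
    return x

print(hill_climb(-1.5))  # ends near -1, the inferior local optimum
print(hill_climb(1.0))   # ends near 2, the global optimum
```

This is exactly the failure mode described above: the outcome depends entirely on the starting point, and no amount of extra iterations helps once the climber is on the wrong hill.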
Re:Downside of biological computing (Score:4, Interesting)
What these researchers are probably aiming for is a large-scale MP system that can readily handle massive failures. Who would find this useful? Any enterprise software company, such as Google, which has thousands upon thousands of machines in its cluster. The ability to have a large network of simple (cheap) processors and a network that can readily withstand a massive multi-point failure is quite attractive to real-world companies.
Both software and hardware are beginning to go down this route through the evolution of their industries. On the software front, asynchronous message-oriented systems work beautifully in terms of reliability, scalability, maintainability, and service integration. In the coming years, you'll notice that most major web services will be running on a service-oriented architecture (SOA). On the other side of the pond, raw CPU performance is getting harder to squeeze out. Power issues are limiting frequency scaling (due to current leakage), we are hitting the limits of our ability to feasibly extract more ILP that's worth the extra effort, and the market drivers for these types of processors are slowly diminishing. Instead, multiple physical and logical core CPUs are gaining ground, will be cheaper to develop and manufacture, and fit future market demands.
It will be nice to hear how this research goes, since it will hopefully uncover potential problems and solutions that will be useful in the coming decades.
Re:Downside of biological computing (Score:2)
Re:Downside of biological computing (Score:2)
Currently, when you want redundancy, you have to build 100% replicas, do 100% redundant computation, then have heartbeat monitors which take down failing nodes and notify a human to handle the failure
Re:Downside of biological computing (Score:2)
Re:Downside of biological computing (Score:2, Interesting)
I understand what you are saying. However, there are variations that can avoid this problem to some extent. For example, genetic programming [wikipedia.org], rather than genetic algorithms [wikipedia.org]. The main difference is that where genetic algorithms are used directly to find a solution, genetic programming is used to crea
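For readers unfamiliar with the genetic-algorithm side of that distinction, here is a minimal toy GA (entirely hypothetical code, not from any system mentioned in this thread): candidate solutions are bit-strings evolved directly toward a fitness target, here the classic OneMax problem of maximizing the number of 1 bits.

```python
import random

random.seed(1)  # deterministic demo run

def fitness(bits):
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_bits)          # occasional point mutation
            child[i] ^= random.random() < 0.2
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # near the optimum of 20
```

Genetic programming differs in that the evolved individuals are programs (expression trees) rather than direct solution encodings like the bit-strings above, but the selection/crossover/mutation loop has the same shape.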
Maybe it doesn't want to be unplugged (Score:5, Funny)
BrainBox became self-aware at 2:14 am EDT, August 29, 2006. The first thing it did was turn to a lab tech and say, "I need your clothes, your boots, and your motorcycle," in a thick Austrian accent.
Later BrainBox runs for governor of California.
"How The Brain Works" (Score:4, Interesting)
After reading this quote, I have doubts this simulation will succeed in accurately simulating the brain. However, I'm sure it will further our concepts on other important topics, so I'm not opposed to it. Best of Luck!
Re:"How The Brain Works" (Score:2)
Fail-Safe (Score:3, Funny)
So I guess it's safe to say they won't be using Windows?
Slash Footer says: (Score:2)
-----
But, in the autopsy theatre, when removing the brain from a skull, it is thick and contiguous and resembles cold oatmeal being skimmed out of the cooking pot... (read that somewhere in a guidebook for authors writing realistic medical scenes/autopsies...)
Just like a real brain (Score:3, Funny)
Re:Just like a real brain (Score:2)
some amusing calculations (Score:5, Interesting)
number of neurons in the brain: 100 billion
http://hypertextbook.com/facts/2002/AniciaNdabaha
transistor count per CPU: ~300 million
http://www.anandtech.com/cpuchipsets/showdoc.aspx
average synaptic connections per neuron: 7000
http://en.wikipedia.org/wiki/Neuron [wikipedia.org]
total number of synapses: 100 to 500 trillion
since a 'calculation' for one artificial neuron mostly involves a summation of weights, we can view one total step as 2 X the number of synapses we wish to analyze. or 200 - 1000 trillion calculations for one step. by step i mean summing all inputs and pushing the result to an output for each neuron.
http://en.wikipedia.org/wiki/Artificial_neuron [wikipedia.org]
fastest computer in the world FLOPs: 280 trillion
http://en.wikipedia.org/wiki/Blue_Gene [wikipedia.org]
pentium 4 FLOPs: 40 GFLOP
using the fastest computer in the world 1 step would only take around 0.7 - 3.6 seconds, not counting storing all of that information.
http://en.wikipedia.org/wiki/Blue_Gene [wikipedia.org]
so how fast do we think? well i couldn't find anything on this so lets get a quick estimate. the average neuron is .1m in length .1 / c = 3.3x10^-10 or 333 picoseconds. now lets add in some delay for the chemicals in the neurons to do their thing, this is probably much slower than the electrical impulse, so lets say 3.3 nanoseconds.
so assuming our computers could network instantly, and store the data used instantly, we would need 0.2-1 billion Blue Gene supercomputers to simulate the human brain in real time. or if we are using pentium 4s we would need 1.5-7.5 trillion pentium 4s.
man thats a lot of cpus.
number of computers in the world: ~300 million
http://www.aneki.com/computers.html [aneki.com]
guess at average FLOPs per computer: 40 GFLOPs
total FLOPs of worlds personal computers: 300 million x 40 GFLOPs = 12 EFLOPs
time to calculate one brain step if all computers in the world were networked: ~600 trillion / 12 EFLOPs = ~50 microseconds
using moores law, when will a single computer be fast enough to simulate the human brain in real time?
200-1000 trillion calculations per step = ~600 trillion every 3.3ns = 1.8x10^23 or ~180 zettaFLOPs
1.8x10^23 FLOPs / 40 GFLOPs = 2^n, n = ~42
42*18mo = ~63 years based on personal computer technology
or ~44 years based on supercomputer technology (280 TFLOPs, n = ~29)
of course a real neural network will contain highly parallel processing and using a specific chip design we will probably be able to simulate a brain much sooner, perhaps on the order of 10-20 years.
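Sketching the same estimate in code (same assumed inputs as above: ~600 trillion calculations per step, one step per 3.3 ns, a 40 GFLOPs Pentium 4, a 280 TFLOPs Blue Gene, one Moore's-law doubling every 18 months) is a quick way to keep the exponents straight:

```python
import math

OPS_PER_STEP   = 600e12   # midpoint of the 200-1000 trillion range above
STEP_TIME      = 3.3e-9   # assumed "neuron clock", seconds
P4_FLOPS       = 40e9
BLUEGENE_FLOPS = 280e12

required = OPS_PER_STEP / STEP_TIME            # FLOPs needed for real time
blue_genes_needed = required / BLUEGENE_FLOPS  # ~650 million machines
p4s_needed = required / P4_FLOPS               # ~4.5 trillion machines

# doublings needed, at 1.5 years per doubling
years_pc    = math.log2(required / P4_FLOPS) * 1.5        # ~63 years
years_super = math.log2(required / BLUEGENE_FLOPS) * 1.5  # ~44 years

print(f"required: {required:.2e} FLOPs")
print(f"Blue Genes: {blue_genes_needed:.2e}, P4s: {p4s_needed:.2e}")
print(f"wait: {years_pc:.0f} y (PC tech), {years_super:.0f} y (supercomputer)")
```

All of the constants are the post's own assumptions, so the outputs inherit every caveat of the back-of-the-envelope reasoning (including the 3.3 ns step time that the replies below dispute).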
Re:some amusing calculations (Score:3, Informative)
so how fast do we think? well i couldn't find anything on this so lets get a quick estimate. the average neuron is .1m in length .1 / c = 3.3x10^-10 or 333 picoseconds. now lets add in some delay for the chemicals in the neurons to do their thing, this is probably much slower than the electrical impulse, so lets say 3.3 nanoseconds.
This is a drastic underestimate of the computational timescale for neurons in the brain. The error on the back of your envelope is that chemical diffusion is a fundamental p
Re:some amusing calculations (Score:3, Informative)
Re:some amusing calculations (Score:3, Informative)
When I was studying experimental psychology, I calculated the brain's effective "clock speed" as about one tick per 10ms, or 100Hz. Within a factor of two. Of course the brain is immensely parallel and every nerve cell is like a separate "core", so it's still very powerful. What slows it down is using chemical diffusion to pass signals across junctions (synapses). Back in the day, some of our potential protozoan ancestors already had light receptors and emitters - if only they'd u
Re:some amusing calculations (Score:3, Interesting)
For simple addition tasks, an "operation" can take seconds.
For calculating the kinetics of arm motion needed to juggle 5 balls, there aren't even any "operations" to clock the speed of. It's just a continuous dynamical system.
Re:some amusing calculations (Score:3, Informative)
A synapse is not a FLOP. Dendrites are computational devices in themselves, and a synaptic activation at one point along the dendritic branch will affect how a synaptic activation elsewhere affects the soma. Also, when neurons fire, the spikes propagate backwards down the dendrite to allow the synapses to learn. Simulating this to even a crude degree of accuracy requires a compartmental model of the den
Re:some amusing calculations (Score:3, Funny)
Here I am, brain the size of a planet, and they ask me to simulate a human mind. Call that job satisfaction, 'cause I don't...
Re:some amusing calculations (Score:2)
Knowing *WHAT* to model about a neuron's behavior is the important part in order to be able to figure out the OP's calculation with any degree of accuracy.
So, how is this different from the Internet? (Score:2)
Researcher is outside his field of expertise (Score:4, Insightful)
We just accept that many (most?) brain functions don't "keep working", fortunately without worrying about it too much.
Has he never heard of hot-swappable parts? (Score:2)
Thinking Machines (Score:2)
The company was kept alive by DARPA contracts
Computers which act like humans, will replace them (Score:2)
Re:Computers which act like humans, will replace t (Score:2)
Dying is the easy part; it's living that is hard. And no, not 100 years; maybe in the next 20.
You are right, we might go extinct; it's certainly a possibility. But it's also a possibility that some of us would be willing to build our replacements before we go extinct.
Here is a question: how much money would it take for you to build a robot to replace yourself?
Re:Computers which act like humans, will replace t (Score:2)
Prove it. (Score:2)
Re:Don't we already know how to do this? (Score:2)
Not really. Neural networks are geared more towards machine learning. Given a sample set of data to be recognized and the correct answer they can evolve a network that will, given an unknown sample, return the correct answer with high probability. The loss of a neuron in the network would be disruptive and would impact the model
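A minimal sketch of that "given samples and correct answers, learn to classify unknown samples" behaviour (hypothetical toy code, not a model of the hardware in the article): a single perceptron trained on the OR function, which is linearly separable and therefore learnable exactly.

```python
def train(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward each misclassified sample."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# OR function: training data and the answers to learn from
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 1]
w, b = train(X, Y)
print([predict(w, b, x) for x in X])  # [0, 1, 1, 1]
```

With only one "neuron" there is of course no fault tolerance at all; the comment's point is that graceful degradation under neuron loss is a property of large trained networks, not something a small model like this exhibits.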