
116 comments

No (1)

For a Free Internet (1594621) | about 3 years ago | (#36683436)

Supercomputers are old news. Everybody uses smart phones now, or iPhones, duh. Shut up, you goat molester.

Oblig. Question (1)

matt_gaia (228110) | about 3 years ago | (#36683448)

But will it run Lin.... ah, nevermind.

Re:Oblig. Question (1)

ciderbrew (1860166) | about 3 years ago | (#36683516)

Just might run win.... ah, nevermind.

Re:Oblig. Question (2)

Eunuchswear (210685) | about 3 years ago | (#36683708)

Well, since he's the ICL Professor of Computer Engineering, I'd expect it to run George 3 or VME/B.

But, then again, maybe it'd run BBC Basic?

The Cloudy Concept (1)

camperslo (704715) | about 3 years ago | (#36685748)

Just might run win....

In a future OS built by vast numbers of developers, each chipping in 10 lines of code without knowing what the others wrote, having each developer's code in its own node should keep more of it running? Reserve 100,000 or so nodes for guest processes.

To use the distributed GPU, each node will output to a rooftop display, viewed by a satellite cam zoomed in to encompass the required number of nodes.
Reliability will be ensured by RAIN (a redundant array of inexpensive neighborhoods).

Re:Oblig. Question (1)

rbrausse (1319883) | about 3 years ago | (#36683692)

more like "will it run anything anytime?"

FTFpaper:

[..] we have yet to gain access to silicon [..] But we have exposed the design to extensive simulation, up to [..] models incorporating four chips and eight ARM968 processor

I like the grandness of their vision, but it would be nice to see at least a real-world version of one node (the 20-processor thingy). How big are the smallest cases needed for such a node? Is it even realistically possible to place 50,000 of those within the range limits of Ethernet?

Re:Oblig. Question (1)

black soap (2201626) | about 3 years ago | (#36684146)

So this thing is supposed to simulate some aspect of a brain, and they've so far simulated one small portion of it. I bet I could simulate a simulation^4 of this thing, with a rock.

Re:Oblig. Question (1)

jpapon (1877296) | about 3 years ago | (#36684590)

Not to be pedantic, but I think you're focusing on the wrong thing here. If you're going to build a 50,000 node supercomputer, I'm pretty sure you're not going to let yourself be limited by ethernet cables. I imagine they'd use some sort of fiber network.

The better question is why do you need to simulate a brain in real time? I mean, if you can make something magical happen with a million cores in real time, why can't you just use the plain old Internet and make it happen with a million cores in 1/100th or 1/1000th real time?

Re:Oblig. Question (1)

rbrausse (1319883) | about 3 years ago | (#36685546)

If you're going to build a 50,000 node supercomputer, I'm pretty sure you're not going to let yourself be limited by ethernet cables.

The Ethernet connection is quoted directly from the paper :) and my comment was meant more as "nice idea, but there are many unaddressed practical problems" than as a show-stopper argument.

I mean, if you can make something magical happen with a million cores in real time, why can't you just use the plain old Internet and make it happen with a million cores in 1/100th or 1/1000th real time?

Sometimes I miss the obvious; this is a *very* interesting thought. The BOINC project "be part of a brain" (working title, should be changed before release...) would attract my attention.
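
To make the fraction-of-real-time idea concrete, here is a minimal sketch of how such a BOINC-style split might look: each volunteer machine advances its own chunk of neurons one simulated millisecond, then exchanges spikes over the (slow) Internet. Everything below is hypothetical toy code, not anything from the paper; the point is only that the wall-clock cost per simulated step is set by network latency rather than compute, which is exactly why the result runs at 1/100th or less of real time.

    # Toy sketch (hypothetical, not from the paper): a BOINC-style split of a
    # neural simulation across volunteer machines connected by slow links.
    import random

    class Chunk:
        """One volunteer's share of neurons, as a crude threshold model."""
        def __init__(self, n):
            self.v = [0.0] * n                          # membrane potentials

        def step_1ms(self):
            spikes = []
            for i in range(len(self.v)):
                self.v[i] += random.uniform(0.0, 0.3)   # stand-in input current
                if self.v[i] > 1.0:                     # threshold crossed
                    self.v[i] = 0.0
                    spikes.append(i)
            return spikes

        def deliver(self, spikes):
            for i in spikes:                            # incoming spikes nudge cells
                self.v[i % len(self.v)] += 0.1

    def exchange(all_spikes):
        # Stands in for an Internet round-trip (~100 ms of latency);
        # rotating the lists crudely "routes" spikes between chunks.
        return all_spikes[1:] + all_spikes[:1]

    chunks = [Chunk(100) for _ in range(4)]
    for t in range(1000):                               # 1000 simulated ms
        outgoing = [c.step_1ms() for c in chunks]       # local and fast
        for c, incoming in zip(chunks, exchange(outgoing)):
            c.deliver(incoming)
    # If every simulated ms costs one ~100 ms round-trip, the run proceeds at
    # roughly 1/100th of biological real time -- slow, but it still runs.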

Re:Oblig. Question (1)

amRadioHed (463061) | about 3 years ago | (#36686128)

It's not obvious to me that a brain-simulating algorithm would be parallelizable enough to run effectively in chunks distributed over the Internet. Or maybe it would, but that's just not as fun as a 50,000-node computer.

Re:Oblig. Question (1)

jpapon (1877296) | about 3 years ago | (#36686250)

If an algorithm can be split between 50,000 cores, I see no reason why the distance between the cores makes any difference. Sure, it will be slower, but a core is a core, whether it's in Boston or Bangkok.

Re:Oblig. Question (1)

somersault (912633) | about 3 years ago | (#36686626)

If an algorithm can be split between 50,000 cores, it can run on one core. I see no reason why the number of cores makes any difference. Sure, it will be slower, but computation is computation, whether it's happening usefully fast on a 50,000-core behemoth or pointlessly slowly on a 50 MHz 486.

Re:Oblig. Question (1)

jpapon (1877296) | about 3 years ago | (#36686834)

Indeed. Except that there are reasons for trying to build computers that can run computations faster. There's no reason to give ARM the money for a million-core computer when you can't show that it would accomplish anything. If he could make something interesting run at 1/10000th the speed of a "real-time brain", I'd be interested in perhaps increasing that speed by a couple of orders of magnitude. Saying "let's spend government money on a million ARM processors so we can build something that might do something interesting" is just silly.

Re:Oblig. Question (1)

Verdatum (1257828) | about 3 years ago | (#36683752)

Yeah, and imagine a Beowul...ah, forget it.

Re:Oblig. Question (1)

Shoe Puppet (1557239) | about 3 years ago | (#36683944)

A Beowulf cluster of supercomputers actually sounds pretty awesome.

Re:Oblig. Question (1)

tagno25 (1518033) | about 3 years ago | (#36684242)

Isn't that just an ultracomputer made from supercomputers?

In Soviet Russia (1)

Roachie (2180772) | about 3 years ago | (#36685736)

Beowulf clusters YOU!

Re:Oblig. Question (1)

blair1q (305137) | about 3 years ago | (#36685678)

How about a supercomputer made of a million Beowulf clusters?

Re:Oblig. Question (1)

Shoe Puppet (1557239) | about 3 years ago | (#36686638)

How about a supercomputer made of a million Beowulf clusters?

Imagine a nested Beowulf cluster of those!

Re:Oblig. Question (4, Interesting)

Psion (2244) | about 3 years ago | (#36683990)

No, no, no! Given the intended purpose, the question is: Will it run me?

Re:Oblig. Question (1)

hormesis (1139599) | about 3 years ago | (#36684368)

But will it run Linus Torvalds?

Re:Oblig. Question (1)

Gripp (1969738) | about 3 years ago | (#36684584)

As in Windows ME? Let's not get too crazy....

Re:Oblig. Question (0)

Anonymous Coward | about 3 years ago | (#36686222)

Nothing can run Windows Me...

Tremendous overhead (1)

Tynin (634655) | about 3 years ago | (#36683520)

The overhead in communications has to be staggering at that many nodes. I suppose it depends on the workload, but I was under the impression that you get serious diminishing returns when you scale that large, to the point where it might be faster to have one hundred clusters of 1,000 nodes each. Can anyone speak to how they get around this?

Re:Tremendous overhead (2)

creat3d (1489345) | about 3 years ago | (#36683534)

FTFA: "To overcome this limitation, Furber - along with Andrew Brown, from the University of Southampton - suggests an innovative system architecture that takes its cues from nature. "The approach taken towards this goal is to develop a massively-parallel computer architecture based on Multi-Processor System-on-Chip technology capable of modelling a billion spiking neurons in biological real time, with biologically realistic levels of connectivity between the neurons," Furber explains in a white paper outlining the project."
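
For readers who haven't met the term: a "spiking neuron" model is event-driven, in that a cell integrates its input until a threshold is crossed, emits a spike, and resets. The quoted passage doesn't name a specific model, so the following is a minimal sketch assuming the common leaky integrate-and-fire neuron:

    # Minimal leaky integrate-and-fire (LIF) neuron -- one standard "spiking
    # neuron" model, assumed here for illustration; the paper may use another.
    def lif_run(currents, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
        """currents: input current per time step; returns spike times (s)."""
        v, spike_times = 0.0, []
        for step, i_in in enumerate(currents):
            v += dt * (-v / tau + i_in)     # leak toward rest, integrate input
            if v >= v_thresh:               # threshold crossed: spike and reset
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    print(lif_run([60.0] * 100))            # constant drive -> regular spiking

Between spikes a cell sends nothing, so a network of such neurons only has to carry small, occasional spike events rather than continuous data streams, which helps explain how "biologically realistic levels of connectivity" could be plausible on packet-switched hardware.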

Re:Tremendous overhead (1)

blair1q (305137) | about 3 years ago | (#36685698)

In other words, it's okay if it runs slow, because your brain does, and makes up for it with parallelism and fuzzy logic.

Re:Tremendous overhead (3, Interesting)

betterunixthanunix (980855) | about 3 years ago | (#36683644)

On the other hand, if you are simulating a brain, I suspect that you don't really need fast communication between any two nodes; localized subclusters should communicate quickly, with slower communication between clusters. This wouldn't work for *all* problems, but for the specific problem they mention it seems to be a workable solution.

Re:Tremendous overhead (4, Informative)

xkuehn (2202854) | about 3 years ago | (#36686034)

I am not a neuroscientist. As a grad student I do study artificial neural networks, which means that I must also have a little knowledge of neuroscience.

The brain is not a fully connected network. It is divided into many sub-networks; I think the count is estimated at about 500k, but don't quote me on that number. These sub-networks are often layered, so if you have a three-layer feed-forward sub-network with 5 cells in each layer, each of these cells has only 5 inputs, except for the 5 cells in the input layer, which connect to other sub-networks. (If there are connections from later layers back to earlier layers, the network is said to be a 'feedback' rather than a feed-forward network.) These sorts of networks can be simulated very efficiently on parallel hardware, as a cell mostly gets information from the cells that are close to it.

In short, your suspicion is entirely correct. Moreover, not only do you not need fast connections between many of your processing nodes, most of them don't need to be connected to each other at all.

This is the reason neural networks are interesting in the first place: they can be simulated on parallel hardware even when we don't know a good parallel algorithm using conventional computing techniques. (If it interests you: another name for neural networks is 'parallel distributed processing'.)

There is a hard limit on the 'order' (think of it as function complexity) of the functions that can be computed with a given network. To compute a function beyond that limit, you need a larger number of inputs to some cells, thereby increasing the order of the network but making it less parallel. Most everyday tasks are in fact of surprisingly low order. Fukushima's neocognitron can perform tasks like handwriting recognition using only highly local information.
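
To make the locality argument concrete, here is a toy version of the three-layer, 5-cells-per-layer feed-forward sub-network described above (my own illustration, with made-up weights): each cell reads only the previous layer's five outputs, so the whole sub-network can live on a single processing node, and only the input layer needs traffic from elsewhere.

    # Toy 5-5-5 feed-forward sub-network: every cell depends only on the
    # previous layer's 5 outputs, so the whole thing fits on one node.
    import math, random

    def layer(inputs, weights):
        # One layer: each cell is a squashed weighted sum of the layer below.
        return [math.tanh(sum(w * x for w, x in zip(ws, inputs)))
                for ws in weights]

    random.seed(0)
    w1 = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(5)]
    w2 = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(5)]

    external = [0.3, -0.1, 0.8, 0.0, 0.5]   # the only off-node traffic needed
    hidden = layer(external, w1)            # node-local computation
    output = layer(hidden, w2)              # node-local computation
    print(output)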

Re:Tremendous overhead (1)

LWATCDR (28044) | about 3 years ago | (#36684744)

He is using different speeds for the interconnects based on distance to get around the issue. This is not uncommon in supercomputers today, where the cores on a node can communicate much faster than the nodes can communicate over InfiniBand. It reminds me a bit of the Connection Machine, which used a hypercube-type system for its interconnects.
The problems this computer is going to try to solve are probably well suited to this type of system; after all, I am sure our brains don't have a zero-latency switched fabric. The choice of ARM is probably part patriotic, part personal history, and part sound technical reasoning. All in all I think it is an interesting idea.

I think that setting up a similar system with FPGAs in place of the CPUs might be even more interesting. And if we could get wafer-scale integration of FPGAs it could be more interesting still, but the yields would be terrible.

ARM processors? (2)

Yvan256 (722131) | about 3 years ago | (#36683550)

Shouldn't they be using BRAIN processors? /duck

ICL professor (0)

Anonymous Coward | about 3 years ago | (#36683618)

Shouldn't they be using BRAIN processors? /duck

Well, it's an ICL professor we are talking about. /goose

Re:ARM processors? (2)

cashman73 (855518) | about 3 years ago | (#36684104)

No, but it costs an ARM and a LEG!

Re:ARM processors? (1)

blair1q (305137) | about 3 years ago | (#36685728)

which makes it all the more painful to have to pay through the NOSE for it.

Not Even Close (1)

Scottingham (2036128) | about 3 years ago | (#36683572)

This won't get anywhere near simulating a brain.

Re:Not Even Close (0)

Anonymous Coward | about 3 years ago | (#36683808)

Depends ... they didn't say whose brain!

Re:Not Even Close (1)

Eunuchswear (210685) | about 3 years ago | (#36686114)

Depends ... they didn't say whose brain!

How about Abby? You know, Abby Normal?

Re:Not Even Close (1)

geekoid (135745) | about 3 years ago | (#36684264)

And you base that on... what?

Re:Not Even Close (1)

doublebackslash (702979) | about 3 years ago | (#36684764)

Well, not that I'm a mind reader, but my take on this is that we don't bloody know how those few lbs of grey matter work: how it self-organizes, the exacting details of the behaviors that drive it, etc., etc.

On top of that, what do you feed a brain? What is it going to do? They'd need to interface it with something that can challenge it, provide meaningful feedback, and so on.

Whatever it is they simulate, I'd not call it a brain, or a mind. Perhaps a highly complex neural net, and it will be running in slow motion compared to what we have gotten used to. It is, however, a wonderful tool for exploring these problems. This is part of the cycle: build a model, test the model, fix the model, test the model. It isn't a real brain, but it will be an interesting refining force for some ideas.

Re:Not Even Close (1)

Scottingham (2036128) | about 3 years ago | (#36685192)

You pretty much hit it on the nose.

Not only would a brain simulator have to simulate neurons, but also synapses, neurotransmitters, neurotransmitter receptor types, glial cell types (e.g. astrocyte computation), mRNA expression, and probably a Library of Congress' worth of stuff we don't even know about yet.

Re:Not Even Close (1)

amRadioHed (463061) | about 3 years ago | (#36686234)

Some of that may be true, but I doubt we need to simulate all of it to reproduce a brain's function. We don't need to simulate transistors to emulate a computer architecture, after all.

Re:Not Even Close (1)

blair1q (305137) | about 3 years ago | (#36685804)

They probably won't make it think, but there's a lot of environmental input and output that is mechanistic in nature, and we know how almost all of that works. It's the language and meme side we're only just scratching.

The boundary between symbology and mental process may be illuminated by how this thing behaves. That'd be worth its price.

Re:Not Even Close (1)

Sulphur (1548251) | about 3 years ago | (#36684548)

This won't get anywhere near simulating a brain.

Too true. How about trying to simulate self-awareness?

Use (say) three processes to monitor and optimize each other. These would run a virtual machine as a single self-aware entity.

Re:Not Even Close (0)

Anonymous Coward | about 3 years ago | (#36684910)

printf("I am printf!");

Re:Not Even Close (0)

Anonymous Coward | about 3 years ago | (#36685358)

I think the keyword here is 'simulate'.

Laplace's demon, Omega, or Les? (1)

G3ckoG33k (647276) | about 3 years ago | (#36683582)

Laplace's demon, Omega, or Les? From my reading, this million-node supercomputer will be Les.

Cat brains (0)

Anonymous Coward | about 3 years ago | (#36683620)

I thought they already figured out that when you try to simulate something this complex, the model you're using tends to become too complex to understand, so nothing is really gained. Hopefully they overcame this, because I would love to learn how they make it work.

Re:Cat brains (1)

Issarlk (1429361) | about 3 years ago | (#36683874)

That's ok, the simulated brain will be able to understand the model.

Build a mouse brain first. (4, Interesting)

Animats (122034) | about 3 years ago | (#36683624)

OK, a mouse brain has about 1/1000 the mass of a human brain. So build a mouse brain with 1000 ARM CPUs, which ought to fit in one rack, and demonstrate the full range of mouse behavior, from vision to motor control.

I read the paper. It's a "build it and they will come" design. There's no insight into how to get intelligence out of the thing, just faith that if we hook enough nodes together, something will happen.

About 20 years ago, I went to hear Rodney Brooks (the artificial insect guy from MIT) talk about the next project after his reactive insects. He was talking about getting to human-level AI by building Cog [mit.edu], a robot head and hand that was supposed to "act human". I asked him why, since he'd already done insect-level AI, he didn't try mouse-level AI next, since that might be within reach. He said "Because I don't want to go down in history as the man who created the world's best artificial mouse".

Cog turned out to be a dead end. It was rather embarrassing to all concerned. As one grad student said, "It just sits there. That's all it does."

Re:Build a mouse brain first. (3, Funny)

Nidi62 (1525137) | about 3 years ago | (#36683678)

As one grad student said, "It just sits there. That's all it does."

Sounds like it actually modeled most of human behavior fairly well then.

Re:Build a mouse brain first. (3, Insightful)

Flyerman (1728812) | about 3 years ago | (#36683772)

Without the need to breathe, eat, procreate, keep my brain occupied, etc... I would certainly do the same.

Re:Build a mouse brain first. (0)

Anonymous Coward | about 3 years ago | (#36683968)

So it's not stupid, it's bored?

Re:Build a mouse brain first. (0)

Anonymous Coward | about 3 years ago | (#36684380)

It's pining for the fjords.

Re:Build a mouse brain first. (1)

HogGeek (456673) | about 3 years ago | (#36684804)

At least that of a government employee...

Re:Build a mouse brain first. (2)

drolli (522659) | about 3 years ago | (#36683890)

Yeah, well. As an interested bystander (a physicist), it also seems to me that AI ran into several dead ends because it set its aims too high. For as long as I have been able to read, people have predicted that machines which would replace human translators were just around the corner.

IMHO the fundamental mistake is to define something like intelligence, on the one hand, very specifically, and then hope to reach it by general methods while taking a few shortcuts, which are not even known yet...

The general rule of thumb which physicists learned the hard way is that assuming something general and hoping that the shortcuts you need to take to get a result magically fall out once you randomly scatter enough PhD students at a subject is futile. Literally all abstract theories were created the other way round, as systematically, gradually developed answers to many, many specific problems.

A more practical note about this system: why an ARM processor? It seems to me more efficient to use the block memory in FPGAs; local logic using minimalistic state machines should suffice to simulate this, and the bandwidth should be higher in that case.

Re:Build a mouse brain first. (2)

Animats (122034) | about 3 years ago | (#36684462)

The general rule of thumb which physicists learned the hard way is that assuming something general and hoping that the shortcuts you need to take to get a result magically fall out once you randomly scatter enough PhD students at a subject is futile.

I heard that exact approach proposed more than once at Stanford in the 1980s.

AI is no longer a hardware problem. Any major data center probably has enough power to do a human brain, if we only knew how to program it. In some areas, like vision, sheer compute power has helped in a big way. Some problems can be hammered into the ground with machine learning techniques and CPU time. In other areas, we're still stuck. There's been almost no progress on "common sense reasoning" in years.

On the other hand, having lived through the "AI Winter" (from 1985 or so, when it became clear expert systems were a dead end, to the late 1990s, when Bayesian statistics and machine learning started to take over), it's nice to see real progress being made again.

It's not clear that we need a different hardware architecture for AI. There used to be much enthusiasm for neural nets, but it turns out that modern machine learning techniques do better on the few problems neural nets can do. The modern approaches are all matrix algebra, and you usually work in Matlab. Much of that stuff is parallelizable, and what you want is more like a GPU than neurons.
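
To illustrate the "all matrix algebra" point, here is a sketch in plain Python rather than the Matlab mentioned above: updating a whole layer of units is one matrix-vector product, and since each row's dot product is independent of the others, the work maps naturally onto GPU-style parallelism.

    # "All matrix algebra": a whole layer of units updates in one
    # matrix-vector product, and each row can be computed in parallel.
    import math, random

    def matvec(W, x):
        # Serial version; on a GPU every row's dot product runs concurrently.
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]

    random.seed(1)
    W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
    x = [0.5, -0.2, 0.1, 0.9]
    y = [math.tanh(v) for v in matvec(W, x)]    # in Matlab: y = tanh(W * x)
    print(y)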

Re:Build a mouse brain first. (1)

anwaya (574190) | about 3 years ago | (#36684996)

There used to be much enthusiasm for neural nets, but it turns out that modern machine learning techniques do better on the few problems neural nets can do. The modern approaches are all matrix algebra, and you usually work in Matlab. Much of that stuff is parallelizable, and what you want is more like a GPU than neurons.

Since the stated objective is to simulate a brain, I doubt that matrix algebra is going to cut it.

Re:Build a mouse brain first. (1)

blair1q (305137) | about 3 years ago | (#36685968)

Any major data center probably has enough power to do a human brain, if we only knew how to program it.

I think we're underestimating what these guys know about how to program it. If they can get the architecture to be more like a brain, it may allow them to implement accurate simulations of the neuronal processes we see in a brain, and then start to work on the processes those neurons are used for.

I highly doubt this will be the only iteration of the hardware needed to get it to think, even at a small-furry-mammal level, but getting it to do a few self-aware things could be very helpful.

Re:Build a mouse brain first. (0)

Anonymous Coward | about 3 years ago | (#36684620)

For as long as I have been able to read, people have predicted that machines which would replace human translators were just around the corner.

That day is here. Plenty of spammers and frauds use Babelfish and Google Translate instead of human translators.

Re:Build a mouse brain first. (1)

dr.newton (648217) | about 3 years ago | (#36684136)

Even if they fail to produce anything interesting, that in itself will be an interesting result.

There are likely a number of assumptions about intelligence as an emergent behaviour of non-quantum physical phenomena that could be invalidated by the failure of this experiment.

"Brains can't work according to such-and-such a principle, because if that were true, Furber would've succeeded."

Re:Build a mouse brain first. (0, Flamebait)

geekoid (135745) | about 3 years ago | (#36684438)

Great, you read the paper. Did you also read the papers and studies that led to this? Did you know they have already simulated a few neurons, and that the activity looked like the human brain's? No? STFU.

In our likely lifetime, we will have a simulated brain. This will just be one piece of a larger model.
Once we have the model, we will be able to do a lot of things. Want to simulate damage and look at long-term effects? Speed up the model and get a year's worth of data in days. Want to study something more closely? Slow it down. Want to test environmental impacts? Spin off several copies and simulate different environments.

We will learn how we think, how to enhance our perceptions, new drugs, better treatments, and most importantly, how to remove the mystical thinking that surrounds the brain/mind.

Don't confuse AI with brain simulation; they are different things. Cog is a robotic apparatus designed to let people learn real-world application and testing of AI functions. It does stuff. In building it they actually learned quite a lot, so it wasn't a 'dead end', whatever the hell that means in regard to science.
If memory serves, it did all kinds of things. Simulated sawing, for one.

Re:Build a mouse brain first. (-1)

Anonymous Coward | about 3 years ago | (#36685302)

Yeah keep talking about it like it's a great thing. It will never be used in a big brother way or abused at all. That is not scary at all. IN THE NAME OF SCIENCE!!! right? Fuck you

Re:Build a mouse brain first. (1)

Anonymous Coward | about 3 years ago | (#36684476)

Nowhere in TFA does it say they're trying to build an AI or simulate the human brain. They're aiming for a billion neurons, which would be a tiny fraction of the human brain, and they're particularly interested in proving the ability to maximize supercomputing capacity while minimizing energy use.

Re:Build a mouse brain first. (0)

Anonymous Coward | about 3 years ago | (#36685390)

Cog, a robot head and hand that was supposed to "act human".

Cog turned out to be a dead end. It was rather embarrassing to all concerned. As one grad student said, "It just sits there. That's all it does."

Since it isn't attached to an arm, its hand cannot reach its crotch, so all it can do is think about sex, not act on it.

Re:Build a mouse brain first. (1)

Rolgar (556636) | about 3 years ago | (#36685452)

It's one thing to copy a physical human behavior, or even to be able to calculate something like the traveling salesman problem. It's another thing entirely to start an intelligence that naturally absorbs sensory or educational information, identifies patterns and holes in information, turns those into questions, and then follows the correct steps to learn what is unknown. Between us and a machine that can do that, we'd still have personality and drives (survival, success, companionship, etc.), and those may not even be necessary for intelligence.

Until AI scientists realize that mimicking human senses and mobility won't automatically lead to human-like intelligence, they won't get any closer to machines that can replace human thought.

I had a philosophy instructor a decade ago. He said the difference between computers and humans is that we are more than the sum of our parts (this drew upon the idea of Platonic forms). That is, at an atomic level, we are basically a mixture of carbon and water, but we can do more than a corpse made of the same things, which would be our natural physical state. We can think new thoughts, build new tools, and do new things. A computer, at least any we can currently build, can do none of those things. Someone thinking thoughts can use a computer to make tools, or to do things more efficiently, but until a computer can accomplish any of these tasks, it won't be intelligent:

---The computer can look at the U.S. budget deficit and come up with a balanced budget that will make most citizens reasonably happy, or figure out how to structure a health care bill that will provide everyone with excellent care at an affordable price. You might think that the failure of the U.S. government to do so means that nobody can, but that is a failure of politics, not intelligence.

---When computers can think of new problems and come up with the solution, or write software to automate a task without an outside designer and programmer, then the AI field will be on the right track.

---When a computer can come into contact with a piece of art (photo, literature, comic), understand it, have it mean something that the computer can learn from, then examine something else, apply the lessons learned to this new thing, and sort among the many different things it's learned to figure out which ones are most similar.

In any of these, the computer will have gone beyond its programming to become autonomously intelligent.

Humans are open-ended. Computers, robots, and software are designed to solve specific problems, with known limits. Machines are currently made to be generalists, that is, to perform tasks that were not thought of when they were made, if somebody thinks of something new; but they don't have the ability, within themselves, to go beyond what they are explicitly programmed to do. Designing a computer that has the open-ended ability to think the way we do is something we are still nowhere close to solving. Reading the grad student's comment at the end of your post didn't surprise me; after years of watching fellow Slashdotters fail to identify this issue as the roadblock to AI progress in every Slashdot discussion on AI I've witnessed, I've come to expect this lack of comprehension.

Re:Build a mouse brain first. (1)

DarkOx (621550) | about 3 years ago | (#36686554)

---The computer can look at the U.S. budget deficit and come up with a balanced budget that will make most citizens reasonably happy, or figure out how to structure a health care bill that will provide everyone with excellent care at an affordable price. You might think that the failure of the U.S. government to do so means that nobody can, but that is a failure of politics, not intelligence.

I doubt it. A computer could do a good job of maximizing the desired outputs and minimizing the required inputs, but humans have a different sense of fairness, and how we subjectively feel about a budget has a lot to do with how happy we are with it.

---When a computer can come into contact with a piece of art (photo, literature, comic), understand it, have it mean something that the computer can learn from, then examine something else, apply the lessons learned to this new thing, and sort among the many different things it's learned to figure out which ones are most similar.

Again, I am not so sure a computer will be able to respond to our art the way we do. I am not sure another intelligent biological organism, say from space, could either. It's our art, and it reflects, on some basic level, experience we all share. A computer won't see an image with the same analog input devices we humans use; when it reads our writing it won't know what it feels like to draw breath with human lungs. Will it be able to pick out facts and chronology? Sure. Might a computer have an idea of what it's like to watch the sun rise someday? Very possibly, but it will be a very different idea than the one you and I have.

Re:Build a mouse brain first. (1)

seven of five (578993) | about 3 years ago | (#36685708)

Cog turned out to be a dead end. It was rather embarrassing to all concerned. As one grad student said, "It just sits there. That's all it does."

Well, it's just a cog in a big machine.

Re:Build a mouse brain first. (0)

Anonymous Coward | about 3 years ago | (#36687598)

"I read the paper. It's a "build it and they will come" design. There's no insight into how to get intelligence out of the thing, just faith that if we hook enough nodes together, something will happen."

Strange behaviour for the man who was the principal designer of the ARM 32-bit RISC processor. His stated aims suggest a less grand end goal than most reports do, though:

How can massively parallel computing resources accelerate our understanding of brain function?
How can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computation?

Essentially the computing power will just be a huge breadboard simulator.

Link to research paper (0)

Anonymous Coward | about 3 years ago | (#36683656)

...is conveniently behind user/password authentication. I'm interested only in the research paper.

Slower than current fastest computer (0)

Anonymous Coward | about 3 years ago | (#36683680)

The current fastest computer already uses 640k SPARC cores. Those cores are a lot faster than ARM cores, so this planned computer would be slower than the current one. The benefit will only come if it costs less and/or takes less space/power.

Re:Slower than current fastest computer (0)

Anonymous Coward | about 3 years ago | (#36683768)

640k should be enough for... nevermind.

Braindead Cray? (0)

Anonymous Coward | about 3 years ago | (#36683700)

This looks like a Cray XT with slow ARMs subbing in for current AMD processors.

Putting the cart before the horse (5, Insightful)

Okian Warrior (537106) | about 3 years ago | (#36683736)

This makes sense, how?

It's like trying to simulate a computer by wiring up 5 million transistors. Without a deep understanding of how computers work and a plan for implementation, the result will be worthless.

I see this all the time in AI strategies. Without a deep understanding of AI, a project just implements bad assumptions.

Some examples: no way to encode adjacency information, a fixed internal encoding system which cannot change (i.e., a chess program that can't learn checkers), linear input->process->output models, and so on.

Before building a system with a million processors capable of simulating the brain, how about we design an algorithm that embodies the simplest possible AI?

Re:Putting the cart before the horse (3, Insightful)

PPH (736903) | about 3 years ago | (#36683794)

Typical IT project philosophy: I'll go to the customer and try to get some requirements. The rest of you, start coding.

Re:Putting the cart before the horse (1)

EraserMouseMan (847479) | about 3 years ago | (#36684006)

Yep. Leave the decision making to us. You just write the code and don't ask questions.

Re:Putting the cart before the horse (1)

bberens (965711) | about 3 years ago | (#36685184)

Typical IT project philosophy: I'll go to the customer and try to get some [strike]requirements[/strike] money. The rest of you, start coding.

FTFY

Re:Putting the cart before the horse (1)

PPH (736903) | about 3 years ago | (#36685674)

We don't start coding until the check clears. We don't stop adding features until the money runs out. But there is no relationship with requirements.

Re:Putting the cart before the horse (1)

Anonymous Coward | about 3 years ago | (#36683872)

The problem with AI seems to be a lack of RI. You can't make an artificial anything if you don't know how the real something works.

Re:Putting the cart before the horse (0)

Anonymous Coward | about 3 years ago | (#36683986)

Exactly! They need to get some computer scientists and AI researchers involved.

Re:Putting the cart before the horse (1)

jpapon (1877296) | about 3 years ago | (#36684662)

I agree that the idea is somewhat silly, but I think it's more like "trying to simulate a computer by building a 5-million-transistor FPGA". The connections between the cores aren't hardwired, they're configurable... so you could indeed make a "brain" out of it. The real problem is that there's no point in building such a massive system to simulate a brain in real time. Simulate it at 1/1000th speed using a much less expensive system first. If THAT works, maybe we can talk.

Re:Putting the cart before the horse (0)

Anonymous Coward | about 3 years ago | (#36685934)

Perhaps simulating a human brain isn't a reasonable goal, but simulating intelligence shouldn't be too far-fetched. The brain has a lot to do besides being intelligent, and a computer doesn't have those concerns. We already have artificial neural networks doing some pretty cool things.

Re:Putting the cart before the horse (1)

blair1q (305137) | about 3 years ago | (#36685990)

Project for you: identify the bad assumptions in their model without building one and trying it out.

Okay. Go.

Re:Putting the cart before the horse (0)

Anonymous Coward | about 3 years ago | (#36686552)

TFA talks about a biologically inspired computing architecture and then suggests simulation of spiking neuron models as a possible application; it doesn't mention AI. The simulation of the brain as a biophysical entity, either at the spiking-neuron level or at other levels of description, is a useful approach to answering a range of scientific questions about brain dynamics, and it certainly doesn't require that the simulation ponder its own existence.

Whose brain? (1)

Alien Being (18488) | about 3 years ago | (#36683844)

Abby someone?

Re:Whose brain? (0)

Anonymous Coward | about 3 years ago | (#36686090)

Abby someone?

Abby Normal

I foresee ... (1)

PPH (736903) | about 3 years ago | (#36683910)

...battery capacity problems.

Re:I foresee ... (1)

EraserMouseMan (847479) | about 3 years ago | (#36684048)

They're using the Apple model. All brains will have a non-removable battery built in. When the battery dies the brain dies and you go buy another one.

The number of nodes is meaningless (1)

msgmonkey (599753) | about 3 years ago | (#36683932)

The number of nodes and the processing power per node are meaningless unless they can connect them together in a fashion similar to the brain's. Sure, they mention a "brain-like" arrangement, but the reason our brains are so sophisticated is not processing power but organisation. Brains are slow, really, really slow, but their parallelism and connectivity are beyond anything we can build at the moment, and that is why we keep failing at AI. An example is adding two numbers together: easy for a processor, yet difficult to do with neural nets.

Re:The number of nodes is meaningless (1)

FlyingGuy (989135) | about 3 years ago | (#36684174)

Correct. We don't even understand how the human brain is "cognitive", much less how information is stored or retrieved. Almost everything we know rests on assumptions. Yes, we have some ideas about which regions of the brain control certain functions, but we have not a clue as to how those things work.

Re:The number of nodes is meaningless (1)

geekoid (135745) | about 3 years ago | (#36684688)

Yes, we do have a clue, several in fact. We even have an incredibly simple model (a few neurons).

We don't need to understand something to simulate it, though it certainly helps. People in a factory can assemble a plane that works perfectly well without ever having heard of Bernoulli's principle.

This sort of work is needed so we can define 'cognitive' with more accuracy.

We know a hell of a lot more than you imply in your post. The parent clearly doesn't know what he is talking about and didn't even bother to read the story, much less the PDF.

ftp://ftp.cs.man.ac.uk/pub/amulet/papers/SBF_ACSD09.pdf [man.ac.uk]

Re:The number of nodes is meaningless (0)

Anonymous Coward | about 3 years ago | (#36686038)

Poor analogy. Assembling planes is a well-understood process; we humans created it. Assembling a brain is not. Nature did that, and the process evolved over millions of years, with no planning or foresight, in a messy fashion.

You insenSitive clod.. (-1)

Anonymous Coward | about 3 years ago | (#36684408)

Misleading Headline (1)

SpaFF (18764) | about 3 years ago | (#36684730)

According to the research paper, the goal is a million-*processor* computer, not a million-*node* computer. Each node described in the paper comprises 20 ARM processors, so it would technically be a 50,000-node computer.

Re:Misleading Headline (1)

blair1q (305137) | about 3 years ago | (#36686042)

Are those single- or multi-core ARM units? (does ARM even do multi-core units?)

And really, if you count all the processor units in a graphics chip, there are probably some computers that could count several million individual processors in their architecture right now.

Re:Misleading Headline (1)

1729 (581437) | about 3 years ago | (#36686158)

The headline is useless. Proposing a million-core computer isn't news, since there's a 1.6-million-core computer about to be deployed [wikipedia.org]. The headline should reflect what they're planning to do with this machine.

overly ambitious? (0)

Anonymous Coward | about 3 years ago | (#36685030)

Would seem to be a rather expensive crapshoot for a piece of hardware that has the potential of doing absolutely nothing.

Re:overly ambitious? (0)

Anonymous Coward | about 3 years ago | (#36685236)

Would seem to be a rather expensive crapshoot for a piece of hardware that has the potential of doing absolutely nothing.

And how is this different from the expense of keeping your brain alive?

Maybe you're right. AI is a complete waste of time and energy. We shouldn't even try. For that matter, what's the point of any research? We know enough already. We should just sit on our asses and try nothing, because the brilliant minds here on /. think that AI can never be achieved, and that makes it a fact.

But (-1)

Anonymous Coward | about 3 years ago | (#36686062)

Can it run Crysis on max settings?

armchair scientists (0)

Anonymous Coward | about 3 years ago | (#36686760)

I love all the armchair scientists on Slashdot. Why don't you stick to writing your configuration files and bash scripts?

And once again quantity is confused with quality (1)

gweihir (88907) | about 3 years ago | (#36687004)

The problem in simulating a brain is not computing power. It is software. This is a worthless publicity stunt.
