Slashdot: News for Nerds

MIT Creates Chip to Model Synapses

Unknown Lamer posted more than 2 years ago | from the man-is-obsolete dept.

Hardware | 220 comments

MrSeb writes with this excerpt from an Extreme Tech article: "With 400 transistors and standard CMOS manufacturing techniques, a group of MIT researchers have created the first computer chip that mimics the analog, ion-based communication in a synapse between two neurons. Scientists and engineers have tried to fashion brain-like neural networks before, but transistor-transistor logic is fundamentally digital — and the brain is completely analog. Neurons do not suddenly flip from '0' to '1' — they can occupy an almost-infinite scale of analog, in-between values. You can approximate the analog function of synapses by using fuzzy logic (and by ladling on more processors), but that approach only goes so far. MIT's chip is dedicated to modeling every biological caveat in a single synapse. 'We now have a way to capture each and every ionic process that's going on in a neuron,' says Chi-Sang Poon, an MIT researcher who worked on the project. The next step? Scaling up the number of synapses and building specific parts of the brain, such as our visual processing or motor control systems. The long-term goal would be to provide bionic components that augment or replace parts of the human physiology, perhaps in blind or crippled people — and, of course, artificial intelligence. With current state-of-the-art technology it takes hours or days to simulate a simple brain circuit. With MIT's brain chip, the simulation is faster than the biological system itself."

The Interface will be a problem. (4, Insightful)

Robert Zenz (1680268) | more than 2 years ago | (#38071660)

The problem is not providing such components, nor getting them to work like the original, nor getting them into your head. The real problem I see is interfacing with the rest of the brain.

Because, let's face it, that's something every coder knows: interfacing with, working on, and supporting legacy systems just sucks.

Re:The Interface will be a problem. (3, Insightful)

Eternauta3k (680157) | more than 2 years ago | (#38071682)

Not just with the brain, but also with itself. I heard the brain is ridiculously well interconnected.

Re:The Interface will be a problem. (1)

Anonymous Coward | more than 2 years ago | (#38071870)

"ridiculously well interconnected"
I've met some people for whom this is truly not the case.

Besides, why interface with the brain, why not just replace it?

Re:The Interface will be a problem. (1)

madmayr (1969930) | more than 2 years ago | (#38072268)

Besides, why interface with the brain, why not just replace it?

because of cyberbrain sclerosis, duh!

Re:The Interface will be a problem. (0)

Anonymous Coward | more than 2 years ago | (#38072526)

Besides, why interface with the brain, why not just replace it?

Because you want to transfer the personality to the new brain. Well, OK, for some people you'd probably rather not. :-)

Well it's obvious (5, Funny)

The Creator (4611) | more than 2 years ago | (#38071684)

Due to their incompatibility with newer systems, meat bags are now obsolete.

Re:Well it's obvious (3, Funny)

mikael_j (106439) | more than 2 years ago | (#38071710)

I'm sure someone will build an interface for it, and then there will be an open source driver within days.

If not that then let's at least hope our robotic overlords have it in their perfectly synchronized hearts to backport some of the major features...

Re:Well it's obvious (0, Funny)

Anonymous Coward | more than 2 years ago | (#38072098)

I'm sure someone will build an interface for it, and then there will be an open source driver within days.

If not that then let's at least hope our robotic overlords have it in their perfectly synchronized hearts to backport some of the major features...

Will it run Linux?

Re:Well it's obvious (0)

mikael_j (106439) | more than 2 years ago | (#38072380)

That's a good question, but a more pressing one is whether the software will be GPL 3, and whether talking to other people will be considered distributing it. Because then you might end up having to open source all your thoughts.

We are the Borg, this program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by...

Re:The Interface will be a problem. (2)

somersault (912633) | more than 2 years ago | (#38071730)

I think getting them to "work like the original" is the problem, actually. That part covers interfacing all by itself. The brain is very highly interconnected in 3D, and we don't have great 3D chip fabrication yet.

Re:The Interface will be a problem. (2)

Smallpond (221300) | more than 2 years ago | (#38072402)

You don't need 3D chip fabrication to do 3D interconnects. The Connection Machine processors were interconnected in 8 dimensions, IIRC. Each node had 8 connections to its neighbors in a hypercube.

I have my doubts (3, Insightful)

Anonymous Coward | more than 2 years ago | (#38071744)

get them to work like the original

Is this really something that we could do in the foreseeable future? My understanding is that the brain programs itself (or we program it, if you like) during the first years of our lives (5 to 7) for the most part. An empty new 'brain part' would act just like some parts of the brain act after a stroke, I suspect, meaning that it'll take years and years to (re)train it.

Similarly, children who grew up with animals alone, with little or no interaction with other humans (there have been some cases), are never able to learn to speak fluently, because that part of the brain never fully develops (i.e. is never programmed).

AFAIK we don't know enough about how the brain works to pre-program such components, and they would need to be strongly tuned to the destination brain, otherwise they wouldn't work very well, or at all. We know about the lower-level stuff (neurons, synapses) and some things about the higher level (regions and general functions), but not much in between (though I'm not a specialist).

Even so, I can see some medical uses for this, for people with disabilities. Though nothing like what you see in 'Ghost in the Shell'.

Re:I have my doubts (2)

Ceriel Nosforit (682174) | more than 2 years ago | (#38071942)

Creating an artificial human brain is too ethically loaded to even be considered in university research. They are more likely to try to get it to fly a flight simulator, since that's what someone did with a rat brain; being able to compare results would make for interesting data.

Slashdot did however already welcome the flying rat overlords.

Re:I have my doubts (1)

inasity_rules (1110095) | more than 2 years ago | (#38071984)

Ethically loaded? How? I don't see how the brain would be suffering. Or are they worried about Skynet?

Re:I have my doubts (5, Interesting)

jpapon (1877296) | more than 2 years ago | (#38072106)

You don't see how it's ethically loaded? Really?

Would the artificial brain have rights? If you wiped its artificial neurons, would it be murder? If you gave it control of a physical robot arm and it hurt someone, how and to what extent could you "punish" it? The ethical questions are virtually endless when you start to play "god". I would think that would be obvious.

Re:I have my doubts (1)

inasity_rules (1110095) | more than 2 years ago | (#38072160)

Ethically controversial more than loaded. It is your creation, why should you not have the right to wipe it?

Re:I have my doubts (4, Insightful)

adam.dorsey (957024) | more than 2 years ago | (#38072224)

My (hypothetical) baby is my (and my fiancee's) creation, why should I not have the right to "wipe" it?

Re:I have my doubts (1)

inasity_rules (1110095) | more than 2 years ago | (#38072322)

A hypothetical baby is created through normal (hopefully) natural biological process. Any AI is created through application of intelligence. Thus I don't find the analogy sufficiently good to base a decision on. It is unethical to kill the baby, but that does not imply it is unethical to wipe the AI. Though, why would you?

Re:I have my doubts (3, Interesting)

amck (34780) | more than 2 years ago | (#38072702)

How about: does it suffer?
Does creating a "human" inside a device, where it can presumably sense but has no limbs or autonomy, constitute torture? Can you turn it off?

Why is "being natural" a defining answer to these questions?

Re:I have my doubts (1)

inasity_rules (1110095) | more than 2 years ago | (#38072954)

How does it know what it is missing?

No human is owned; that would be a violation of rights. But a computer, for example (no matter how complex), is hardware which can be owned. Its software is merely a state of that machine, which can also be owned. If I want to change the state of my software on my hardware, how can you say I'm ethically wrong? An AI I make is mine in a way no other intelligence can be. I cannot own a person, neither can I own the state of their brain. But the state of the conditions inside hardware that I own, I do. I defined (at least initially) those conditions.

Re:I have my doubts (1)

jpapon (1877296) | more than 2 years ago | (#38072712)

A hypothetical baby is created through normal (hopefully) natural biological process. Any AI is created through application of intelligence.

I would argue that "application of intelligence" is a "natural biological process". We (and our brains) are, after all, creations of nature.

I would also point out that a baby created through in vitro fertilization is also, by definition, an "application of intelligence", and yet should also be treated as equal to a "natural" baby.

Re:I have my doubts (4, Insightful)

Dr_Barnowl (709838) | more than 2 years ago | (#38072850)

Application of intelligence is a natural biological process too, since the mind is running in a biochemical substrate (until the AI is working...)

You're arguably more responsible for the AI than you are for the baby - it's possible to produce a baby without understanding what you are doing. You don't make an AI accidentally on a drunken prom date.

The baby isn't even sentient until it reaches a certain level of development.

So why do we value the child over the computer? Because we are biased towards humans? I'm not saying this is wrong, just saying it's not defensible from a purely intellectual point of view: if they are both sentient and have an imperative to survive, defending the destruction of the artificial sentience because it's easy and free of consequence is in the same ballpark as shooting indigenous tribesmen because "they're only darkies".

Re:I have my doubts (5, Funny)

Anonymous Coward | more than 2 years ago | (#38072346)

You'll wipe your baby way more often than you'd want to.

Re:I have my doubts (1)

morgauxo (974071) | more than 2 years ago | (#38072844)

Many people believe you do have that right.

Re:I have my doubts (0)

Anonymous Coward | more than 2 years ago | (#38072266)

Because that would be evil, you sociopathic megalomaniac. If it is a self-aware person with feelings, then whether it is artificially created or not, it/she/he has rights, or at least the same rights as everyone else. Your argument would allow parents to murder their children without punishment. I understand, however, that this argument is popular with some religions as a justification for the obviously evil acts of their gods in their legends.

Re:I have my doubts (-1)

Anonymous Coward | more than 2 years ago | (#38072244)

I would think that would be obvious

An artificial brain has no more rights than my cell phone and you're a moron. That's the obvious answer.

Re:I have my doubts (1)

19thNervousBreakdown (768619) | more than 2 years ago | (#38072408)

Aside from my belief that there's nothing supernatural about the human brain and that consciousness is just an artifact of being sufficiently complex to host a theory of mind, what would you do to someone who thought they had the right to kill you at any time, for any reason?

Re:I have my doubts (1)

jpapon (1877296) | more than 2 years ago | (#38072726)

I know the "I think that would be obvious" was snarky, but come on.

Re:I have my doubts (1)

jpapon (1877296) | more than 2 years ago | (#38072090)

The trick is that while it may take a while to train it, you only have to do it once. Then you can simply copy it as many times as you want.

Also, training would be significantly faster than in an actual human brain, since the connections are faster and you can simply train it using recorded data as input. No need to have it "actually" go through the teaching scenarios.

Re:The Interface will be a problem. (5, Interesting)

ledow (319597) | more than 2 years ago | (#38071764)

I think the REAL problem is that even the smallest brains have several billion neurons, each having tens of thousands of connections to other neurons. This chip simulates ONE such connection.

That's a PCB-routing problem that you REALLY don't want, and way outside the scale of anything that we build (it's like every computer on the planet having 10,000 direct Ethernet connections to nearby computers, with no switches, hubs, routers, etc., in order to simulate something approaching a small mouse's brain: not only a cabling and routing nightmare, but where the hell do you plug it all in?). Not only that, but a real brain learns by breaking and creating connections all the time.

The analog nature of the neuron isn't really the key to making "artificial brains" - the problem is simply scale. We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting (and certainly nothing that we could actually analyse any better than the brain of any other species). If we did, it would be unmanageably unprogrammable and unpredictable. If it did anything interesting on its own, we'd never understand how or why it did that.

And I think the claim that they know EVERYTHING about how a neuron works (at least one part of it) is optimistic at best.

Re:The Interface will be a problem. (5, Insightful)

Narcocide (102829) | more than 2 years ago | (#38071792)

I agree with everything about this statement except the word "never."

Never is a pretty bold word. It puts you in a pretty gutsy mindset; one that isn't entirely productive to rational scientific analysis. The word "never" is pretty commonly seen in the company of "famous last words."

Re:The Interface will be a problem. (1)

FBeans (2201802) | more than 2 years ago | (#38072136)

I agree with everything about this statement except the word "never."

Never is a pretty bold word. It puts you in a pretty gutsy mindset; one that isn't entirely productive to rational scientific analysis. The word "never" is pretty commonly seen in the company of "famous last words."

I've never even heard anyone's famous last words.

Re:The Interface will be a problem. (1)

Anonymous Coward | more than 2 years ago | (#38071932)

Linking computers by Ethernet is hard because humans have pre-existing walls and furniture to route around. If you design and fab the entire system yourself you can wire the connections in an orderly fashion, e.g. via a convenient wall plug in your house.

Of course fabbing and connecting billions of neurons (which is more complex than merely billions of transistors) is no easy feat. Usually large scale processes are distributed. Biological stuff does it by being self-replicating. Although it's not completely inconceivable that we could come up with nano-scale units capable of growing electrical connections to other units.

Re:The Interface will be a problem. (2)

six025 (714064) | more than 2 years ago | (#38072006)

If we did, it would be unmanageably unprogrammable and unpredictable.

Should we just get it over with now, and call her EVE? ;-)

Peace,
Andy.

Never??? (2, Insightful)

mangu (126918) | more than 2 years ago | (#38072028)

The analog nature of the neuron isn't really the key to making "artificial brains" - the problem is simply scale.

Agreed.

We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting

Shall we cue here all the "never" predictions of the last century? By the year 1900 there were lots of experts predicting we would never have flying machines; by 1950, experts were predicting the whole world would never need more than a dozen computers.

Moore's law, or should we say Moore's phenomenon, has been showing how much electronic devices scale in the long run.

Re:The Interface will be a problem. (2)

vipw (228) | more than 2 years ago | (#38072038)

They state that it takes 400 transistors. Intel fabs a 2 billion transistor chip. I don't think that really means that 5 million of these artificial neurons could be put on one die, but I'm pretty sure that they aren't planning to put millions of chips onto a board.

With wafer-scale integration, and some long range signal propagation to emulate 3d, there's reason to think that fairly large systems can be emulated.

Re:The Interface will be a problem. (1)

Anonymous Coward | more than 2 years ago | (#38072060)

The only reason this scale problem exists is because the brain is 3D while current chips are 2D. There is no routing problem in 3D. IBM and Intel have been developing 3D chip technology for a few years. 3D chips are still in research mode, because it turns out we've been extraordinarily good at scaling the 2D approach. There are solvable manufacturing problems yet to be addressed, and there are heat issues with the 3D approach. They are exploring 3D because they foresee a day where 2D won't cut it anymore, so they have incentive to solve the 3D mfg. problem.

Heat is another story. Remapping current chip designs into a 3D topology would increase the heat density, and require some new heat abatement technology. IBM has been exploring liquid cooling via micro tubes that are routed through the structure. (Anyone think blood vessels?) So this too is solvable, but still very much in the early research stage. But this is all predicated on mapping existing chip designs into 3D. A neural net design like MIT's is not at all like existing chip designs, so I don't know what the heat characteristics are. They could be hotter, or just as likely they could be cooler.

Re:The Interface will be a problem. (1)

Kjella (173770) | more than 2 years ago | (#38072238)

But we don't have to build our side of the system like that; we only need enough neuron simulators on the surface, run them through an A/D circuit, do it our way, then D/A it back into the brain. I'm pretty sure neurons, like everything else, have a resolution limit.

Re:The Interface will be a problem. (5, Interesting)

ultranova (717540) | more than 2 years ago | (#38072264)

That's a PCB-routing problem that you REALLY don't want, and way outside the scale of anything that we build (it's like every computer on the planet having 10,000 direct Ethernet connections to nearby computers, with no switches, hubs, routers, etc., in order to simulate something approaching a small mouse's brain: not only a cabling and routing nightmare, but where the hell do you plug it all in?). Not only that, but a real brain learns by breaking and creating connections all the time.

A single neuron-neuron connection has very low bandwidth, in effect transferring a single number (activation level) a few hundred times a second. Even if timing is important, you can simply accompany the level with a timestamp. A single 100 Mbps Ethernet connection is easily able to handle all those 10,000 connections.

Also, most of those 10,000 connections are to nearby neurons, presumably because long-distance communication involves the same latency and energy penalties in the brain as it does anywhere else. There are efficient methods to auto-cluster a P2P network so as to minimize the total length of connections (Freenet does this, for example); so you could, in theory, run a distributed neural simulator even on standard Internet technology. In fact, I suspect that it could be possible to achieve human-level or higher artificial intelligence with existing computer power using this method right now.

So, who wants to start HAL@Home ?-)
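
A back-of-the-envelope check of the bandwidth claim above; the per-connection update rate and payload size below are illustrative assumptions, not figures from the comment:

<ecode>
# Rough bandwidth for one neuron's 10,000 inbound connections,
# assuming event-driven updates at a mean rate of 100/s and an
# 8-byte payload (4-byte activation level + 4-byte timestamp).
connections = 10_000
updates_per_sec = 100        # assumed mean rate; peak firing is higher
bytes_per_update = 4 + 4     # activation level + timestamp

mbps = connections * updates_per_sec * bytes_per_update * 8 / 1e6
print(f"{mbps:.0f} Mbps")    # 64 Mbps -- fits in a 100 Mbps link
</ecode>

At a sustained few hundred updates per second the same arithmetic overruns 100 Mbps, so the claim holds for event-driven traffic at modest mean rates rather than for worst-case firing.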

Re:The Interface will be a problem. (0)

Anonymous Coward | more than 2 years ago | (#38072594)

Different creatures have different sized brains measured either by Weight in grams [washington.edu] or Number of neurons in the brain [wikipedia.org]

But as an animal increases in size, neuronal density decreases. Below a certain body size, neurons get larger, the so-called "fat neurons".

Re:The Interface will be a problem. (3, Interesting)

Anonymous Coward | more than 2 years ago | (#38072824)

I think the REAL problem is that even the smallest brains have several billion neurons, with each having 10's of thousands of connections to other neurons. This chip simulates ONE such connection.

I give it 30 years at most.

Let's say several = 5 billion, times 5,000 (10 thousand divided by two so as to not double-count both ends) = 25,000 billion connections. Let's assume 400 transistors per connection as in this study; that comes out to 10,000,000 billion transistors, not counting the possibility of time-multiplexed busses as mentioned in a comment below (as biological neurons are slow compared to transistors).

According to wikipedia [slashdot.org] a Xilinx Virtex 7 FPGA (More similar to an array of neurons than a CPU) has 6.8 billion transistors. This means we need 1,470,588 times more transistors. That's less than 2^20.5, or 20.5 doublings, which according to Moore's law would be about 30 years or so.

So even without multiprocessing, simplification of this design, and other simple improvements, this will be possible to put on some sort of chip in 30 years' time.

Never say never. 2042 will be the year of the brain in the desktop! :)
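
Restating the comment's arithmetic as a quick sanity-check script, using its own assumed figures (5 billion neurons, 10,000 connections halved to avoid double-counting, 400 transistors per connection, a 6.8-billion-transistor FPGA):

<ecode>
import math

neurons = 5e9                     # "several billion", taken as 5
conns_per_neuron = 10_000 / 2     # halved to avoid double-counting
transistors_per_conn = 400        # as in the MIT chip
fpga_transistors = 6.8e9          # Xilinx Virtex-7 figure cited above

connections = neurons * conns_per_neuron            # 2.5e13
transistors = connections * transistors_per_conn    # 1e16
factor = transistors / fpga_transistors             # ~1,470,588
doublings = math.log2(factor)                       # ~20.5
print(f"{factor:,.0f}x more transistors, {doublings:.1f} doublings,")
print(f"~{doublings * 1.5:.0f} years at one doubling per 18 months")
</ecode>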

Re:The Interface will be a problem. (3, Informative)

agrif (960591) | more than 2 years ago | (#38072166)

"Your species is obsolete," the ghost comments smugly. "Inappropriately adapted to artificial realities. Poorly optimized circuitry, excessively complex low-bandwidth sensors, messily global variables..."

Accelerando [jus.uio.no] , by Charles Stross

Re:The Interface will be a problem. (1)

Robert Zenz (1680268) | more than 2 years ago | (#38072286)

Those stories sound very interesting, thank you very much.

Re:The Interface will be a problem. (1)

Hentes (2461350) | more than 2 years ago | (#38072218)

The neural network is much more than just the brain. Repairing the nerves of paralysed people is a much easier task than interfacing with the brain.

Re:The Interface will be a problem. (1)

morgauxo (974071) | more than 2 years ago | (#38072810)

True. There are already lots of people working on that, though!

Was not expecting that.. (3, Funny)

somersault (912633) | more than 2 years ago | (#38071664)

With MIT's brain chip, the simulation is faster than the biological system itself.

Uh-oh.

Re:Was not expecting that.. (2)

Narcocide (102829) | more than 2 years ago | (#38071718)

Seems like you were thinking just what I was thinking: Great, just enough time to enjoy a decade or two of flying cars built and designed entirely by machines before the machines realize we're all bad drivers and must be permanently restrained for our own well-being.

That just means it's wrong (1)

Anonymous Coward | more than 2 years ago | (#38071852)

Relax - that just means that either they implemented a discrete time dynamical system (i.e., the chip's simulation has a discrete clock, which the brain doesn't), or the time constants on their system (ion channel flows, etc) are wrong, or both.

Lots of people have accurately simulated single neurons with hardware components before (see https://secure.wikimedia.org/wikipedia/en/wiki/Hodgkin%E2%80%93Huxley_model [wikimedia.org] for an actual analogue circuit from the 50s) but figuring out the a priori wiring & weights connecting the neurons is far more difficult & they've done nothing to address this. Just consider the fact that our individual neurons aren't that different from other mammals' neurons, but we're much smarter than other mammals. You can't just wire together a million of these chips and have it do anything interesting. "It's the network, stupid."
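
For readers who don't follow the link: the Hodgkin-Huxley model mentioned above describes the membrane voltage with four coupled ODEs (voltage plus three ion-channel gates). Below is a minimal sketch with the classic squid-axon constants; the crude forward-Euler integrator and step size are illustrative only, not production-quality numerics:

<ecode>
import math

# Classic Hodgkin-Huxley constants (modern -65 mV resting convention).
C = 1.0                              # membrane capacitance, uF/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3       # peak conductances, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4     # reversal potentials, mV

# Voltage-dependent opening/closing rates of the m, h, n gates.
def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

V, m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting state
dt, I_ext = 0.01, 10.0               # step (ms), injected uA/cm^2
for _ in range(int(50 / dt)):        # 50 ms of constant drive
    INa = gNa * m**3 * h * (V - ENa) # sodium current
    IK = gK * n**4 * (V - EK)        # potassium current
    IL = gL * (V - EL)               # leak current
    V += dt * (I_ext - INa - IK - IL) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
print(f"V after 50 ms: {V:.1f} mV")  # the neuron spikes repeatedly
</ecode>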

But what about efficiency? (4, Interesting)

Pegasus (13291) | more than 2 years ago | (#38071878)

It may be faster, but what about performance per watt? You know, the whole brain does everything on only 40-50 watts. How does this MIT product compare to brains in this area?

Re:But what about efficiency? (0)

Anonymous Coward | more than 2 years ago | (#38072188)

Where is that info from? 50 watts sounds like too much.
I mean, you would have to eat 1,000 kcal per day just to keep your brain working.
Google says a human needs ~1,000 cal per day for 80 kg of weight.

Re:But what about efficiency? (1)

Anonymous Coward | more than 2 years ago | (#38072378)

a food "Calorie" is a kcal.

Re:But what about efficiency? (2)

chocapix (1595613) | more than 2 years ago | (#38072462)

Wow, I never thought about it that way. A human brain consumes less power than a modern CPU (say, 100W).

Plus, the brain does its own glucose burning and that's counted in the 50W. To compare fairly, you'd need to take into account the PSU efficiency, electrical grid losses and power plant efficiency in the CPU power. If we say 50% efficiency overall, that means 200W for the CPU.

Just wow.
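
The watts-to-Calories conversion behind this sub-thread is easy to check (20 W is the figure most often cited for the human brain; 40-50 W is the high end mentioned above):

<ecode>
# 1 food Calorie = 1 kcal = 4184 J; a day is 86,400 seconds.
def watts_to_kcal_per_day(watts):
    return watts * 86_400 / 4184

for w in (20, 50):
    print(f"{w} W -> {watts_to_kcal_per_day(w):.0f} kcal/day")
# 20 W -> 413 kcal/day; 50 W -> ~1030 kcal/day
</ecode>

So at 50 W the brain alone would account for roughly half of a typical ~2,000 kcal/day diet, which is why the 20 W figure is the one usually quoted.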

the 1960s called (0)

Anonymous Coward | more than 2 years ago | (#38071668)

They want their Neuristor back.

Re:the 1960s called (1)

dtmos (447842) | more than 2 years ago | (#38071900)

Well, at least they can keep their nuvistors [wikipedia.org] -- although it would be an interesting (if expensive) technical challenge to redo the project with the last gasp of vacuum tube technology.

As it is written so shall it be done. (3, Interesting)

Narcocide (102829) | more than 2 years ago | (#38071670)

Have you ever stood and stared at it, marveled at its beauty, its genius? Billions of people just living out their lives, oblivious. Did you know that the first Matrix was designed to be a perfect human world, where none suffered, where everyone would be happy? It was a disaster. No one would accept the program, entire crops were lost. Some believed we lacked the programming language to describe your perfect world, but I believe that, as a species, human beings define their reality through misery and suffering. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about. Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You've had your time. The future is our world, Morpheus. The future is our time.

-- Agent Smith (The Matrix)

Next feat: improvised grenade fishing? (1)

ysth (1368415) | more than 2 years ago | (#38071678)

"Simon, what day is this?" "God, I don't know. My damned chip is f***ed up beyond repair. It's turning to snot inside my head. It's driving me crazy."

Better long-term goal: replace brains with these (2, Interesting)

Anonymous Coward | more than 2 years ago | (#38071720)

"You cannot step twice in the same river". That means we are constantly changing. Over sufficiently long period, old person has all but died while new one has gradually taken over.

Using that reasoning, we could replace biological brains bit-by-bit over long period of time, without killing the subject. In the end, if successful, the person has no more biological brains left. He'd have all digital brains. Backups, copies, performance tuning, higher clock rates, more memory, more devices, ... and immortality.

Re:Better long-term goal: replace brains with thes (1)

Narcocide (102829) | more than 2 years ago | (#38071734)

I like that long term goal. I'd also like the nerve tissue in the rest of my body replaced with this stuff too. Wired reflexes FTW!

Re:Better long-term goal: replace brains with thes (1)

Anonymous Coward | more than 2 years ago | (#38072214)

Where we're going, we don't need breathable oxygen and metabolized chemical energy anymore. Time isn't much of a problem anymore either. Maybe we can finally get off of this ol' rock and start doing space exploration for real.

No need to force any meatbags to change their ways either, they can continue to rest on their laurels while we get just enough to get started and then leave them behind. The only exception is that we might occasionally keep in touch for old time's sake and let them know if we find anything interesting, but that's about it.

Re:Better long-term goal: replace brains with thes (1)

mr_gorkajuice (1347383) | more than 2 years ago | (#38072300)

Makes me wonder. I assume the immortal machine would think it WAS the subject. But would the subject think he was the machine?

Still a long way to go ... (0)

Anonymous Coward | more than 2 years ago | (#38071728)

"..the silicon chip can simulate the activity of a single brain synapse..."

From a synapse you have to go to a neuron, and from there to a neural circuit, with new problems at every stage.
And it is also a question whether they really know about "each and every ionic process".

Re:Still a long way to go ... (1)

Niedi (1335165) | more than 2 years ago | (#38071814)

Plus: A "brain synapse" That's like saying this models a "computer chip". Great. Which type? There's a huge load of different types out there, each working a bit different. And imagine their joy if someone finds a new involved protein X which renders this chip's design inaccurate and thereby incomplete. It's a nice toy but nothing more atm...

Plasticity (1)

whoisisis (1225718) | more than 2 years ago | (#38071736)

I wonder if this chip can do plasticity and learning just as a real brain does.

It's one thing to hardwire a neural network, another to mimic the brain.
The brain constantly rewires itself in different ways to learn.

Re:Plasticity (1)

whoisisis (1225718) | more than 2 years ago | (#38071746)

or rather how it does so

My next startup idea (3, Funny)

simoncpu was here (1601629) | more than 2 years ago | (#38071740)

1. Build a farm of brain chips
2. Expose the brain chips via an API
3. Build a cloud service for brain chips
4. Market as Brain Power on Demand(tm)!
5. ???
6. Profit!!

I love it! I'm totally on board with this! (1)

Narcocide (102829) | more than 2 years ago | (#38071770)

Ok, so first we build this giant super-fast, high-capacity infant brain and then expose it via a documented API to the public internet so that we can tap into the practically infinite supply of free, unmoderated, user-generated content in order to train it to learn and interact with humans.

There is *no* way that during this process it could go insane and decide to try to destroy itself and/or the world. Completely safe. Yep. Make sure we tell that to the shareholders.

Re:I love it! I'm totally on board with this! (1)

admiralranga (2007120) | more than 2 years ago | (#38071874)

or produce a replica of /b/?

Re:I love it! I'm totally on board with this! (1)

Narcocide (102829) | more than 2 years ago | (#38071890)

Are you suggesting that is in some way fundamentally different from attempting to self-annihilate?

Re:My next startup idea (1)

FBeans (2201802) | more than 2 years ago | (#38072146)

Some sort of Brain Cloud? Are you sure your brain is not being clouded? Also, the phrase "brain chips" makes me think of food, a strange cannibal food.

and how... (0)

Anonymous Coward | more than 2 years ago | (#38071826)

...do they model free will?

Re:and how... (1)

Narcocide (102829) | more than 2 years ago | (#38071844)

That is going to be the fun part. They get it for free with the chip design whether they ask for it or not.

So to model analogue neurons... (1)

Viol8 (599362) | more than 2 years ago | (#38071830)

... they used analogue electronics.

And the radical new technology is what? This was done in the 1960s. Sure, there may be a bit more accuracy and finesse with this version, but really, cutting edge this is not.

Re:So to model analogue neurons... (5, Insightful)

Anonymous Coward | more than 2 years ago | (#38072046)

I think you have to credit MIT researchers for knowing better where the cutting edge is than you, and the writers of the article for including the 1960s in this paragraph:

'Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says.'

More than just spiking; from my AI lectures years ago I recall that the McCulloch-Pitts neuron model was a spiking model (excitatory inputs, inhibitory inputs, thresholds), etc.
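
For contrast with the ion-channel-level modeling in TFA, the McCulloch-Pitts unit mentioned above is nothing more than a thresholded sum with veto-style inhibition; a minimal sketch:

<ecode>
# McCulloch-Pitts neuron: binary inputs, a fixed threshold, and
# "absolute" inhibition (any active inhibitory input vetoes firing).
def mcculloch_pitts(excitatory, inhibitory, threshold):
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# With threshold 2 and two excitatory inputs, the unit is an AND gate.
print(mcculloch_pitts([1, 1], [], threshold=2))   # 1
print(mcculloch_pitts([1, 0], [], threshold=2))   # 0
print(mcculloch_pitts([1, 1], [1], threshold=2))  # 0 (inhibited)
</ecode>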

Re:So to model analogue neurons... (0)

Anonymous Coward | more than 2 years ago | (#38072158)

A decade ago (when I was just getting my bachelor's) I considered doing grad research in neural nets, and was disappointed that all the activity I saw was digital. Since then, I haven't kept up with the field, but unfortunately they may in fact finally be getting back to using analog. But you shouldn't be too surprised by this.

Digital systems can be copied, perfectly. Which is great for running repeatable experiments. An analog system can never be perfectly replicated. Even if we had a full analog model of the brain, we might be stuck right back where we are now with real ones. You'd have to teach it everything from the ground up, like a child. And then you'd probably never actually understand it. You also wouldn't be able to simply copy the intelligence into a new brain. This makes it hard to do research with, so I can see why analog has been avoided.

Theoretically, with enough bits of resolution a digital brain could model an analog one. In practice, however, you could probably never build an accurate digital neural model of the brain. We simply don't know how much precision is actually required, and it may require so much precision that the circuitry becomes impossible to build without running too slowly to be useful.

some math... (1)

Anonymous Coward | more than 2 years ago | (#38071848)

1 synapse = 400 transistors.
Current GPUs = 3,000,000,000 transistors.
So we can currently achieve 7,500,000 synapses, assuming that there is absolutely no overhead in having more synapses work together...
And those are just the synapses, without the neurons...

Human brain: 10^11 (one hundred billion) neurons, and each one has on average 7,000 synaptic connections to other neurons...

We're still far off, but interesting work anyway.
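
The same numbers, with the gap made explicit (all figures are the comment's own):

<ecode>
transistors_per_synapse = 400
gpu_transistors = 3e9
synapses_on_gpu = gpu_transistors / transistors_per_synapse  # 7.5e6

brain_synapses = 1e11 * 7000                                 # 7e14
print(f"synapses per GPU die: {synapses_on_gpu:,.0f}")
print(f"shortfall vs. a brain: {brain_synapses / synapses_on_gpu:,.0f}x")
# ~93 million GPUs' worth of transistors, ignoring all overhead
</ecode>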

Plus (1)

maroberts (15852) | more than 2 years ago | (#38072034)

Current GPU systems consume large wattages, so if you have brain augmentation in this way you are likely to need your head connected to a PSU, connected in turn to a mains supply.

Re:Plus (1)

vipw (228) | more than 2 years ago | (#38072068)

They consume a lot of power because they have a fast clock rate. A neural net shouldn't need a high clock rate.

It was 2011... (2)

bassdrop (693216) | more than 2 years ago | (#38071850)

...and war was beginning.

Transistors are not digital (3, Interesting)

Ceriel Nosforit (682174) | more than 2 years ago | (#38071906)

The way I remember it is that a transistor stops a much larger current from passing through until a signal is put on the gate in the middle. Then the current that passes through is in proportion to the signal strength.

The circuit becomes digital when we decide that only very small and very large voltages count as 0s and 1s.

Re:Transistors are not digital (2)

multatuli (740516) | more than 2 years ago | (#38072114)

Transistors are analog. Transistor-transistor logic is digital.

Re:Transistors are not digital (1)

Anonymous Coward | more than 2 years ago | (#38072664)

Transistors are analog. Transistor-transistor logic is digital.

Neurons are analog. What about neuron-neuron-logic?

Re:Transistors are not digital (2)

multatuli (740516) | more than 2 years ago | (#38072720)

Transistors are analog. Transistor-transistor logic is digital.

Neurons are analog. What about neuron-neuron-logic?

Some seem to expect it to be quantum-mechanical, or even completely non-deterministic.
Some are living up to that expectation ;-)

Re:Transistors are not digital (0)

Anonymous Coward | more than 2 years ago | (#38072288)

This, this so much.
Computers aren't inherently digital; we just make them digital through circuit design.

If we wanted to, we could quite easily use ternary computing, or quaternary.
All we'd need to do is redesign the circuits to deal with the differing voltages, and produce better-than-average power supplies that aren't made of cheese strings and sticks, since voltages across boards vary between that 0 and 1 quite often, but not enough to break the circuits most of the time. (unless faulty hardware or rogue radiation decides to attack your stuff)
With binary backwards compatibility mode, you'd not even need to modify the programs that run on it, besides the main kernel, maybe.

Laziness and cheapness prevents this, though.
Binary computers are terrible, just terrible.
Balanced Ternary is a much better system, and it isn't that much harder to pull off than binary is.
The math and logic are pretty easy too. As easy as it gets when dealing with alternate-base math.
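
For anyone who hasn't met it, balanced ternary uses the digit set {-1, 0, +1}, which makes negation trivial (flip every digit). A hypothetical converter sketch:

<ecode>
# Integer -> balanced ternary string, digits written as '+', '0', '-'.
def to_balanced_ternary(n: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:                   # a 2 digit becomes -1 plus a carry
            r, n = -1, n + 3
        digits.append("+0-"[1 - r])  # +1 -> '+', 0 -> '0', -1 -> '-'
        n //= 3
    return "".join(reversed(digits))

print(to_balanced_ternary(5))    # "+--" : 9 - 3 - 1
print(to_balanced_ternary(-5))   # "-++" : the digit-flipped negation
</ecode>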

Re:Transistors are not digital (0)

Anonymous Coward | more than 2 years ago | (#38072508)

Isn't it also because operating a transistor at saturation or cut-off for most of the time wastes far less energy than when it is operated in its linear region?

Computer Chip (1)

Lord Lode (1290856) | more than 2 years ago | (#38071914)

Very cool and all, but why does the summary call this a "computer chip"?

Re:Computer Chip (1)

amoeba1911 (978485) | more than 2 years ago | (#38072190)

Perhaps because it's still a bunch of transistors.

Re:Computer Chip (1)

Lord Lode (1290856) | more than 2 years ago | (#38072488)

But it's only 400 of them! Even the Intel 4004 had more.

Why try to replicate a synapse? (1)

Anonymous Coward | more than 2 years ago | (#38071998)

Sounds a bit cargo-cult to me. "This is the way a human brain works, therefore if we want AI, we have to replicate that." Nope. No reason a purely digital AI couldn't exist, without having to simulate the analogue nature of the human brain first. Without that legacy stuff, it'd probably be hundreds of times more efficient, too.

400 'tors a bit overkill (0)

Anonymous Coward | more than 2 years ago | (#38072016)

Another university already came up with a way to put "analogue transistors" or "solid state valves" or whatever you'd like to call them, on a chip die. So why the need for 400 binary ones? Because MIT didn't come up with the idea? Is that it?

Putting many together (2)

rust627 (1072296) | more than 2 years ago | (#38072240)

So we build a synapse, and then link it to more, and more, and before you know it we have a "brain".

We could call it a Positronic Brain; sounds catchy, and marketable.

And we really should enforce some rules to prevent a 'skynet' occurrence, not too many rules though,
I'm sure we could distill the logic down to three simple rules ............

They better have insurance... (1)

LoRdTAW (99712) | more than 2 years ago | (#38072246)

A hundred bucks says a woman and her son, a black dude, and a juice head will break into the lab, blow it up, and throw the chip into a vat of molten metal.

All that work for nothing.

Synapse firing event is not pure analog (3, Insightful)

wdef (1050680) | more than 2 years ago | (#38072248)

I might be out of date, but: the event itself requires the neuron's action potential to reach a threshold, then the synapse fires. It either fires or it does not. On or off. But the process of reaching the firing threshold is analog, since the physical geometry of the neuron and of its afferent neural feeds (inputs) determines at what point the neuron will fire. Neurotransmitter quantities in the synapse are also modifiable, though, e.g. by drugs and natural up/down-regulation of receptors, enzymes, or re-uptake inhibition. So a neuron is an analog computer having an output with various amplitudes of on/off.
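
That "analog integration, all-or-nothing output" picture is exactly what the textbook leaky integrate-and-fire abstraction captures; a toy sketch with arbitrary illustrative parameters:

<ecode>
# Leaky integrate-and-fire: the membrane voltage integrates input
# continuously (analog), but the output is a discrete spike whenever
# the firing threshold is crossed. All parameters are illustrative.
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # mV
tau, dt = 10.0, 0.1                # membrane time constant, step (ms)
drive = 20.0                       # constant input, mV-equivalent

v, spike_times = v_rest, []
for step in range(1000):                   # 100 ms
    v += dt / tau * (v_rest - v + drive)   # analog leaky integration
    if v >= v_thresh:                      # threshold reached:
        spike_times.append(step * dt)      # ...emit an on/off event
        v = v_reset                        # ...and reset the membrane
print(f"{len(spike_times)} spikes in 100 ms")  # ~7 with these numbers
</ecode>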

Re:Synapse firing event is not pure analog (0)

Anonymous Coward | more than 2 years ago | (#38072782)

This is sort of what I was going to post. Everything I've read about neurons suggests that they work on a mostly "digital" principle and do not really propagate analog information. The analog side of things is that overall brain function changes as chemistry affects the sensitivity of neurons.

Still a long, LONG way to go... (3, Insightful)

Pollux (102520) | more than 2 years ago | (#38072330)

MIT’s chip — all 400 transistors (pictured below) — is dedicated to modeling every biological caveat in a single synapse. “We now have a way to capture each and every ionic process that’s going on in a neuron,” says Chi-Sang Poon, an MIT researcher who worked on the project.

Just because you finally can recognize the letters of the alphabet doesn't mean you can speak the language.

Re:Still a long, LONG way to go... (0)

Anonymous Coward | more than 2 years ago | (#38072442)

What kind of chips are they? Can it be used in the usb drive [hkcolordigital.com] ?

Re:Still a long, LONG way to go... (5, Informative)

leptogenesis (1305483) | more than 2 years ago | (#38072644)

Mod parent up. The linked article (and the MIT press release) are misleading. The closest thing I can find to a peer-reviewed publication by Poon has an abstract here (no, I can't find anything through the official EMBC channels -- what a disgustingly closed conference):

https://embs.papercept.net/conferences/scripts/abstract.pl?ConfID=14&Number=2328 [papercept.net]

And there's some background on Poon's goals here:

http://www.frontiersin.org/Journal/FullText.aspx?ART_DOI=10.3389/fnins.2011.00108&name=neuromorphic_engineering [frontiersin.org]

The goals seem to me to be about studying specific theories about information propagation across synapses as well as studying brain-computer interfaces. They never mention building a model of the entire visual system or any serious artificial intelligence. We have only the vaguest theories about how the visual system works beyond V1, and essentially no idea what properties of the synapse are important to make it happen.

About two years ago, while I was still doing my undergraduate research in neural modeling, I recall that the particular theory they're talking about--spike-timing dependent plasticity [wikipedia.org] --was quite controversial. It might have been simply an artifact of the way the NMDA receptor worked. Nobody seemed to have any cohesive theory for why it would lead to intelligence or learning, other than vague references to the well-established Hebb rule.

Nor is it anything new. Remember this [slashdot.org] story from ages ago? Remember how well that returned on its promises of creating a real brain? That was spike-timing dependent plasticity as well, and unsurprisingly it never did anything resembling thought.

Slashdot, can we please stop posting stories about people trying to make brains on chips and post stories about real AI research?

Analog = Unique (1)

Warwick Allison (209388) | more than 2 years ago | (#38072578)

If it's analog, then its behaviour is unique per chip, and so anything you build from them will be subtly unique. So "software" would behave differently depending on the unit it was running on. You thought 4 or 5 versions of Linux were tricky to support...

Re:Analog = Unique (1)

multatuli (740516) | more than 2 years ago | (#38072746)

If it's analog, then its behaviour is unique per chip, and so anything you build from them will be subtly unique. So "software" would behave differently depending on the unit it was running on. You thought 4 or 5 versions of Linux were tricky to support...

That explains why I always seem to hear things on my radio that other people don't ;-)

Data smoking pot (1)

morgauxo (974071) | more than 2 years ago | (#38072866)

For those who RTFA (or at least clicked): anybody else see his eyes in that picture and wonder if Data had been smoking pot?