
IBM Creates Custom-Made Brain-Like Chip

samzenpus posted about 2 months ago | from the it's-alive dept.


An anonymous reader writes: In a paper published Thursday in Science, IBM describes its creation of a brain-like chip called TrueNorth. It has "4,096 processor cores, and it mimics one million human neurons and 256 million synapses, two of the fundamental biological building blocks that make up the human brain." What's the difference between TrueNorth and traditional processing units? Apparently, TrueNorth encodes data "as patterns of pulses". Already, TrueNorth has a proven 80% accuracy in image recognition, with power efficiency that beats traditional processing units. Don't look for brain-like chips in the open market any time soon, though. TrueNorth is part of a DARPA research effort that may or may not translate into significant changes in commercial chip architecture and function.
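
For readers wondering what encoding data "as patterns of pulses" looks like in practice, here is a minimal sketch of a leaky integrate-and-fire spiking neuron in Python. It is purely illustrative; the leak, threshold, and reset behavior are assumptions for the example, not details taken from the TrueNorth paper.

    # Hypothetical leaky integrate-and-fire neuron; parameters are illustrative.
    def simulate_spiking_neuron(inputs, leak=0.9, threshold=1.0):
        """Return a list of 0/1 spikes, one per time step of weighted input."""
        potential = 0.0
        spikes = []
        for weighted_input in inputs:
            potential = potential * leak + weighted_input  # leaky integration
            if potential >= threshold:
                spikes.append(1)       # emit a pulse
                potential = 0.0        # reset after firing
            else:
                spikes.append(0)
        return spikes

    # A steady weak input produces a sparse, periodic pulse train.
    print(simulate_spiking_neuron([0.3] * 20))

Information is then carried by when and how often the neuron fires rather than by a stored binary value.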


So we've created George W Bush? (1)

Anonymous Coward | about 2 months ago | (#47629121)

I think this chip could govern the US more effectively than our current Congress.

Re:So we've created George W Bush? (0)

Anonymous Coward | about 2 months ago | (#47629143)

Certainly better than our current President.

Re:So we've created George W Bush? (1)

Anonymous Coward | about 2 months ago | (#47629339)

This chip would be overkill, an old 4004 would be sufficient.

Re:So we've created George W Bush? (1)

RabidReindeer (2625839) | about 2 months ago | (#47629729)

This chip would be overkill, an old 4004 would be sufficient.

A block of wood would be sufficient. A 4004 would be overkill.

Actually... (1)

voss (52565) | about 2 months ago | (#47629981)

A commodore 64 running NATO COMMANDER on a datasette could run the George W Bush thought process.

Re:Actually... (0)

Anonymous Coward | about 2 months ago | (#47630019)

Funny thing is Bush likely outscores YOU on an intelligence test.
So that makes you the sludge left at the bottom of the coffee cup, no?

Re:Actually... (0)

Von Rex (114907) | about 2 months ago | (#47631255)

And if you believe that, you'd score lowest of all. What's that make you, the mold on the sludge on the bottom of the coffee?

Re:Actually... (0)

Anonymous Coward | about 2 months ago | (#47633695)

-100, grossly unclever.

Re:Actually... (0)

Anonymous Coward | about 2 months ago | (#47631149)

and you wouldn't have to power it on either

Re:So we've created George W Bush? (2)

tomhath (637240) | about 2 months ago | (#47630107)

But can it read from a teleprompter? If so it should qualify for a Nobel Peace Prize.

Re:So we've created George W Bush? (2)

blue9steel (2758287) | about 2 months ago | (#47630291)

Yes, using S.A.M. (Software Automatic Mouth) via the SID audio chip built in.

Re:So we've created George W Bush? (0)

Anonymous Coward | about 2 months ago | (#47630137)

God help us.

Re:So we've created George W Bush? (1)

ganjadude (952775) | about 2 months ago | (#47630305)

so could an 8008

Marketing BS (0)

Anonymous Coward | about 2 months ago | (#47629133)

Throwing more monkeys into the mix in the vague hope of getting Shakespeare out.

Re:Marketing BS (2)

mark-t (151149) | about 2 months ago | (#47630027)

Evolution got Shakespeare after throwing enough monkeys into the mix.... why can't we?

I'm hungry (0)

Anonymous Coward | about 2 months ago | (#47629137)

What does a brain-like potato chip taste like?

Republicans Hate College Education - and Obama (-1)

Anonymous Coward | about 2 months ago | (#47629147)

http://freebeacon.com/blog/do-nothing-millionaire-laments-that-millionaires-do-nothing/

"The University of California-Berkeley pays former Clinton Labor Secretary Robert Reich $240,000 a year to teach one class about the scourge of income inequality and attend Occupy Wall Street rallies. (He also makes up to $100,000 per speech.)

Reich, a millionaire, is very concerned about rich people who, unlike him, don't really deserve their fortunes. "What someone is paid has little or no relationship to what their work is worth to society," he wrote in a recent blog post, without any apparent irony.
Wonder what Reich thinks about Paul Krugman, who draws a $225,000 annual salary from the (publicly funded) City University of New York, and doesn't even have to teach a single class?"

---------

How about that, you stinking leftist cowards! Bashing the rich night and day! Tax the rich, steal their money and give it to the poor, oops, I meant give it to the state, where it will end up as bribes to the ruling class and in the pockets of the elite in Congress and the crony corruptocrats everywhere.

Reich makes a quarter million a year teaching his socialist gobbledygook to the skulls full of mush, and goes on to advocate taxing the hateful rich people into the poorhouse. No one should be rich, ever, anywhere. Except for the *right* people of course, 'cause money is fun you know.

I hate you fucking socialists more and more every day. Fuck you all.

IBM and chips (4, Interesting)

jbolden (176878) | about 2 months ago | (#47629153)

It is getting hard to figure out where IBM is on chips. Arguably the 4 main chip families experiencing investment are x86, ARM, Z-Series processors, and the POWER series, 2 of which are IBM's. OTOH there is no roadmap for POWER beyond the current generation. I'd love to know: is IBM getting more serious about CPUs, or pulling back?

But do the chips even matter? (0)

Anonymous Coward | about 2 months ago | (#47629193)

"Chips" mattered in the 1970s and 1980s. "Networks" mattered in the 1990s and 2000s. But now we're in the 2010s. We have transcended "chips". We have transcended "networks". We are in the era of "neurons".

Re:But do the chips even matter? (0)

Anonymous Coward | about 2 months ago | (#47629269)

The only thing that matters to IBM is devastating the workforce to meet 2015 earnings per share targets. Nothing is sacred in this push.

There is POWER9 on the roadmaps (3, Interesting)

Henriok (6762) | about 2 months ago | (#47629523)

I agree with your initial statement, but that's pretty much as it has been for at least 15 years or so. POWER9 is on the roadmaps, and the next generation zArch too. And they are sitting there like proxy boxes with nothing much specced, like it has been for almost all previous generations of their predecessors. What I'm concerned with is the lack of a public roadmap for what they are planning in the HPC and supercomputer space. We had the very public Blue Gene project that began in 2001 with four projects: C, L, P and Q, but since the Blue Gene/Q came to life a couple of years ago, I have no idea what they are planning. It'd be nice to have some clue here. Why not something from the OpenPOWER Foundation: a P8 host processor with an integrated GPU from Nvidia, on-chip networking from Mellanox, and programmable accelerators from Altera? But I haven't seen anything in that direction.

Re: IBM and chips (0)

Anonymous Coward | about 2 months ago | (#47629819)

And you are working for IBM STG?

Re:IBM and chips (0)

Anonymous Coward | about 2 months ago | (#47630315)

It is getting hard to figure out where IBM is on chips. Arguably the 4 main chip families experiencing investment are x86, ARM, Z-Series processors, and the POWER series, 2 of which are IBM's. OTOH there is no roadmap for POWER beyond the current generation. I'd love to know: is IBM getting more serious about CPUs, or pulling back?

And you're working for IBM STG?

Re:IBM and chips (1)

jbolden (176878) | about 2 months ago | (#47633069)

No, I'm an IBM partner which means I can sell their stuff. I've never worked directly for them.

So, we are going to have artificial Brains Soon? (1)

rtoz (2530056) | about 2 months ago | (#47629163)

I am just curious to know whether this chip can lead to the development of artificial brains to be used by Humans in the future. And I couldn't understand why this chip won't be available in the open market.

Re:So, we are going to have artificial Brains Soon (1)

K. S. Kyosuke (729550) | about 2 months ago | (#47629183)

Thanks to your capitalization, I've misread it as "development of artificial brains to be used by Hamas". (Yes, I need new glasses.)

Re:So, we are going to have artificial Brains Soon (0)

Anonymous Coward | about 2 months ago | (#47629659)

New glasses? What you really need is to turn off the television.

Re:So, we are going to have artificial Brains Soon (1)

Anonymous Coward | about 2 months ago | (#47629389)

I couldn't understand why this chip won't be available in the open market.

The military boys want to keep their toys secret to prevent the enemies from getting them.

Re:So, we are going to have artificial Brains Soon (1)

PolygamousRanchKid (1290638) | about 2 months ago | (#47629597)

. . . I wacky-parsed the title as: "IBM Creates Custom-Made Brain-Like Chimp."

. . . so just imagine where that thought train derailed me . . .

Re:So, we are going to have artificial Brains Soon (1)

gewalker (57809) | about 2 months ago | (#47630155)

Can't Purina use them in Zombie Chow? If so, I would rather feed that to the neighborhood zombies instead of my own gray matter.

Re:So, we are going to have artificial Brains Soon (1)

narcc (412956) | about 2 months ago | (#47632479)

I am just curious to know whether this chip can lead to the development of artificial brains to be used by Humans in the future.

That's easy: No.

Kurzweil is the modern equivalent of a televangelist.

to save others googling (5, Interesting)

Anonymous Coward | about 2 months ago | (#47629199)

The number of neurons in the brain varies dramatically from species to species. One estimate (published in 1988) puts the human brain at about 100 billion (10^11) neurons and 100 trillion (10^14) synapses.

100 billion divided by 1 million = 100,000 of these chips to reach the human neuron count.
100 trillion divided by 256 million = 390,625 of these chips to reach human synapse count.

Assuming Moore's Law for these chips, with a doubling every 24 months to be conservative:
2 of these on a chip in 2016
4 of these on a chip in 2018
8 of these on a chip in 2020
16 of these on a chip in 2022
32 of these on a chip in 2024
64 of these on a chip in 2026
128 of these on a chip in 2028
256 of these on a chip in 2030
512 of these on a chip in 2032
1024 of these on a chip in 2034
2048 of these on a chip in 2036
4096 of these on a chip in 2038
8192 of these on a chip in 2040
16384 of these on a chip in 2042
32768 of these on a chip in 2044
65536 of these on a chip in 2046
131072 of these on a chip in 2048
262144 of these on a chip in 2050

So we could be seeing human brain capabilities on a chip by mid-century. Quite possibly we'd see similar capabilities built as a supercomputer 10-20 years before that. Don't flame me for the wild assumptions I'm making here - I know there are a lot; this is just intended as some back-of-the-envelope calculations.
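
A minimal Python sketch of the same back-of-the-envelope projection, under the poster's assumptions (10^11 neurons, 10^14 synapses, one million neurons and 256 million synapses per chip, one doubling every 24 months starting from 2014):

    import math

    HUMAN_NEURONS = 1e11          # ~100 billion neurons (1988 estimate above)
    HUMAN_SYNAPSES = 1e14         # ~100 trillion synapses
    CHIP_NEURONS = 1e6            # per TrueNorth chip
    CHIP_SYNAPSES = 256e6

    def year_reached(chips_needed, start_year=2014, years_per_doubling=2):
        """First year a single package holds that many chip-equivalents."""
        doublings = math.ceil(math.log2(chips_needed))
        return start_year + doublings * years_per_doubling

    print(year_reached(HUMAN_NEURONS / CHIP_NEURONS))     # 100,000 chips -> 2048
    print(year_reached(HUMAN_SYNAPSES / CHIP_SYNAPSES))   # ~390,625 chips -> 2052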

Re: to save others googling (0)

Anonymous Coward | about 2 months ago | (#47629257)

If "googlescale" clustering is used you could see something like this by ~2025

Re:to save others googling (1)

Calinous (985536) | about 2 months ago | (#47629287)

Specific activities engage only part of the brain - so we probably only have to get to 10% or so. That only cuts off less than a decade though, so 2040-something.

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47629343)

You know that's a myth, right?

Re:to save others googling (1)

Anonymous Coward | about 2 months ago | (#47629747)

It's true, there is a documentary about it in the cinemas right now.

Re:to save others googling (1)

mark-t (151149) | about 2 months ago | (#47631099)

Although we do really use most of our brain over a prolonged period, only a very small amount of it is used at any one time for cognitive activities. 10% may be a bit of a low estimate, but not by much.

Re:to save others googling (1)

K. S. Kyosuke (729550) | about 2 months ago | (#47629359)

Uh, you couldn't possibly be wronger.

Re:to save others googling (1)

LordWabbit2 (2440804) | about 2 months ago | (#47629391)

Looks like your grammar is not engaging any part of your brain

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47629509)

Pro tip: Most English-speaking people are confused about what "grammar" actually means.

Re:to save others googling (1)

Calinous (985536) | about 2 months ago | (#47629447)

http://en.wikipedia.org/wiki/V... [wikipedia.org]
http://en.wikipedia.org/wiki/C... [wikipedia.org]
http://en.wikipedia.org/wiki/C... [wikipedia.org]
http://en.wikipedia.org/wiki/C... [wikipedia.org] ... so, parts of the brain are specialized for specific activities.

Or do you use all your brain for any activity you do, and you can't do two things at once like Napoleon: sit on a horse and look left?

Re:to save others googling (1)

K. S. Kyosuke (729550) | about 2 months ago | (#47629473)

Of course we know that, that's how we found these areas work: if you damage a part of the brain, something vital stops working. The problem is, most of these areas are vital to people almost all of the time.

Re:to save others googling (2)

ShanghaiBill (739463) | about 2 months ago | (#47630087)

The problem is, most of these areas are vital to people almost all of the time.

But they are NOT vital to an AI computer. An intelligent computer should only have to emulate the cerebral cortex, not the entire brain.

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47630201)

The brain was fashioned by evolution, so there are likely vestigial components that can be eliminated. But I wonder if a cerebral cortex without, say, a limbic system would operate much like our current computers: capable of complex calculation, but not really interested in doing so.

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47629385)

Why would anyone want a chip as stupid as the average person? We have seven billion and more of those already, and most of them need jobs.

What we need is something far more intelligent, which will take even longer that mid-century.

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47629439)

*than

AC cannot edit posts.

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47629467)

Because a chip costs less to employ than a person.
Because there are no HR costs, recruitment, retirement.
Chips can work 24/7, don't want breaks, won't join unions etc etc.

the list goes on. Plus if you can interface these chips correctly with regular chips then it would be like creating a human with an intel processor integrated into their brain.

Plus once you hit 2050 you can either call it intelligence doubling, or time taken to perform certain specific tasks halving every two years.

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47631373)

You think chips that are at least as smart as humans won't form a union?

Re:to save others googling (1)

Henriok (6762) | about 2 months ago | (#47629545)

There's no chance that Moore's law can progress for 50 more years. Wouldn't each transistor be substantially smaller than an atomic nucleus by then? If you don't mean a chip the size of a table, that is.

Re:to save others googling (1)

Anonymous Coward | about 2 months ago | (#47629879)

The math: The latest Intel processors use transistors that are 22nm across. The width of a hydrogen atom, ~1.1 angstroms, is about 0.11nm, or 110 picometers. Assuming the transistor size halves every two years (which, from the looks of it, is impossible), we get this:

2016: 11 nm transistors
2018: 5.5 nm
2020: 2.75 nm
2022: 1.375 nm
2024: 687.5 pm
2026: 343.75 pm
2028: 171.875 pm
2030: 85.9375 pm

And then we're smaller than the smallest atom. However, this is not smaller than the nucleus of the hydrogen atom (a single proton), which is still orders of magnitude smaller than the total size of the atom. But it's hard to imagine right now being able to have transistors smaller than an atom's width. We'll likely need something entirely new that can act like a transistor but isn't. Much like transistors act like vacuum tubes but aren't.

Just for completeness, 50 years out (25 more halvings from 22 nm) we would expect a transistor to be about 0.00000066 nm, or ~0.66 femtometers, which is already below the currently theorized size of a proton (0.87 femtometers). Under strict halving, transistors would drop below the heaviest atomic nuclei (about 15 fm) after roughly 42 years, and below a hydrogen nucleus after roughly 50. Of course all of this is complete bullshit because Moore's law was really just Moore's observation of a coincidence. Even now we're not halving the size of transistors every two years.
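
For anyone who wants to rerun the projection, a rough sketch assuming a 22 nm starting point in 2014 and a strict halving of the linear dimension every two years (which, as noted above, is not what real processes are doing):

    size_nm = 22.0                 # assumed 2014 starting point
    HYDROGEN_ATOM_NM = 0.11        # ~110 pm
    PROTON_NM = 0.87e-6            # ~0.87 fm
    for year in range(2014, 2066, 2):
        note = ""
        if size_nm < PROTON_NM:
            note = "  (smaller than a proton)"
        elif size_nm < HYDROGEN_ATOM_NM:
            note = "  (smaller than a hydrogen atom)"
        print(f"{year}: {size_nm:.3e} nm{note}")
        size_nm /= 2.0             # strict halving every two years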

Re:to save others googling (1)

Anonymous Coward | about 2 months ago | (#47630089)

You are halving the area, not the length - you should be multiplying by 0.66.

that comes out to;
2016: 11nm
2018: 7.26nm
2020: 4.7916nm
2022: 3.162456nm
2024: 2.08722096nm
2026: 1.3775658336nm
2028: 909.193450176pm
2030: 600.06767711616pm
2032: 396.044666896666pm
2034: 261.389480151799pm
2036: 172.517056900188pm
2038: 113.861257554124pm
2040: 75.1484299857217pm
2042: 49.5979637905764pm
2044: 32.7346561017804pm
2046: 21.6048730271751pm
2048: 14.2592161979355pm
2050: 9.41108269063746pm

Yes this creates problems, but it is made worse if you assume the current chip starts at the 22nm level. The current chip is probably a few orders of magnitude larger, as that makes it much more economical to produce in limited numbers. Also there's nothing to stop Moore's law continuing if the physical sizes of chips are just allowed to grow...

Re:to save others googling (0)

fnj (64210) | about 2 months ago | (#47633785)

Er, halving the area means scaling the length by 1/sqrt(2), which is 0.707, not 0.66.
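
A tiny sketch of the corrected factor (assuming transistor density doubles every two years, so the linear dimension shrinks by 1/sqrt(2) per step from 22 nm in 2014):

    import math

    factor = 1 / math.sqrt(2)      # ~0.707 per two-year step, not 0.66
    size_nm = 22.0
    for year in range(2014, 2052, 2):
        print(f"{year}: {size_nm:.3f} nm")
        size_nm *= factor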

Re:to save others googling (2)

timeOday (582209) | about 2 months ago | (#47629999)

If neuron-like processing turns out to be advantageous, there will be much more efficient ways to implement them than using tens of thousands of logic gates to simulate each one.

Re:to save others googling (1)

saider (177166) | about 2 months ago | (#47630327)

Moore's law applied to transistor count and the atomic limit only applies if you limit yourself to 2 dimensional chips.

Re:to save others googling (1)

fnj (64210) | about 2 months ago | (#47633803)

Do you really believe that? Even if your transistor is 100 atoms high, it still can't be less than 1 atom wide or deep.

Re:to save others googling (1)

Anonymous Coward | about 2 months ago | (#47629605)

Here is hoping they actually keep working at it. You know what IBM are like!
All those plans for Cell, all wasted. Then Power went down the drain and one of their largest buyers (Apple) ditched them because Power was lacking compared to x86, which is just holding everything back.

This is a genuinely interesting thing, possibly the best thing they have made in the longest time in fact.
I couldn't see any reason DARPA wouldn't also be very interested in it, if it works as well as they say it does. Already it is a billion dollar idea in a chip just from the summary description with regards to image recognition.
I'd be surprised if Google doesn't see some interest in this as well. (and NSA of course, why wouldn't they love this?)

Re:to save others googling (1)

Graydyn Young (2835695) | about 2 months ago | (#47629749)

It's really hard to say how many artificial neurons we would need to make a human-like intelligence, but it's certainly going to be less than the number of neurons in a human head. Computers already do a heck of a lot of tasks better than a human. Heck, using traditional computing methods with just a couple of these chips for image recognition and the like would already make a beast of a machine.

Re:to save others googling (2)

peragrin (659227) | about 2 months ago | (#47629817)

Yep like image recognition, and audio recognition.

Oh wait.

Computers can do logical operations better, yes. Computers can't do fuzzy math, real-time image recognition, or real-time audio recognition. Let me know when a computer can "see" with a pair of cameras. Identify an object heading toward the CPU (not just the cameras) and adjust its motors to dodge the incoming. Bugs can do that much, yet computers can't.

Re:to save others googling (2)

mark-t (151149) | about 2 months ago | (#47631139)

Babies can't do any of those things very well either.

Bad example (1)

DrYak (748999) | about 2 months ago | (#47633163)

Let me know when a computer can "see" with a pair of cameras. Identify an object heading toward the CPU (not just the cameras) and adjust its motors to dodge the incoming.

That actually does already exist.
It's a car's collision avoidance system.
It's already a standard option from some manufacturers (e.g. Volvo), and should become mandatory in the EU sometime soonish.

Some, like Mobileye, rely entirely on cameras, while others integrate additional sensors into the mix, like radar, infrared lasers, etc.

But yeah, I see your point: complex tasks require complex networks, way more than this chip.

Re:to save others googling (1)

Anonymous Coward | about 2 months ago | (#47630365)

Your brain has over a dozen different types of neurons with different functions, and individual neurons themselves can have varying structures that can do more complex functions (like AND/OR/NEGATING within the same cell with different groupings of inputs).

Some brain neurons have thousands of inputs from nearly that many nearby nerve cells, and brains have overall layer patterns, often with broad regional interconnects of different specific functions.

A million standard IBM-neurons, each with 256 synapse inputs (with what variance of signal?), may do only a tiny fraction of what a million real brain cells do.

The human brain is up in the 10-billion-neuron range (synaptic interconnects in the trillions).

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47631811)

it's certainly going to be less than the number of neurons in a human head

Yes. The cerebral cortex is only (max) 25 billion neurons.
Of those, a large number is related to things that concern the body and social behaviour. It is debatable whether those neurons are necessary to understand bodies and social behaviour and whether that understanding is an essential part of human(-like) intelligence, but it's pretty clear that they are superfluous for many of the tasks we deem interesting for these kinds of chips.

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47630011)

So right now we can simulate a lobster.
So link it to a Watson and a natural language processor, and we now have the beginning of Accelerando?

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47630217)

So right now we can simulate a lobster.

Yep, that's why they're calling it Zoidberg.

Re:to save others googling (1)

Anonymous Coward | about 2 months ago | (#47630075)

"IBM has already succeeded in building a 16-chip system with sixteen million programmable neurons and four billion programmable synapses. The next step is creating a system with one trillion synapses that requires only 4kW of energy. After that IBM has plans to build a synaptic chip system with ten billion neurons and one hundred trillion synapses that consumes only one kilowatt of power and occupies less than two liters of volume."

I think the IBM roadmap is more aggressive than Moore's law, and of course getting to "human brain level" in a supercomputer is a big deal even if it doesn't happen in a single chip.

If the ultra-low power consumption holds up, maybe it will be easier to make a 3D-stacked chip version. It could certainly become comparable to a brain. 100 layers would be the equivalent of ~10-12 years of "Moore's law" scaling.
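
As a rough check on that last figure (a sketch assuming that stacking N identical layers is simply equivalent to N times the density):

    import math

    layers = 100
    doublings = math.log2(layers)      # ~6.6 equivalent density doublings
    years = doublings * 2              # at one doubling every two years
    print(f"{doublings:.1f} doublings ~ {years:.0f} years of scaling")   # ~13 years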

Re:to save others googling (0)

Anonymous Coward | about 2 months ago | (#47630417)

What about timescale? If the chip is significantly faster than a human neuron, it could reach parity much sooner.

Re:to save others googling (1)

Anonymous Coward | about 2 months ago | (#47630757)

Each synapse contains dozens or hundreds of individual receptors that interact with the chemicals (neurotransmitters) being released to transmit the message. Certain types of receptors, called metabotropic, set off a cascade of enzymatic reactions inside the cell that represents further, highly complex, information processing. So when calculating the number of processing units in the brain, you have to go well beyond counting synapses. It's also worth noting that some of the interactions that take place can only be explained with quantum mechanics, so one way of describing the brain is as a quantum computer with quadrillions of processors.

Re:to save others googling (1)

kylemonger (686302) | about 2 months ago | (#47631509)

The human brain needs all that parallelism because its switching rate is abysmal, something like 200Hz. We ought to be able to beat that by a factor of a million without setting anything on fire, so adjust your numbers accordingly.
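
A quick sketch of how such a speed advantage would shift the earlier chip-count estimate, under the (big) assumption that a million-fold faster switching rate trades one-for-one against the number of neuron circuits needed:

    import math

    speed_advantage = 200e6 / 200              # e.g. ~200 MHz silicon vs ~200 Hz neurons
    doublings_saved = math.log2(speed_advantage)   # ~20 doublings
    years_saved = doublings_saved * 2              # at one doubling every two years
    print(f"~{doublings_saved:.0f} fewer doublings, roughly {years_saved:.0f} years sooner")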

Re:to save others googling (1)

roman_mir (125474) | about 2 months ago | (#47633879)

You can simulate this guy [dailymail.co.uk] by 2030 then.

Nice! (0)

Anonymous Coward | about 2 months ago | (#47629215)

Nothing special...

So when are we having skynet? (1)

thieh (3654731) | about 2 months ago | (#47629217)

Is it coming soon?

Re:So when are we having skynet? (0)

Anonymous Coward | about 2 months ago | (#47629223)

I hope so. I am so bored and tier of living.

Re:So when are we having skynet? (0)

Anonymous Coward | about 2 months ago | (#47633829)

You're certainly tired of spelling, that's for goddam sure.

Now they can replace IBM managers (5, Funny)

gelfling (6534) | about 2 months ago | (#47629229)

Assuming of course this chip can hold 2 hr conference calls with 40 other chips and pound out 240 page Powerpoints.

Abort (0)

Anonymous Coward | about 2 months ago | (#47629243)

Tinman Abort!

Is it real or a Real Doll? (0)

Anonymous Coward | about 2 months ago | (#47629255)

Don't look for brain-like chips in the open market any time soon, though. TrueNorth is part of a DARPA research effort that may or may not translate into significant changes in Real Dolls.

Re:Is it real or a Real Doll? (1)

distilate (1037896) | about 2 months ago | (#47629295)

Real dolls with minor amounts of brains could help solve the navy's "join the navy, feel a man" problem! I.e., preventing rampant homosexuality during long missions.

Re:Is it real or a Real Doll? (1)

Anonymous Coward | about 2 months ago | (#47629783)

Real dolls with minor amounts of brains could help solve the navy's "join the navy, feel a man" problem! I.e., preventing rampant homosexuality during long missions.

And here we have a demonstration of an organism with 256 million synapses.

Re:Is it real or a Real Doll? (2)

tekrat (242117) | about 2 months ago | (#47629995)

Walking, talking sexbots would absolutely change the world, possibly eliminate most crime, might solve the problem of overpopulation, and as theorized in the manga/anime Chobits, might force real women to have to compete for male attention.

Re:Is it real or a Real Doll? (1)

CastrTroy (595695) | about 2 months ago | (#47632539)

Futurama [wikipedia.org] already covered that issue.

Re:Is it real or a Real Doll? (0)

Anonymous Coward | about 2 months ago | (#47634271)

Overpopulation is quite likely already averted by socioeconomic development. Ignoring immigration and emigration, population is already dropping in Europe, US, Japan, and plenty of other highly educated countries.

First products to come? (0)

Anonymous Coward | about 2 months ago | (#47629329)

When can we hope to see reports of the first brown-faces executed by drones incorporating this exciting new technology?

HAL 9000? (2, Funny)

Anonymous Coward | about 2 months ago | (#47629369)

Where are all the HAL 9000 jokes? HAL was built by IBM in "2001: A Space Odyssey", perhaps this is an example of life imitating art?

Re:HAL 9000 jokes.... (1)

tekrat (242117) | about 2 months ago | (#47629717)

I'm sorry Dave. I'm afraid I can't do that.

HAL 9000? (0)

Anonymous Coward | about 2 months ago | (#47630693)

or skynet for that matter...

Re:HAL 9000? (0)

Anonymous Coward | about 2 months ago | (#47632985)

HAL was not built by IBM, it was built by the University of Illinois. It is merely a (claimed) unintentional coincidence that each letter of the acronym is one away from IBM.

[The more you know...]

ob Heinlein (1)

Alrescha (50745) | about 2 months ago | (#47629429)

"Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors. And woke up." -- The Moon is a Harsh Mistress

A.

Pulse Pattern (1)

Jim Sadler (3430529) | about 2 months ago | (#47629541)

Is this pulse recognition closer to Morse code or bar code? Obviously pulse code recognition could go beyond binary bit codes but the hardware must be seriously different from what we have now. It could even be analog.

Re:Pulse Pattern (0)

Anonymous Coward | about 2 months ago | (#47630275)

Do they mean 'pulses' as in waves of signals (separated by inactivity) traversing the 'brain' circuitry (like they do in your brain through different layers and regions)?

Or 'pulse' patterns of different kinds along individual connections (which I would guess get interpreted different ways by different flavors of neurons hooking into the connections, like they do in your brain), where the rapidity/frequency, amplitude, and duration of a pattern of pulses impart greater/lesser signal strength (versus binary encoding)?

4096 neurons - great - only 10^11 to go! (0)

Anonymous Coward | about 2 months ago | (#47629707)

One estimate (published in 1988) puts the human brain at about 100 billion (10^11) neurons and 100 trillion (10^14) synapses.
http://en.wikipedia.org/wiki/Neuron

Re:4096 neurons - great - only 10^11 to go! (0)

Anonymous Coward | about 2 months ago | (#47634259)

it mimics one million human neurons and 256 million synapses

Should've finished reading the sentence.

imagine (0)

Anonymous Coward | about 2 months ago | (#47629877)

Imagine a Beowulf Cluster of these!

The Genisys of Skynet! (1)

wiredog (43288) | about 2 months ago | (#47629885)

As good as any other theory, I guess.

Flaw in the description (0)

Anonymous Coward | about 2 months ago | (#47629915)

>"Already, TrueNorth has a proven 80% accuracy in image recognition"

This is where the whole thing falls down. Software can emulate any hardware you build, even quantum. So the accuracy of a particular image recognition algorithm is irrelevant, given that you didn't need this processor for it, and the implication seems to be that this processor has to use a neural net of some kind. Neural nets are known to be less efficient and accurate than tuned algorithms on the same hardware.

Be wary of people citing soft output accuracy ratings of hardware chips. Now if someone were comparing the software implementation to the hardware implementation of a particular algorithm, it might demonstrate performance improvements...

Re:Flaw in the description (1)

fisted (2295862) | about 2 months ago | (#47634269)

Software can emulate any hardware you build

And what about the performance implications of emulating my massively parallel hardware in software?

Human Brain is...complicated (2, Informative)

Scottingham (2036128) | about 2 months ago | (#47630017)

Every time one of these damn 'neural computers' comes out, people tend to equate the number of neurons and synapses and think 'hey, if we can get to the number of human neurons... Presto!!!!1'

Brains are waay more complicated than just neurons and synapses. Just taking the neurotransmitters into account makes the whole charade crash down. Then there is the glial network that, surprise surprise, does an enormous amount of complex work. There's even recent research suggesting that the branching patterns of the neurons perform complex computations. There are chemical gradients in the brain that act as a sort of addressing system.



tl;dr Brain on a chip? Yeah fucking right.

Word! (0)

Anonymous Coward | about 2 months ago | (#47631125)

I'd also like IBM to tell us how it compares to Kwabena Boahen's chips, which have been eating IBM's lunch in this space for a decade.

Re:Human Brain is...complicated (0)

Anonymous Coward | about 2 months ago | (#47631679)

I had a psychology professor give a very apt quote in a presentation: "How do we go from synapse firing to midget wrestling?" It's a good quote since we know the chemistry involved in neural communication. We know (or at least have a good idea) about motivations and why we do what we do or think the way we do. We don't know about that entire middle ground.

How are memories formed?
What are the physical components of memories?
What distinguishes one memory from another?
How is a personality encoded into the brain?
How does signal processing actually work?

Throwing a better physical representation into the mix still won't answer these questions.

Only a baby step - long strides still required bey (0)

Anonymous Coward | about 2 months ago | (#47630223)

Hardware advancements will be useful. But the old problem remains.

The problem is getting the pattern of this cellular logic needed (typical nerve-type operations are summations of positive and negative weights from broad interconnects (synapses) and layers of neurons) for whatever specific problem space is to be handled.

You grow/train into the pattern using training data sets, over and over, that feed corrections backwards through the layers (you supply the inputs and tell it what the right outputs are, and the logic slowly forms to process that way). A key problem is that the more complex the net is, the more likely that formation fails and won't produce the processing results desired.

Fixed problem spaces work (Kanji alphabet stroke-pattern hand-input reading was an early success, with its limited standard input forms that people can adjust to use). One logic pattern built up (months to get it right), then cloned into a multitude of devices that all work the same way on the same expected inputs.

As the article said, they DIDN'T include the learning system in the hardware. THAT is a mechanism far more complicated to implement (much of your brain's cells and operation go to implement the building, modifying, and maintaining of the patterns stored/formed in your neurons/synapses).

Unfortunately THAT is the part you need to get to work the most to have a useful learning brain-like function.

Beyond that, the real problem still is: WHO is going to tell the 'brain' what the result should be (and if it is right or wrong)? If you cannot feed fixed training data into a system like this (a mass input set with results/conclusions/outputs precalculated) and you try to do it dynamically, users (whoever it's being customized for) will become VERY VERY tired of correcting the system over and over. New input-output patterns can potentially disturb what is already formed, and remedial 'learning' is required to restore the previous functionality. The disruption is ongoing and increases the more logic patterns get added.

So this really only counts as an early partial step toward what such systems will have to be composed of - the summation hardware is actually trivial compared to the learning processing which will be required.
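
For readers who haven't seen the kind of training loop being described, here is a minimal sketch: a single weighted-sum "neuron" trained over a fixed data set with simple error-driven corrections (plain Python; the layered back-propagation and the biological learning machinery the poster refers to are far more elaborate):

    def train(samples, epochs=50, learning_rate=0.1):
        """samples: list of (inputs, expected) pairs with expected 0 or 1."""
        weights = [0.0] * len(samples[0][0])
        bias = 0.0
        for _ in range(epochs):                      # same data set, over and over
            for inputs, expected in samples:
                summed = sum(w * x for w, x in zip(weights, inputs)) + bias
                output = 1 if summed >= 0 else 0
                error = expected - output            # the correction fed back
                weights = [w + learning_rate * error * x
                           for w, x in zip(weights, inputs)]
                bias += learning_rate * error
        return weights, bias

    # Example: learn a logical AND of two inputs.
    print(train([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]))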

Porn Detection (0)

Anonymous Coward | about 2 months ago | (#47630913)

> TrueNorth has a proven 80% accuracy in image recognition

A related project is working on detecting pornography in images. It's called True Peter North.
