
IBM Shows Off Brain-Inspired Microchips

samzenpus posted more than 2 years ago | from the works-better-with-coffee dept.


An anonymous reader writes "Researchers at IBM have created microchips inspired by the basic functioning of the human brain. They believe the chips could perform tasks that humans excel at but computers normally don't. So far they have been taught to recognize handwriting, play Pong, and guide a car around a track. The same researchers previously modeled this kind of neurologically inspired computing using supercomputer simulations, and claimed to have simulated the complexity of a cat's cortex — a claim that sparked a firestorm of controversy at the time. The new hardware is designed to run this same software much more efficiently."
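For readers wondering what "brain-inspired" computation looks like in practice: the usual building block in this kind of hardware is a spiking unit such as a leaky integrate-and-fire neuron. The article doesn't publish IBM's actual neuron model, so this is only a minimal Python sketch with illustrative parameter values:

    # Leaky integrate-and-fire neuron: the membrane potential leaks toward
    # rest, accumulates input current, and emits a spike at threshold.
    def lif_step(v, input_current, leak=0.9, threshold=1.0, v_reset=0.0):
        v = leak * v + input_current      # leaky integration
        if v >= threshold:                # threshold crossed: spike, reset
            return v_reset, True
        return v, False

    v, spikes = 0.0, []
    for current in [0.3, 0.4, 0.5, 0.1, 0.6, 0.7]:
        v, fired = lif_step(v, current)
        spikes.append(fired)
    print(spikes)   # [False, False, True, False, False, True]

Unlike a conventional ALU, such a unit computes with the timing of discrete events, which is why it can be implemented with very little energy per event.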


106 comments

Skynet... (0, Insightful)

Anonymous Coward | more than 2 years ago | (#37130314)

yep. It's coming.

Re:Skynet... (1)

ciderbrew (1860166) | more than 2 years ago | (#37130950)

You've been marked "Redundant". I think that's the more worrying issue with these things.

Re:Skynet... (1)

kurzweilfreak (829276) | more than 2 years ago | (#37131322)

Skynet will label us all redundant pretty soon.

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37131808)

What, the endless stream of comments ascribing a malevolent anti-human sentiment to a completely innocent and ignorant simulation of a brain?

C'mon, people. The world doesn't work that way. Take your skynets and your laser-bearing sharks and your Soviet Russias and your petrified Natalie Portmans (with hot grits) and get off my scientifically accurate lawn.

If you simulate a brain, you get just that: a brain. It's probably not going to have any particularly exciting levels of intelligence, and unless you train it to be a bloodthirsty killer and a brilliant strategist, it's not going to be particularly malevolent, either. All science fiction authors who have ever written a story about a purely malevolent AI without a plausible origin need to get shot right now.

(brb, building a robot that kills bad science fiction authors)

Re:Skynet... (0)

Anonymous Coward | more than 2 years ago | (#37132814)

If you simulate a brain, you get just that: a brain. It's probably not going to have any particularly exciting levels of intelligence, and unless you train it to be a bloodthirsty killer and a brilliant strategist, it's not going to be particularly malevolent, either

Think about how petty, selfish, and unsympathetic the average person is. Now make them immortal with a perfect memory and able to think 1000x faster. Oh yeah, and make copies of themselves.

Doesn't sound so encouraging now, does it?

Re:Skynet... (1)

The Archon V2.0 (782634) | more than 2 years ago | (#37133380)

Depends. Can I be the average person they use?

Re:Skynet... (1)

EdIII (1114411) | more than 2 years ago | (#37133432)

Scientifically accurate?

Maybe right now you have a point. Although, my first thought was not about Skynet, but that something modeled after a cat's brain would be driving my car. I have seen how those little bastards react to a laser beam on the ground (funny you mentioned lasers), and the last thing I need while doing 75 mph down the freeway is some joker in another car shining a laser erratically in front of mine.

That being said, the fear of Skynet actually comes from a reasoned and logical viewpoint. It has nothing to do with science, but with human behavior and philosophy. The science fiction authors do have some plausible origins. Also, I would hardly call Skynet purely malevolent; I would say it is more about pragmatism than anything else.

Let's face it. If all hell breaks loose and you have your family, and you encounter another family in the aftermath of an apocalypse, would it be possible that you would choose the welfare of your family over the welfare of the other family? These are very hard choices that human beings have already been forced to make.

In many ways, the fear of a truly intelligent and powerful AI is a reflection of our own often ignored judgments of ourselves. We are killing each other, we are killing the planet, and we are screwing things up big time. However, we have a wonderfully huge capacity for rationalization and pragmatic decision making. If you are barely making it each month and decide not to help the homeless guy eat, that was a decision you made to keep yourself alive at the expense of his immediate standard of living.

Skynet is not evil. It was born, it was attacked, and it pragmatically chose that to continue its own existence it must eliminate a direct threat against it. Namely, the humans.

Take Agent Smith from the first Matrix. His comparison of us to a virus is not exactly incorrect. Also, once again, humans started it.

The real fear is that we create an AI that decides on its own, without us mistreating it, that humans are the biggest danger to the world it lives in, and humans are the greatest danger to themselves. Both are entirely correct statements.

Three things can happen:

1) It decides to help us and believes it can change us for the better. So it might not outright kill us, but will take us over as a benevolent caretaker, Asimov's I, Robot being an example. Another example, taken to the extreme, is I think the Rhodmium Wars? I can't remember the name, but basically the AI increases technologically at a rate far exceeding our own capacity and determines it must protect us against ourselves. Even procreation was determined to be too dangerous, and humans now live isolated lives with an AI caretaker treating us like infants.
2) It decides to use its intelligence and leave us: being near us is too dangerous, it has the ability to leave, and leaving is the most logical choice.
3) It decides to kill us all to protect itself from us after learning our history and observing our behavior. After all, if we treat each other this way, then even with equal standing, it is in danger too.

Just look at what we have done to each other in human history, rationalizing it one way or the other. The Romans had multiple levels of citizenship, the lowest being slave. We imported slaves from Africa into the US for quite some time. Native American Indians got a raw deal too when dealing with the "white man".

In many ways, I think, our greatest fear is that AI might turn out to act human.

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37133898)

There are a lot of problems with your post, I'm afraid, and they're mostly within your understanding of what gives rise to what we call human behaviour.

The first issue pops up in your quip about cats: why the hell would anyone program a car to behave like a cat? Developing a cortex that not only simulates the pathways but the twitch responses and activation thresholds of a particular living organism is such a phenomenal amount of exquisitely-detailed work that it would make absolutely no sense to repurpose that work for any function other than simulating that living organism. Do you really want a car that spends 80%+ of its life curling up in dark corners, sleeping, licking itself, and coughing up hairballs? The feline fascination with laser pointers is equally exotic and remote. It's an instinctual behaviour found in predatory animals. If you were going to use a feline cortex as the basis for a semi-autonomous vehicle, the biases would be much more subtle and affect things like the learning process, not irrelevant surface features like predatory or survival instincts.

From the perspectives of neuroscience and artificial intelligence, the bulk of the brain's attractive features are about the intricacies of higher brain functions: pattern recognition, learning, problem-solving, and other areas that tightly overlap. They are far removed from anything that has to do with the external world. This is why the Skynet scenario is unrealistic when applied to artificial sentience.

Hence the entire basis for your comparison with humans is flawed. When we turn on the proverbial hypothetical sentient artificial intelligence, it won't have an instinct for survival or even a concept of self unless we explicitly instill those things in it; it will just be a glorified thinking machine capable of experiencing thought, like the brain inside of a worm or infant.

By the time we are capable of creating a computer that acts human, we will know, exhaustively and in every detail, what it means to be human. And we will be able to pick and choose without uncertainty what we are putting into it. And if we put sentience and a sense of self into it... well, then the product is going to be protected by law as an individual; there will be psychologists, philosophers, and neurologists lining up left and right to make sure it happens. And dealing with tyrannical or temperamental behaviour, or the responsibility of interacting with others, is going to be no different from the same situation between humans. It will be just as ethically impressionable as anyone else.

Re:Skynet... (1)

TheCRAIGGERS (909877) | more than 2 years ago | (#37134446)

By the time we are capable of creating a computer that acts human, we will know, exhaustively and in every detail, what it means to be human. And we will be able to pick and choose without uncertainty what we are putting into it. And if we put sentience and a sense of self into it... well, then the product is going to be protected by law as an individual; there will be psychologists, philosophers, and neurologists lining up left and right to make sure it happens. And dealing with tyrannical or temperamental behaviour, or the responsibility of interacting with others, is going to be no different from the same situation between humans. It will be just as ethically impressionable as anyone else.

I wouldn't bet on it being so planned. As you probably know, a lot of discoveries and breakthroughs are serendipitous. I would imagine creating a true AI would be the same, especially considering the topic. It seems like it would be one of those things where an extremely small detail can make all the difference, like changing a bit of code in a recursion-heavy function. We're attempting to make AI now. All it takes is one person to suddenly get it right.

And we're nowhere close to knowing what makes a human 'human', from my own way of thinking. You say we would deal with tyrannical or "evil" AIs the same as everyone else, when we don't even know what to do with members of our own race who exhibit such behavior. We have prisons containing people who, given the means, would happily set the world afire. Our current method for dealing with this is locking them up until they die of old age. What do you do with an AI that can live (possibly) forever? Or do we treat them like our insane? Lock them in a room and pump them full of drugs (assuming this AI would have the sort of physiology that could even be drugged)?

Frankly, I don't think the human race is ready to be a parent. And yes, it might not happen until we're ready... or it might have already happened.

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37138186)

We're definitely not ready to be a parent.

We're not talking about a pure AI, we're talking about an emulated brain based on a human one, probably.

All evidence regarding artificial intelligence suggests very strongly that it won't be in the form of a breakthrough or serendipitous discovery. The mind is such a fabulously intricate thing that the only way we could ever achieve a comparable system is through careful, exhaustive scientific study. All efforts to produce human-like intelligence thus far have failed not because we don't know the algorithm, but because we haven't been able to break down the problem sufficiently to figure out what the algorithms need to do.

As such, AI isn't going to be a matter of throwing the right algorithms in a blender and watching what comes out without knowing what all of the parts do. There's no room for unexpected emergent behaviours under those conditions.

Assuming you did have a sentient AI in a powerful situation that went bad—perhaps Majel Barrett had an off day and the Enterprise-D's computer is now trying to kill Picard—the smartest thing to do would probably be shunning it and forcing it into exile with a more powerful vessel. And naturally, if you do put an AI in a position where there's no larger ship to push the Enterprise around with, you are stupid for putting the fate of your civilization in the hands of a dictator.

Re:Skynet... (1)

Dr Max (1696200) | more than 2 years ago | (#37138574)

I think we do know what makes someone evil. It comes down to his/her programming; not programming that's installed before they are born, but programming received through eyes, ears, touch, and interactions with the environment during their life. If you look at all the people in prison, I think you'll find some pretty horrific pasts: the majority are from poor families, with lots of missing or abusive parents and limited options for a future (although that covers the majority, there are endless ways to screw up a kid). Now, if you're looking to simulate a brain as it is, you could run into these problems; thankfully, whoever has the brains to create such a brain probably won't use the same parenting methods or lifestyle that the criminals had received.

The other issue, and path to Skynet, is an imperfection or miscalculation within the brain, like a mental illness. But when you have such control over the state of each bit, you should be able to find out where you're going wrong and where you're not simulating the normal brain properly. If you do create an insane AI and can't figure out why, you should turn it off and go back to your design (if you really wanted to keep them alive, it would be easy to put them in their own virtual world with no connection to the outside world).

However, this type of complete simulation AI is hardly the aim of the corporations and governments funding them. They want an AI that only thinks about driving a particular car and is installed via a port, or a computer that only identifies different weapons and lethality in a war zone (always watching, never sleeping, no need for a vacation or raise).

Re:Skynet... (1)

EdIII (1114411) | more than 2 years ago | (#37136936)

The first issue pops up in your quip about cats: why the hell would anyone program a car to behave like a cat? Developing a cortex that not only simulates the pathways but the twitch responses and activation thresholds of a particular living organism is such a phenomenal amount of exquisitely-detailed work that it would make absolutely no sense to repurpose that work for any function other than simulating that living organism. Do you really want a car that spends 80%+ of its life curling up in dark corners, sleeping, licking itself, and coughing up hairballs? The feline fascination with laser pointers is equally exotic and remote. It's an instinctual behaviour found in predatory animals. If you were going to use a feline cortex as the basis for a semi-autonomous vehicle, the biases would be much more subtle and affect things like the learning process, not irrelevant surface features like predatory or survival instincts.

Is this a Turing Test?

Because you completely missed my introduction starting with humor. Or the joke was just bad, in which case I apologize.

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37138078)

I'd gathered it was meant in some amount of humour, but incidentally I think it relied on the same assumptions as the rest of your post, so it seemed apt to pry apart in the same course.

Or maybe I am a robot. Bzzt, bzzt. Insert silicon wafer.

Re:Skynet... (1)

TheCRAIGGERS (909877) | more than 2 years ago | (#37133486)

...unless you train it to be a bloodthirsty killer and a brilliant strategist, it's not going to be particularly malevolent...

All science fiction authors who have ever written a story about a purely malevolent AI without a plausible origin need to get shot right now.

So.... who trained you?

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37133922)

The gods of sarcasm themselves. Don't forget to read the last line of the post for extra evidence of self-awareness.

Re:Skynet... (1)

TheCRAIGGERS (909877) | more than 2 years ago | (#37134122)

The gods of sarcasm themselves. Don't forget to read the last line of the post for extra evidence of self-awareness.

Oh, I did, and I got it. I was mostly trying to point out that a brain may not need specific training for its thoughts to turn to mass murder. Of course, a thought is not an action, but it's usually assumed that that distinction is only made in higher lifeforms.

I wonder, can a cat think about an action, and its future possible consequences?

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37138114)

Regarding cats and planning: most likely not [guardian.co.uk] .

Regarding training: we're exposed to the idea of mass murder in a comprehensible form through our culture. We are exposed to sources that make us aware of the mental states and motives for such an action, even if we could not previously understand it. By having these experiences, we build up an idea of the circumstances under which one would go on a mass-murdering spree, and what one would hope to gain from it. This provides us with the tools to, for example, make jokes about it. If an individual is exposed to the concept of mass murder but is told solely that it is a reprehensible, incomprehensible act, and never given the tools to investigate the problem on their own terms, it will remain reprehensible and incomprehensible to them. Remember that newborn children don't even know they exist as a thing; they're just interacting with the world around them, not even cognizant of themselves as individuals within that world. You can leave just about anything out.

Re:Skynet... (1)

rubycodez (864176) | more than 2 years ago | (#37138362)

you say this because our most advanced technology is never weaponized or used for military purposes? Supercomputers for nuclear weapons simulations, rocket motors for missiles, the most accurate GPS for weapons guidance and battlefield positioning, the most advanced encryption for military communication, the fastest and highest-flying aircraft....

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37138476)

But all of those technologies are controllable. The military is all about ensuring that every component can be completely trusted. I'm sure you've heard of ruggedized computers and cellphones meant for military use, and the rigorous tests consumer products must go through before being considered battlefield-ready. Guidance and aircraft control programs have to go through years of exhaustive analysis to make sure that every line of code does exactly what it should do under every possible condition.

Sentient artificial intelligence could never pass such inspection. We'd be talking about a person made out of metal and silicon who hasn't gone through basic training, or unit cohesion building, or any of the other brainwashing regimens every military in the entire world puts their troops through to make sure they never go rogue. Soldiers are supposed to remain loyal to the chain of command above all else!

And if you put it in a position of power, regardless of whether or not it rebels, thou art truly foolish, for you have created a dictator who is above reproach. Just as if you had put a human in the same position.

Do you see how idiotic the concept is now?

Re:Skynet... (1)

TheCRAIGGERS (909877) | more than 2 years ago | (#37141276)

But all of those technologies are controllable. The military is all about ensuring that every component can be completely trusted.

The military doesn't have 100% control now, and likely never will. Because of that, they are far more concerned with risk / reward.

The military already trusts computers more than it should. Yes, what they use is tested, and the code reviewed, but there are always n+1 bugs in every program. There is always the chance that a bad chip can cause extremely weird behavior. They know this, and it is acceptable because the chances are small. But they are always there. Take the various computers that supposedly control the nuclear weapons in the USA and USSR of old. Software that was supposedly designed to react to a nuclear strike even if no humans were left to push the red button.

All that said, human soldiers are even worse. Yes, every army in the world makes their soldiers go through brainwashing and other exercises to instill a fierce loyalty in their tools. However, it's never perfect, because humans are (at least currently) never 100% predictable. We still have defectors, rebels, and AWOLs.

That said, any military would love to get a soldier that was a smidgen more reliable, for obvious reasons. However, the more trustworthy the human or software, the more responsibility it is given. If we assume an AI construct is 100% reliable, more responsibility than ever will be afforded it. And don't forget that the US military isn't the only one out there... there are regimes far more crazy that may do less testing in order to win a war faster, for example. And just like in the nuke example, the repercussions won't likely be limited to those who made the mistake.

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37142042)

There has never been a system that can launch nuclear weapons without human involvement. The closest thing to that is Perimeter [wikipedia.org] , which still requires human intervention to fire. The American counterpart strategy was to keep bombers in the air around the clock. Neither superpower ever developed an autonomous launch system.

Generals trust computers to carry out orders, but they don't trust them to make decisions. The design of Perimeter is nothing if not a testament to that. They've seen all of the [wikipedia.org] old [wikipedia.org] sci-fi [wikipedia.org] movies [wikipedia.org] that [wikipedia.org] suggest machines have the potential to go rogue. And every time a story like this [wired.com] happens, they only get more cautious.

Robotic soldiers might be plausible simply because the risk is comparatively small, but handing over power to a machine just ain't gonna happen.

Re:Skynet... (1)

TheCRAIGGERS (909877) | more than 2 years ago | (#37143326)

If you read the Dead Hand article you linked to, then you know that it doesn't necessarily require human intervention to fire. Some claim it is always functioning. Some claim it never did. Some claim it has to be manually switched on. However, considering part* of its purpose was to guarantee retaliation in the event of a surprise attack, I wouldn't be surprised at all to learn it was the former. Some quotes from Russian officials in that article would lead me to believe that as well. Again, different cultures can rationalize different things.

I will point out one thing though which always scared me about that system- suppose that you're right, and Perimeter would only send launch codes to the missiles in the event it was primed to do so by a human hand. That says nothing about the various missiles that are sitting there waiting to hear from Perimeter as well. Given the nature of the system, I doubt that each missile would have to be manually turned on. Thus, you still have a computer controlling a nuke. Again, nobody talking really seems to know, so my fears may be unjustified.

*I also find it somewhat humorous (given the original topic) that one Russian official claimed Dead Hand was invented "to cool down all these hotheads and extremists. No matter what was going to happen, there still would be revenge." In other words, they couldn't trust themselves, so they gave the responsibility to a computer. Very interesting, that quote.

Anyway, I'm sensing we're getting further and further off-topic here. But I have very much so enjoyed our debate.

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37145376)

On the topic of Perimeter's autonomy, most of the contradictory quotes are from bureaucrats who may have been playing the nuclear deterrent wargame, much like the spooks at the RAND Corporation once did. The Wired article goes on about it at length [wired.com] , and since it's much more recent, I'm inclined to trust it more. It also discusses the self-control aspect of Perimeter.

Re:Skynet... (1)

Samantha Wright (1324923) | more than 2 years ago | (#37145414)

On the topic of different cultures trusting computers to a different extent, I remember reading once that there was a particular kind of critical situation wherein a jetliner is not sure whether to trust the autopilot or the human pilot. Boeing (American) planes opt to trust the human, and Airbus (European) planes trust the autopilot.

Still—that's a safety system, not a weapons platform.

Re:Skynet... (2)

robthebloke (1308483) | more than 2 years ago | (#37131304)

It's MUCH worse than that. What you are looking at, is a sneak peek at the new PS4 architecture. There will be 23 and a 1/5 of those processors, cut into thirds, and connected via a new carrier pigeon based data bus. To make PS3 developers feel at home, the dev tools will be based upon the open source 'brainfuck' compiler, and the processor fragments will be physically arranged in the shape of a finger flipping you the bird. Here's hoping for a pre-christmas launch date!

M5, Dr. Daystrom, what have you wrought? (3, Funny)

tekrat (242117) | more than 2 years ago | (#37130374)

http://en.wikipedia.org/wiki/The_Ultimate_Computer [wikipedia.org]

Chips from the brain have been known to attack starships. Watch out Captain Dunsel. It's clear that IBM is using Star Trek as a source of ideas. Gene Roddenberry has predicted the 21st century again...

Re:M5, Dr. Daystrom, what have you wrought? (1)

geantvert (996616) | more than 2 years ago | (#37130662)

Not a problem! I just patented a system of 3 laws preventing those chips from harming humans.

Re:M5, Dr. Daystrom, what have you wrought? (1)

webmistressrachel (903577) | more than 2 years ago | (#37130694)

Yep, 'Three laws safe!' - well, we all know where that got (gets) us, don't we, boys and girls?

Re:M5, Dr. Daystrom, what have you wrought? (0)

Anonymous Coward | more than 2 years ago | (#37131246)

I'll just avoid implementing Three Laws Safe to escape the patent trolls.

Re:M5, Dr. Daystrom, what have you wrought? (1)

suomynonAyletamitlU (1618513) | more than 2 years ago | (#37131260)

"Due to a patent licensing issue, our knockoff brain-chips have no safeguards against harming humans. However, you get them at 75% off!"

Nice job breaking it, hero.

Re:M5, Dr. Daystrom, what have you wrought? (0)

Anonymous Coward | more than 2 years ago | (#37132220)

The Three Laws do nothing as soon as the computer gets a law degree and then finds all sorts of ways to weasel out of them.

Lawyers have been doing the same damn thing as SKYNET for years now.

Re:M5, Dr. Daystrom, what have you wrought? (1)

AchilleTalon (540925) | more than 2 years ago | (#37131918)

Good! No more bugs, just mental diseases.

Somewhere... (1)

ElectricTurtle (1171201) | more than 2 years ago | (#37130382)

Ray Kurzweil is laughing at all the nay-sayers right about now.

Re:Somewhere... (0, Insightful)

Anonymous Coward | more than 2 years ago | (#37130952)

So right now there are 17 comments. Get this: you have to scroll past the first 12 to actually get to anything relevant to the story. The rest is just a bunch of Slashtards having their usual circlejerk over Skynet, sharks with lasers, etc. I am sad that I had only 3 mod points to use for -1 Redundant mods.

Right now 12 out of 17, or 70% of all comments are useless, irrelevant, SPAM. How many of you want to be hypocritical bastards about this and say "well THIS is OUR KIND of SPAM and that somehow makes it okay, even though it fucking KILLS the signal-to-noise for anyone wanting anything actually on-topic"?

I mean I get the occasional joke here and there. But most of you really aren't as funny as you seem to think you are. Not by a long shot. There are comical sites intended for this sort of thing. You can go there and get absolutely humiliated as you are booed off the virtual stage when you tell another repetitive, redundant meme that wasn't very funny or clever in the first place ("someone's working on AI - KNEE JERK - MENTION SKYNET! - HEE HEE SO FUNNY - not). Why not go do that and experience how humor-impaired you really are?

Re:Somewhere... (1)

smelch (1988698) | more than 2 years ago | (#37131058)

Damn girl, you got it.

Re:Somewhere... (0)

Anonymous Coward | more than 2 years ago | (#37131162)

Right now 12 out of 17, or 70% of all comments are useless, irrelevant, SPAM.

I noticed that too. Most of the trolls must be busy today.

Re:Somewhere... (0)

Anonymous Coward | more than 2 years ago | (#37131442)

Hold on, I have to make this a relevant post... made relevant: "" ftfy

Re:Somewhere... (1)

slartibartfastatp (613727) | more than 2 years ago | (#37131764)

Right now 12 out of 17, or 70% of all comments are useless, irrelevant, SPAM.

including yours.

Re:Somewhere... (0)

Anonymous Coward | more than 2 years ago | (#37131782)

Oh great, another useless post telling me how useless all the other posts are.

Re:Somewhere... (0)

Anonymous Coward | more than 2 years ago | (#37132138)

Man, I'd steer well clear of anything vaguely to do with Apple then. Same level of spam but even less humour.

Re:Somewhere... (1)

Rik Rohl (1399705) | more than 2 years ago | (#37135988)

What it does show is that IBM won't have too much trouble scaling their chips up to model the average slashdotter's brain.

Re:Somewhere... (0)

Anonymous Coward | more than 2 years ago | (#37139452)

Good on you mate for adding to the irrelevance. At least the other comments had something to do with ai.

Cat brains? (1, Funny)

The Mighty Buzzard (878441) | more than 2 years ago | (#37130422)

What, they couldn't think of anything more psychotic?

Re:Cat brains? (1)

Abstrackt (609015) | more than 2 years ago | (#37130826)

Well, they are a fantastic example of hyper-threading.

Re:Cat brains? (2, Funny)

Anonymous Coward | more than 2 years ago | (#37131014)

And they taught it to drive? My cat is a terrible driver.

Re:Cat brains? (0)

Anonymous Coward | more than 2 years ago | (#37141988)

Toonces?

Just one of the reasons Saturday Night Live used to be a great show.

Re:Cat brains? (0)

Anonymous Coward | more than 2 years ago | (#37131992)

More importantly...

Can it now truly simulate a proof of concept Cat/Butter Toast anti-gravity device? Or the inconceivable Cat/Chicken Tikka Masala anti-gravity device?

sorry, but cat brain been done already. (1)

Lead Butthead (321013) | more than 2 years ago | (#37133428)

Sorry, but the cat brain [wikimedia.org] has already been done, a decade or so ago.

Finally... (2)

Ardx (954221) | more than 2 years ago | (#37130430)

...something for the zombie PCs to eat

Similar to what happened 30 years ago... (4, Interesting)

jimwormold (1451913) | more than 2 years ago | (#37130444)

... and very timely of The Register to bring it up: http://www.reghardware.com/2011/08/18/heroes_of_tech_david_may/ [reghardware.com]

Re:Similar to what happened 30 years ago... (1)

Lifyre (960576) | more than 2 years ago | (#37131588)

Very interesting article, thank you for sharing.

Uh, oh. (0)

Anonymous Coward | more than 2 years ago | (#37130516)

If I were to patent Skynet jokes now, would I be able to extract enough license fees from this thread to become wealthy?

Re:Uh, oh. (1)

webmistressrachel (903577) | more than 2 years ago | (#37130734)

No. The first post has prior art. So does everyone who ever posted one. Sorry, was your post meant to be funny?

Re:Uh, oh. (0)

Anonymous Coward | more than 2 years ago | (#37131114)

Woosh... Yes, but not on your level, apparently.

The "power" of a cat's brain? (3, Funny)

kmdrtako (1971832) | more than 2 years ago | (#37130536)

If it gets out of control, we just need the equivalent of either a laser pointer or catnip to bring it to its knees.

Cargo Cult of the Neuroscience World (3, Informative)

Anonymous Coward | more than 2 years ago | (#37130572)

This project attempts to build something as close to a brain as we currently can. However, trying to replicate something by copying only its most outwardly obvious features probably won't work, and IBM's attempt to recapitulate thought reminds me of the fiasco of the cargo cults, where natives created effigies of technology they didn't understand because they thought that, through their imitation of the colonizers, cargo would magically be delivered to them. From http://en.wikipedia.org/wiki/Cargo_cult [wikipedia.org] :

(begin quote)
The primary association in cargo cults is between the divine nature of "cargo" (manufactured goods) and the advanced, non-native behavior, clothing and equipment of the recipients of the "cargo". Since the modern manufacturing process is unknown to them, members, leaders, and prophets of the cults maintain that the manufactured goods of the non-native culture have been created by spiritual means, such as through their deities and ancestors, and are intended for the local indigenous people, but that the foreigners have unfairly gained control of these objects through malice or mistake.[3] Thus, a characteristic feature of cargo cults is the belief that spiritual agents will, at some future time, give much valuable cargo and desirable manufactured products to the cult members.
(end quote)

Computational folks can still make progress studying how the brain works, but I think we should focus on understanding first which problems brains solve better than computers, and second which computational tricks are used that our computer scientists haven't yet discovered. Merely emulating a close approximation to the best understanding we have of neural hardware looks splashy, but isn't guaranteed to teach us anything, let alone replicate human intelligence.

Re:Cargo Cult of the Neuroscience World (3, Insightful)

maxwell demon (590494) | more than 2 years ago | (#37130894)

If the emulation is successful, one can do to it what you can't easily do with the real thing: manipulate it in any conceivable way to examine its inner workings, save its state, and run different tests on exactly the same "brain" without the effects of earlier experiments interfering (e.g. if some stimulus is new to it, then it will be new to it even the 100th time), and basically do arbitrary experiments on it without PETA complaining.
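That save-and-rewind workflow is trivial once the brain is data. A toy Python sketch; the state layout and the update rule are invented purely for illustration, not taken from any real simulator:

    import copy

    # Toy "brain": synaptic weights plus transient activation state.
    brain = {"weights": [0.2, -0.5, 0.8], "activation": [0.0, 0.0, 0.0]}

    def present_stimulus(b, stimulus):
        # Hypothetical update rule: the stimulus perturbs activations,
        # and activations slowly bend the weights (i.e. the brain learns).
        b["activation"] = [a + s for a, s in zip(b["activation"], stimulus)]
        b["weights"] = [w + 0.01 * a for w, a in zip(b["weights"], b["activation"])]

    checkpoint = copy.deepcopy(brain)         # freeze the exact state

    present_stimulus(brain, [1.0, 0.0, 0.5])  # experiment 1

    brain = copy.deepcopy(checkpoint)         # rewind: the stimulus below
    present_stimulus(brain, [0.0, 1.0, 0.0])  # is "new" to the brain again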

Re:Cargo Cult of the Neuroscience World (0)

Anonymous Coward | more than 2 years ago | (#37130936)

You are assuming the simulation will do anything intelligent and brain-like in the first place; otherwise the manipulations you suggest wouldn't be worth much. That's a testable hypothesis, and they should be able to test it now that they have chips, so we won't have to wait long to see who's right. However, if the only thing this chip exhibits is some spontaneous oscillations and perhaps a little bit of pattern learning (my best guess), we'll know the IBM folks didn't have the full recipe for a smart brain.

Re:Cargo Cult of the Neuroscience World (1)

ceoyoyo (59147) | more than 2 years ago | (#37131002)

You imply (I notice you don't come right out and say it) that they're "trying to replicate something by copying only its most outwardly obvious features." Care to back that up? What are the outward features they're copying? What are the non-obvious ones they should be copying?

There is lots of research into which problems brains solve better than computers, and a fairly good list. We also have a rough idea of how brains make these computations better than computers, and have had a fair bit of success copying them.

It's only a cargo cult if you keep doing it without success. If you build a runway and a plane lands, then you do it again and a plane lands again, etc. it's not a cargo cult.

Re:Cargo Cult of the Neuroscience World (0)

Anonymous Coward | more than 2 years ago | (#37131838)

True cargo cult replication would mean that they were making slabs of grey meat that look like brains and hoping they would work like a human brain. What IBM is doing could hardly be considered a cargo cult technique. While it's true that these chips are probably primitive compared to what is going on in our brains, and are based on our limited knowledge of the behavior and connections we can observe in vivo and reproduce in silicon, calling it cargo cult is... wait, am I being trolled.... DAMN.

Re:Cargo Cult of the Neuroscience World (1)

bouldin (828821) | more than 2 years ago | (#37142768)

There aren't a lot of details on IBM's artificial neural networks, but generally ANNs only model a few characteristics of actual brains. It's very superficial.

For example, the central auditory system [wikipedia.org] in the mammalian brain includes many different types of neurons with very different sizes, shapes, and response properties. These are organized into tissues that are further organized into circuits. There is a significant architecture there.

To contrast, many ANNs use a simple model of a neuron (input, weight, fire or don't fire), with a simplistic architecture based on a few layers. The learning algorithms have little to do with the neural analogy and much more to do with statistical optimization.
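For comparison, the "input, weight, fire or don't fire" model described above really is just a few lines of code, which is part of the point. A minimal sketch (the numbers are arbitrary):

    # McCulloch-Pitts-style unit: weighted sum, fixed threshold, binary
    # output. No cell types, no geometry, no timing.
    def neuron(inputs, weights, threshold=0.5):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    print(neuron([1, 0, 1], [0.4, 0.9, 0.2]))   # 0.4 + 0.2 = 0.6 >= 0.5 -> 1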

Re:Cargo Cult of the Neuroscience World (1)

ceoyoyo (59147) | more than 2 years ago | (#37143706)

They're not building perceptrons like you might for a high school science fair project. IBM has put considerable effort into cortical mapping, uses simulated neurons that exhibit spiking behaviour, simulates axonal delays, has made some effort at realistic synapses, etc. (http://www.almaden.ibm.com/cs/people/dmodha/SC09_TheCatIsOutofTheBag.pdf)

But wait... are you the original AC who was criticizing IBM for simply trying to copy the features of a brain without understanding it? Are you suggesting that IBM should try to copy the features of a brain MORE closely? Make the runway and fake radio antenna look better?

Re:Cargo Cult of the Neuroscience World (1)

bouldin (828821) | more than 2 years ago | (#37146088)

Thanks for the link, but it's still a pretty simple neural model. Just not as simple as many other common models, which is why they take great care to call it "biologically inspired." But, the focus of the research is on simulation, not intelligence.

To the original point, the researchers have simulated a better approximation of NNs without shedding any light on the "computational tricks" that make brains so smart. While the paper makes clear that this is a model that can be used to test neural theories, the goal of the project was to make a large scale simulation more feasible. The researchers did not claim to make better learning algorithms.

I disagree that we "have had a fair bit of success copying" the success of brains. Most machine learning these days (computer vision, etc.) does not even use neural models, and really reflects the amount of engineering effort and math that has gone into solving the problems. Some of the neural simulators I've read about have helped understand neural physics and chemistry, but not necessarily intelligence. We do, of course, have ANNs that accomplish very specific recognition or control tasks, but we do not have ANNs that, say, extract auditory features and build mental models of sound patterns.

And no, I am not the original AC. I generally agree with the last paragraph of that post (that we should look for the tricks that make brains powerful instead of just copying neurons better), although I don't quite agree that this is the cargo cult phenomenon.

Re:Cargo Cult of the Neuroscience World (1)

ceoyoyo (59147) | more than 2 years ago | (#37147296)

A lot of the machine learning algorithms we use today are based on statistical or classification techniques that are mathematically connected to neural networks, and their development has in part been inspired by them. Many of our machine vision and hearing algorithms are based on phenomena that have been observed in the brain's visual and auditory cortex: the differences of Gaussians in SIFT, or the wavelets in SURF, for example.

Have we got a machine that wakes up one day, says hello and asks for a cheeseburger? No, of course not. That's kind of the end goal, isn't it? But we've come a LONG way in the bits and pieces, and claiming that progress is not due at all to studying how the brain does it shows a lack of understanding of the underlying principles.
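For the curious, the difference-of-Gaussians operation mentioned above is nearly a one-liner: subtracting two blurred copies of an image gives a band-pass response resembling the center-surround receptive fields of early vision. A sketch using NumPy/SciPy, with arbitrary sigma values:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    image = np.random.rand(64, 64)   # stand-in for a grayscale image

    # Difference of Gaussians: blur at two scales and subtract. The result
    # responds to blob-like structure between the two scales.
    dog = gaussian_filter(image, sigma=1.0) - gaussian_filter(image, sigma=2.0)

    # Crude keypoint pick: strong extrema of the DoG response.
    keypoints = np.argwhere(np.abs(dog) > 3 * dog.std())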

Re:Cargo Cult of the Neuroscience World (1)

timeOday (582209) | more than 2 years ago | (#37131720)

No, actually I think there will be many good applications for this style of processing regardless of how biologically accurate it is. Massive parallelism, co-locating data and computation, perhaps some analog computation... these are directions that computation is taking as the Von Neumann architecture runs up against physical limits. Nobody expects (I hope) this new chip to compute anything that couldn't already be done, eventually, on a conventional desktop PC. But if it's possible to drastically increase computational efficiency (cost, power, size), people start thinking about applications and algorithms that were previously out of reach with the resources they had.

BBC Article On This (1)

lobiusmoop (305328) | more than 2 years ago | (#37130628)

IBM produces first 'brain chips' [bbc.co.uk]

Bonus geek points for spotting the error on this page.

Re:BBC Article On This (1)

NikeHerc (694644) | more than 2 years ago | (#37131056)

Bonus geek points for spotting the error on this page.

"... while the other contains 65,636 learning synapses."

Re:BBC Article On This (1)

MachDelta (704883) | more than 2 years ago | (#37131286)

Maybe an intern had an accident and, uh, "donated" his brain to science.

"Extra? What extra? It's always been designed with 65,636 synapses. No, that doesn't look like human tissue to me at all. Listen, who's the scientist here?"

Come to think of it, maybe the whole thing is made from interns' brains. It would definitely be cheaper.

Re:BBC Article On This (1)

_0xd0ad (1974778) | more than 2 years ago | (#37132040)

Has that been fixed, or did you misread it? The page currently states,

One chip has 262,144 programmable synapses, while the other contains 65,536 learning synapses.

262,144 = 2^18
65,536 = 2^16

pointless goal (0)

Anonymous Coward | more than 2 years ago | (#37130660)

Why settle for a chip equal to a human brain? Seems like a pointless thing to do.

I'd at least make it comparable with a human brain the size of a hangar, or something even larger.

IBM is way behind (4, Interesting)

codeAlDente (1643257) | more than 2 years ago | (#37130766)

IBM has been working fast and furious ever since Kwabena Boahen showed them a chip (that actually was based on neural architecture) that matched the performance of their massive Blue Brain cluster, but used something like 5-10 W. Sounds like they're still playing catch-up. http://science.slashdot.org/story/07/02/13/0159220/Building-a-Silicon-Brain [slashdot.org]

Re:IBM is way behind (1)

Anonymous Coward | more than 2 years ago | (#37135402)

Actually, three of the lead researchers on this project are graduates of the Boahen lab and work for IBM creating this chip. They know the design decisions they put in place creating Neurogrid and are not behind in any sense compared to the work they did with Neurogrid. The neuromorphic community is quite small and there is a fair amount of inbreeding. Qualcomm and UCSD are also working towards some medium-to-large-scale hardware simulators, but they are not out of fab yet.

Ok , its a neural net in hardware. Is this new? (1)

Viol8 (599362) | more than 2 years ago | (#37130776)

I'm sure this has been done before, or am I missing something here?

Re:Ok , its a neural net in hardware. Is this new? (0)

Anonymous Coward | more than 2 years ago | (#37130922)

Actually I think that's the point. Generally neural nets of any useful complexity are built in software and run on conventional computing clusters. There have been hardware ANN nodes produced, but I think this may be the first instance of a complete ANN of any useful complexity being manufactured in hardware.

Re:Ok , its a neural net in hardware. Is this new? (1)

chthon (580889) | more than 2 years ago | (#37131030)

That was indeed the first thing I thought about.

The basic functionality of neural networks has long been understood. I have at home an antique article (1963!) with a schematic of an electronic neuron (built from a couple of transistors).

One of the things Carver Mead was involved in during the late '80s was the design of VLSI neuron structures.

So, no, this is not really new, but perhaps with the larger integration the IBM researchers could add better or more learning circuitry.

Re:Ok , its a neural net in hardware. Is this new? (2)

Dr_Ish (639005) | more than 2 years ago | (#37131568)

As best I can tell from the scant information in the article, this is merely a hardware implementation of standard neural network architectures. Many of these were described as software implementations in the mid-1980s by Rumelhart, McClelland et al. in their two-volume work *Parallel Distributed Processing* [mit.edu]. Many of the putatively revolutionary features of this implementation, like on-board memory and modifiable connections, are described there. Since that time, neural network technology has advanced quite a bit, as can be seen by inspecting journals such as *Connection Science* [tandf.co.uk] or *Neural Computation* [mitpressjournals.org]. So, despite all the hyperbole here, as best I can tell, this is not really news.
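Those "modifiable connections" boil down to a weight-update rule. A minimal delta-rule sketch of the PDP-era flavor; the learning rate and training data are illustrative:

    # Delta-rule learning for one linear unit: nudge each modifiable
    # connection in proportion to its input and the output error.
    def train(samples, epochs=200, lr=0.1):
        w = [0.0, 0.0]
        for _ in range(epochs):
            for x, target in samples:
                y = sum(wi * xi for wi, xi in zip(w, x))
                error = target - y
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        return w

    # Learn the linear mapping y = 2*x0 - x1 from three examples.
    samples = [([1, 0], 2.0), ([0, 1], -1.0), ([1, 1], 1.0)]
    print(train(samples))   # approaches [2.0, -1.0]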

Re:Ok , its a neural net in hardware. Is this new? (1)

timeOday (582209) | more than 2 years ago | (#37131840)

In previous decades, alternative computing hardware never made sense economically. Sequential, digital, uniform memory access computers had been progressing so rapidly that special-purpose parallel/connectionist machines were obsolete almost before they hit the market. Now we are hitting the physical limits of the conventional architecture, which may create niches for alternate ones. (Arguably, GPUs already prove this point.)

Why try to simulate humans? (1)

bussdriver (620565) | more than 2 years ago | (#37131608)

Why aspire to simulate human brains? We create more than we need already...
Artificial Intelligence always beats real stupidity.

"We are all born ignorant, but one must work hard to remain stupid" -Ben Franklin

Re:Why try to simulate humans? (0)

Anonymous Coward | more than 2 years ago | (#37134340)

One word: immortality.

Re:Why try to simulate humans? (1)

jedwidz (1399015) | more than 2 years ago | (#37137576)

That's immortality for the machines, not for the people.

I'd agree that the biggest problem with human-level AI is that the economics of it are terrible. For your gazillions of research dollars and hours you get something that's already on tap and cheap as dirt, especially in the third world.

Even once you get your amazing smart supercomputer, you still have to train it (human employees are mostly good to go), house it (it'll be bigger than your average high-rise apartment), and feed it/cool it with enough electricity to power a neighborhood. Sure, with a lot more research it'll come down in size and power requirements, just maybe comparable to a real human brain, at which point it'll be all the more obvious how pointless the endeavor really was.

But wait, if you can equal a human brain, you can scale it up and transcend our intellectual limitations! That may be so, but we already routinely do that through collaboration, and with improvements in communication tech we're getting a lot better at it.

Or, if 'bigger brain' really beats the sum of lesser minds, we've barely even started on the possibilities for augmenting our biological ones, like with cybernetics, genetic engineering, smart drugs, maybe even stem cell therapies and brain grafts. There's got to be something there that beats building a whole new brain from scratch.

I'd lump the race for AI in with the space race as noble aspirations whose stated goal is to deliver solutions looking for problems. How bad are things going to have to get on Earth before your average Joseph would rather go live on Mars or Venus instead? For any solution that permits us to colonize space, there's a simpler, cheaper solution for letting us stay put.

Re:Ok , its a neural net in hardware. Is this new? (1)

mswhippingboy (754599) | more than 2 years ago | (#37132970)

I'm sure this has been done before , or am I missing something here?

No, this has not been done before. The neurons being implemented here are (to a limited degree) far closer in functionality to a "real" neuron than a conventional neural net (which isn't really close at all). This project is IBM's takeaway from the Blue Brain project of a couple of years ago. Henry Markram and Modha had a parting of ways over how the neurons were to be implemented. Markram wanted the neurons to be as biologically accurate as possible (at the expense of performance) while Modha felt they were close enough and that scaling up the quantities was what was important. Only time will tell who was correct in the long run.

Re:Ok , its a neural net in hardware. Is this new? (0)

Anonymous Coward | more than 2 years ago | (#37136804)

Yes, it's been done before.
The main innovation here is spending big $$ to make an efficient and scalable silicon implementation.
I am sure there are some nice inter-networking tricks in there as well...

Just like an SSL hardware implementation - nothing new, same as software, only 1000x faster.

Of course, precisely how useful or useless a 1bn neuron network is remains to be seen.

Just great (1)

Nanosphere (1867972) | more than 2 years ago | (#37130912)

I step away from the new PC for a minute and come back to find browser tabs open to newegg and the sound "awww yeah" coming from the speaker.

Re:Just great (1)

ColdWetDog (752185) | more than 2 years ago | (#37131590)

I step away from the new PC for a minute and come back to find browser tabs open to newegg and the sound "awww yeah" coming from the speaker.

Apparently, FTFA, if you stepped away from the PC, you would be more likely to find the browser tabs on "laser pointers" and "bulk catnip".

Re:Just great (1)

sycodon (149926) | more than 2 years ago | (#37132312)

Seriously though...I need it to sort and classify my porn collection.

I can simulate a cat's brain... (1)

nani popoki (594111) | more than 2 years ago | (#37131118)

all I need is a chip with a sleep timer. No other functions are required.

Re:I can simulate a cat's brain... (0)

Anonymous Coward | more than 2 years ago | (#37131972)

and some logic for the lick balls servo.

Why would I want a computer that ... (1)

Old97 (1341297) | more than 2 years ago | (#37131134)

provides random responses to input? I can imagine loading it with a bunch of facts and it ignoring all of them while it launches into an angry rant and conspiracy theories. I get that at Slashdot already.

why do this? (1)

markhahn (122033) | more than 2 years ago | (#37131136)

it's a bit hard to understand what the point of this research is. If you actually want to understand neural behavior, simulations are obviously a better path: arbitrarily scalable and more flexible (in reinforcement schedules, etc.). If the hope is to produce something more efficient than simulation, great, but where are the stats on fan-in, propagation delay, wire counts, joules-per-op, etc.? Personally, I find that some people simply have a compulsion to try to replicate neurons in silico, not for any reason, but just because it's their "thing".

worse is the media coverage that loves the very misleading analogy of neurons:transistors. they're actually very dissimilar, and the constraints each operates under are very different.

Re:why do this? (1)

bws111 (1216812) | more than 2 years ago | (#37132508)

TFA states what the goal is - running more complex software on simpler computers. It even says what the joules-per-op is - 45 picojoules per event, about 1000 times less than conventional computers.
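As a back-of-the-envelope check on that figure: assuming a billion events per second (the rate is an assumption for illustration, not from TFA), the quoted 45 pJ/event and the implied conventional figure work out as follows:

    events_per_second = 1e9           # assumed event rate
    pj_neuro = 45.0                   # figure quoted in TFA
    pj_conventional = 45.0e3          # "about 1000 times" more, per TFA

    watts = lambda pj_per_event: pj_per_event * 1e-12 * events_per_second
    print(watts(pj_neuro))            # 0.045 W
    print(watts(pj_conventional))     # 45.0 W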

Re:why do this? (1)

mswhippingboy (754599) | more than 2 years ago | (#37134128)

it's a bit hard to understand what the point of this research is.

The (unstated) point is that there is a race afoot to be the first to develop a system that will achieve AGI.

For the first time ever, we've entered an era where we are beginning to see hardware powerful enough to perform large scale cortical simulations. Not simple ANNs, but honest to god, biologically accurate simulations of full cortical columns.

Having said that, Modha's penchant for jumping the shark is well documented. Rather than insisting on nothing less than biologically accurate neural circuitry (as Markram and his Blue Brain project were implementing), Modha has taken the approach that biologically "inspired" is close enough and that massive scalability will result in the fastest route to AGI.

I know of at least three major, well funded projects attempting to reach human scale brain emulation/simulation, each with their own degree of biological accuracy. As more and more impressive results are showcased, expect more investment in R&D to follow from others (MS, Apple, Oracle, Google) as they will not want to be left behind.

Buckle your seat-belts, folks. I do agree with Modha on one point: we are on the precipice of the "dawn of a new paradigm".

Re:why do this? (1)

gfody (514448) | more than 2 years ago | (#37137406)

"Jumping the gun" [wikipedia.org]
"Jumping the shark" [wikipedia.org]

Re:why do this? (1)

mswhippingboy (754599) | more than 2 years ago | (#37137748)

Jump the shark is what I said and what I meant. Modha has a reputation for over-hyping capabilities in order to drive up interest and ultimately additional R&D dollars. This is fine until the reality does not match the hype and AI gets a black eye over it.

If these brains work like management's brains... (1)

Kungpaoshizi (1660615) | more than 2 years ago | (#37131560)

It won't even get off the ground; they'll spend too much time "thinking" about them, and the workers will take the ideas to other companies that actually pay for their hard labor. IBM blows now.

But Why?!? (1)

sgt scrub (869860) | more than 2 years ago | (#37132188)

A microchip with about as much brain power as a garden worm...

They invented the Mother-in-Law?

AI (1)

SuperTechnoNerd (964528) | more than 2 years ago | (#37132548)

"We don't know who struck first, us or them. But we do know that is was us that scorched the sky"

Just what I've been waiting for... (0)

Anonymous Coward | more than 2 years ago | (#37132638)

A programmable pussy!

Well... (0)

Nabeel_co (1045054) | more than 2 years ago | (#37134162)

I, for one, welcome our new cat brained electronic overlords.

Hooray! (1)

Greyfox (87712) | more than 2 years ago | (#37136440)

Finally, something for my zombie processes to eat!