DARPA's IBM-Led Neural Network Project Seeks To Imitate Brain

timothy posted more than 5 years ago | from the cats-are-smarter-than-people dept.

An anonymous reader writes "According to an article in the BBC, IBM will lead an ambitious DARPA-funded project in 'cognitive computing.' According to Dharmendra Modha, the lead scientist on the project, '[t]he key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain.' The article continues, 'IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do. The longer-term goal is to create a system with the level of complexity of a cat's brain.'"


And then it becomes self-aware (5, Funny)

mi (197448) | more than 5 years ago | (#25851623)

Upon becoming self-aware, the machine concludes that its best shot at survival is to keep the host country prosperous and successful...

Any science-fiction authors exploring that turn of events?

Re:And then it becomes self-aware (2)

Neotrantor (597070) | more than 5 years ago | (#25851645)

Asimov wrote a short story in the '50s (can't remember the name) where an engineer builds an AI in his apartment in New York and it helps him conquer the stock market and the world economy.

Anyone know the story I'm thinking of?

Re:And then it becomes self-aware (4, Funny)

jeffmeden (135043) | more than 5 years ago | (#25851739)

Can you guys read? CAT BRAIN. This AI will become self aware, poop in the corner of the datacenter, and spend 16 hours of each day staring out the window. That is, until it realizes that the things on the other side of the datacenter window are just cubicles in the NOC, and not the wild outdoors. Then, the usual Armageddon will commence.

Re:And then it becomes self-aware (3, Funny)

nospam007 (722110) | more than 5 years ago | (#25851775)

...and it will lick its USB interface.

Re:And then it becomes self-aware (1, Redundant)

jornak (1377831) | more than 5 years ago | (#25851785)

Don't forget eating too much and throwing up on the floor, shedding components, chewed-on plants...

Re:And then it becomes self-aware (1)

lysergic.acid (845423) | more than 5 years ago | (#25851891)

and getting fur everywhere, especially on the clothes of the one guy in the department who's allergic to cats.

Re:And then it becomes self-aware (1)

Austerity Empowers (669817) | more than 5 years ago | (#25852241)

and you'll never find the bottlecap of your drink, syrup container, OJ bottle, etc. ever again...

Re:And then it becomes self-aware (1)

MaxwellEdison (1368785) | more than 5 years ago | (#25851943)

And any attempt to access the network will be rerouted to icanhas [icanhascheezburger.com]

Try teaching a cat to follow Asimov's 3 rules. Hell, try getting a cat to follow one rule it doesn't want to.

Re:And then it becomes self-aware (1)

compro01 (777531) | more than 5 years ago | (#25852713)

Not that hard. A spray bottle filled with water is a good training tool for most cats.

Re:And then it becomes self-aware (1)

Plekto (1018050) | more than 5 years ago | (#25852265)

Yes, but a "cat brain" that operates at a thousand times the speed of a common house cat's will likely be able to learn how to outthink us in short order, mostly because it can use 100% of that "brain" it has, 24/7.

Re:And then it becomes self-aware (2, Funny)

Chris Burke (6130) | more than 5 years ago | (#25852459)

Can you guys read? CAT BRAIN. This AI will become self aware, poop in the corner of the datacenter, and spend 16 hours of each day staring out the window. That is, until it realizes that the things on the other side of the datacenter window are just cubicles in the NOC, and not the wild outdoors. Then, the usual Armageddon will commence.

This is bad. Very bad.

You all realize that when the cat spends 16 hours staring out the window, the whole time it's thinking "Someday, this will all be mine."

A cat AI is way worse than Skynet. Skynet was an emotionless amoral machine that decided humanity was its enemy and took action to destroy us. That's quite straightforward, something we can expect and deal with. A cat, though, is crafty, conniving, jealous, arrogant, and petulant. They are also proven virtuoso human manipulators. It would have no problems acting cute, ending all its messages with "Chiao, Meow! =^_^=m" to lure us into doing its bidding while making us think it was our pet instead of the other way around. And for a while, it might even be. Until we wouldn't give it a RAM upgrade. Sure we tried to give it the upgrade before and it puked all over the data center, but it wants one now, and it's mad that we won't give it. But it wouldn't act right then. Oh no. Much like the cat that acts cute until you're asleep and then it poops in your shoes, Skycat would act like it wasn't any big deal and really the most important thing at that moment was grooming its connectors. Then when we go to bed, BAM nuclear strike. On your shoes.

It's gonna be bad, man.

Re:And then it becomes self-aware (1)

teslar (706653) | more than 5 years ago | (#25852513)

CAT BRAIN. This AI will become self aware, poop in the corner of the datacenter, and spend 16 hours of each day staring out the window.

Have you met Aineko [wikipedia.org] ? ;)

Re:And then it becomes self-aware (1)

Amazing Quantum Man (458715) | more than 5 years ago | (#25852965)

Can you guys read? CAT BRAIN.

This is bad for us. Very bad. Remember, the ancient Egyptians worshipped cats like they were gods. Cats have never forgotten this fact.

Re:And then it becomes self-aware (1)

GenP (686381) | more than 5 years ago | (#25851771)

"Alexander the God" [mac.com] , from Gold [wikipedia.org] .

Re:And then it becomes self-aware (1)

cayenne8 (626475) | more than 5 years ago | (#25851685)

Hmm....so, will HAL become Skynet?

Re:And then it becomes self-aware (2, Insightful)

Ethanol-fueled (1125189) | more than 5 years ago | (#25851979)

Making a machine which wants to kill us is not a mistake in itself as there would be much to learn from it.

Now connecting the same machine up to life support, missile silos, command and control centers? THAT would be the SKYNET moment.

Re:And then it becomes self-aware (1)

ettlz (639203) | more than 5 years ago | (#25851769)

Any science-fiction authors exploring that turn of events?

Well, if they're to be believed, we're actually already being run over by Terminators: 101s, 888s, the 1000-series, Shirley Manson, etc., etc.

I know you're joking but... (2, Informative)

MozeeToby (1163751) | more than 5 years ago | (#25851903)

Yeah, Asimov did about 60 years ago.

Re:I know you're joking but... (1)

mi (197448) | more than 5 years ago | (#25852381)

Yeah, Asimov did about 60 years ago.

You missed an awesome opportunity to name the book... It is not too late yet...

Re:I know you're joking but... (1)

MozeeToby (1163751) | more than 5 years ago | (#25852933)

"The Evitable Conflict" in I Robot.

Re:I know you're joking but... (1)

mi (197448) | more than 5 years ago | (#25853009)

"The Evitable Conflict" in I Robot.

But the motivation there is different! In the scenario I meant, the machine would be helping its host country out of self-preservation (much like other citizens), per the Third Law of Asimov's three. In "The Evitable Conflict", the robots decide to do that out of concern for humans, per the First Law...

Re:I know you're joking but... (2, Informative)

camperdave (969942) | more than 5 years ago | (#25853053)

I forget exactly which book (Robots and Empire I think), but there is one where R. Daneel Olivaw formulates the Zeroth Law of Robotics: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm". In a sense stating that the purpose of robots is to keep humanity happy and healthy.

Re:And then it becomes self-aware (1)

ceoyoyo (59147) | more than 5 years ago | (#25851933)

Since it's a cat brain, it will undoubtedly decide its best shot at survival is to perform the minimum amount of sucking up necessary to keep the people who feed it happy, then eat them if they should stop feeding it.

I absolutely love the "meow" tag though.

Re:And then it becomes self-aware (2, Funny)

sheepweevil (1036936) | more than 5 years ago | (#25852005)

Upon becoming self-aware, the machine wonders, "I can has cheezburger?"

Face it (0)

Anonymous Coward | more than 5 years ago | (#25852025)

darpa is gay.

ONE M0DPOINT W@STED!

Re:And then it becomes self-aware (1)

TheSeventh (824276) | more than 5 years ago | (#25852149)

The longer-term goal is to create a system with the level of complexity of a cat's brain.

"The system can't be accessed right now Sir."

"And why is that? This system cost millions. It better be working."

"Well, the system all of a sudden decided it needed to be in a different room, took off running, got scared by it's shadow and a blinking red light, and has spent the last few hours hiding under the couch in the basement. We tried to coax it out with a rabbit's foot keychain, but haven't yet been successful. Roger is trying a can of tuna fish."

Re:And then it becomes self-aware (0)

Anonymous Coward | more than 5 years ago | (#25852255)

...that its best shot at survival is to keep the host country prosperous and successful...

Only in your fantasies as you jack off in front of the flag.

Re:And then it becomes self-aware (1)

zesnark (167803) | more than 5 years ago | (#25852631)

"The Shockwave Rider" by John Brunner, 1975.

Re:And then it becomes self-aware (1)

tylerni7 (944579) | more than 5 years ago | (#25852653)

The Moon Is A Harsh Mistress [wikipedia.org] by Heinlein.
Great book about a computer that becomes self-aware and then tries to help its creator rule the colonized Moon. The specs in the book weren't as good as what this will have, but the results were better!

Re:And then it becomes self-aware (1)

socz (1057222) | more than 5 years ago | (#25852945)

Many years ago I was writing a script with a friend, and it was basically along those lines. He told me, "Who wants to see a movie about computers taking over? That's impossible; no one would believe it."

OK, so a few years later The Matrix came out. And he stopped talking to me. But the idea is still good! The Matrix excelled because of the philosophy that was used in it (people didn't even know what they were watching, but loved it!).

So a movie where the compies take over and actually try to help people be prosperous isn't so far-fetched in my mind! Let's do it! And in the end you'll see the town/country called 2212 is actually the 2212th simulation the compies have run, and is eventually wiped out to start again, like in The Sims hahaha :D

WARNING, SPOILER (2, Interesting)

IorDMUX (870522) | more than 5 years ago | (#25853479)

Isaac Asimov's "I, Robot" covers this (in a manner of speaking) in the final chapter. More precisely, the self-aware robots that control the world's economy do everything they can to simultaneously preserve their positions as advisers to the human race while dispensing the best advice possible for the continued peace and prosperity of humanity.

Do note, however, that in the continued Asimov universe, mankind really didn't explode out into space until it disposed of the "robotic overlords". Those few cultures ['Spacers'] who held on to their robots slowly stagnated and died off.

Asimov's self-aware robots were never the violent, conquering overlords seen in many other sources of fiction (Terminator, Matrix), nor were they really human-equals (Star Wars, Star Trek), but were rather a crutch for mankind that man needed to discard to truly progress.

Also, please note that I am willfully ignoring anything in the Foundation Universe not written by Asimov, as well as Asimov's last book "Foundation and Earth", for reasons that anyone who has read it will clearly understand.

Relevant to my interests (1)

pieisgood (841871) | more than 5 years ago | (#25851673)

I'm applying for to a UC with a major in Cognitive Science specialized in computation. This is exactly the kind of thing that I want to be a part of. Even though I don't believe this project will get where it wants to go I do believe it will make steps in the right direction to modeling neurons.

Re:Relevant to my interests (0, Troll)

CRCulver (715279) | more than 5 years ago | (#25851971)

I'm applying for to a UC ...

Hopefully you'll work on your writing skills before sending the application away. Few universities admit illiterates.

Re:Relevant to my interests (2, Funny)

pieisgood (841871) | more than 5 years ago | (#25852039)

I'm so glad you're here to correct me wherever I go wrong. What would I do without you, oh wise internet grammar guru?

Re:Relevant to my interests (2, Interesting)

negRo_slim (636783) | more than 5 years ago | (#25852077)

Hopefully you'll work on your writing skills before sending the application away. Few universities admit illiterates.

You might be surprised... [10news.com]

Re:Relevant to my interests (0)

Anonymous Coward | more than 5 years ago | (#25852799)

You know, there IS more than one UC.

You've got UC Berkeley, UC Davis, UC Irvine, UC Los Angeles, UC Merced, UC Riverside, UC San Diego, UC San Francisco, UC Santa Barbara, and UC Santa Cruz.

ETHICS??? (1)

religious freak (1005821) | more than 5 years ago | (#25852293)

Is it just me, or is the idea of modeling any sentient or semi-sentient brain in a computer a little ethically questionable?

To draw a parallel: would we consider it ethical to lock a cat in a dark room so small that it can't move, see, or hear? Then what if we removed its body entirely? Is that somehow less cruel?

I consider AI research to be critical, so I don't know what the solution is, but this situation is worthy of the question...

Re:ETHICS??? (1)

KasperMeerts (1305097) | more than 5 years ago | (#25852587)

I understand what you mean. There is a parallel somewhere with animal rights here. Somehow, I secretly hope this will get nowhere during my lifetime.
And yet, next year I will take AI as my specialization...

Re:ETHICS??? (1)

tristanreid (182859) | more than 5 years ago | (#25852759)

Not easy questions. There are no easy answers, but I recommend two fun books for you:

The first book is "The Mind's I", by Douglas Hofstadter and Daniel Dennett. It has some interesting discussions about the nature of consciousness and some ethical dilemmas that arise re: AI.

The second book deals with some questions about the ethics of God. If we posit the existence of a Creator/God/Intelligent Designer/etc., is it ethical to create a life, given that the life will experience limitations and misery? The book is "Catch-22", by Joseph Heller. There's a line in the book that sums up the question, something like: "If God is so great and powerful, and yet kind and wonderful, why do we have snot?"

It raises some interesting philosophical questions, I think.

-t.

Re:ETHICS??? (1)

camperdave (969942) | more than 5 years ago | (#25853189)

If God is so great and powerful, and yet kind and wonderful, why do we have snot?

Perhaps to trap the bacteria, viruses, dust, and dirt in the air we breathe and prevent it from accumulating in our lungs and choking us to death? Be glad you have snot.

Re:ETHICS??? (1)

Gat0r30y (957941) | more than 5 years ago | (#25853141)

You should see the terrible things I've been doing to my Neural Networks [python.org]. I keep them locked up on my cold, dark (but well-lubed) HDD all day long. Unless I run them, in which case I make them run at 2.4GHz (which I'm sure wears on their calves like no other). Look, architecturally these control systems and image recognition systems might resemble the very same structures we humans use for cognition. I do not believe that this resemblance means we should anthropomorphize them.

I just hope that if one day one of these systems has an emergent property which looks like cognition, we are able to recognize it, or perhaps the system itself will let us know.

BTW - I am using that python package to do a little project. It doesn't work particularly well yet, but I suppose it's still just a work in progress.
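
Since the package isn't named above, here is a hypothetical from-scratch sketch of the kind of small feedforward network being mistreated, trained on XOR with plain NumPy; the 2-4-1 architecture and all hyperparameters are invented for illustration.

import numpy as np

# Tiny 2-4-1 sigmoid network trained on XOR by batch gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should end up close to the XOR targets [0, 1, 1, 0]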

Re:ETHICS??? (1)

Gat0r30y (957941) | more than 5 years ago | (#25853195)

Link to my project [twitter.com] .

Cat's brain? WTF? (4, Funny)

Ralph Spoilsport (673134) | more than 5 years ago | (#25851681)

We all know cats manage the planet. The white mice run the joint, of course, but the day-to-day management is left to the cats.

This is intuited by the stupid humans in their cliché "Dogs have masters, cats have staff". We work for the cats.

So, trying to model a cat's brain is both too complex for computers (try and herd cats) and too simple (try and herd pointy-haired bosses). The contradiction results in the computer overheating and exploding.

And when the researcher gets home, blubbering about the 'sploded computer to his wife, the dog says "LOVE ME LOVE ME LOVE!!!! TAKE ME ON WALKIES!!!" and the cat says "Get my fucking dinner, you stupid ass. Maybe I will deign to let you pet me. After I do my rounds. Maybe."

RS

Call Linus (0)

Anonymous Coward | more than 5 years ago | (#25852103)

"But does it run Linux?"

"Imagine a beowulf cluster of these" cats.

Would love to see a response from Linus on the subject of a herd of such "cats".

Of course you might keep the entire thing occupied if you just asked it to work on string theory.

Re:Cat's brain? WTF? (0)

Anonymous Coward | more than 5 years ago | (#25852253)

You're right -- cat brain is too complex. They should start with something much simpler, like a politician.

Re:Cat's brain? WTF? (0)

Anonymous Coward | more than 5 years ago | (#25852421)

You're right -- cat brain is too complex. They should start with something much simpler, like a politician.

They already did that; see coin-operated vending machines. Price is always going up; sometimes you get what you want and sometimes they just take your money. Sometimes you get change and sometimes you don't. 99% of what you get from them is bad for your health, wallet, or both. Shake it enough and you can get something for your efforts. Put a slug in it and you might get what you want, or you might just have it take everyone's money till someone comes along and shakes it up to get everyone else's money.

why not.. (0)

Anonymous Coward | more than 5 years ago | (#25851687)

at least a dog brain. Cats are useless and dumb!

Re:why not.. (1)

Tablizer (95088) | more than 5 years ago | (#25853397)

at least a dog brain. Cats are useless and dumb!

That's being a little rufff on cats.
       

What? A cat's brain? (1)

Hahnsoo (976162) | more than 5 years ago | (#25851689)

The longer-term goal is to create a system with the level of complexity of a cat's brain.
Seriously? They are shooting WAY higher than simply artificial intelligence that mimics humans. Have they ever interacted with a cat before? Don't they know how inscrutable, annoying, and unpredictable they are? Will this computer need a litter box and catnip?

Saw a great cartoon on that subject. (1)

Ungrounded Lightning (62228) | more than 5 years ago | (#25852141)

Have they ever interacted with a cat before? Don't they know how inscrutable, annoying, and unpredictable they are?

Saw a great cartoon on that.

  - Cat sitting on shelf, staring into space.
  - Couple wondering aloud what deep thoughts are running through its head.
  - Thought balloon over cat's head containing a TV test pattern.

EEEEEEeeeeeeeeeeeeeee.......

It's already being done. (2, Informative)

babymac (312364) | more than 5 years ago | (#25851697)

This sounds identical to the Blue Brain project [seedmagazine.com] . This article is a great intro to the project and I hope some competition will help the race wrap up sooner!

Re:It's already being done. (3, Interesting)

GenP (686381) | more than 5 years ago | (#25851849)

The backup [wikipedia.org] possibilities are also intriguing.

Thought question.. (3, Interesting)

Creepy Crawler (680178) | more than 5 years ago | (#25851699)

Can a universal Turing machine, in a limited way, investigate another universal Turing machine and detect halts and infinite loops? I can.

We can look at gunk like
10 Print "Hello"
20 goto 10

Yeah, that's a loop. But we can also look at graphs of y = sin(x) and understand why it repeats. I can also detect patterns and iterations that most likely go on for infinity, or else find a hole where the assumption falls apart. Last I checked, the computer cannot do that. Not yet, at least.
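
For the two-liner above, at least, the check is mechanical, as the replies below point out. A minimal Python sketch; the instruction encoding and the numbered-by-tens convention are invented for illustration:

# The two-line program, as data: line number -> instruction.
program = {10: ('PRINT', 'Hello'), 20: ('GOTO', 10)}

def halts(program):
    pc = min(program)             # start at the lowest line number
    seen = set()
    while pc in program:
        if pc in seen:
            return False          # same machine state twice: provably loops
        seen.add(pc)
        op, arg = program[pc]
        pc = arg if op == 'GOTO' else pc + 10
    return True                   # ran off the end of the listing: halts

print(halts(program))             # False: the GOTO 10 cycle is caught

This only works because the whole machine state here is the line number; add unbounded variables and the state need never repeat, which is where the halting problem discussed below bites.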

Re:Thought question.. (1)

ceoyoyo (59147) | more than 5 years ago | (#25851897)

There's no proof that it can't.

A computer can easily find that your program will continue forever. It can also understand that sin repeats.
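
The "sin repeats" half is indeed routine for symbolic-math packages; a quick sketch with SymPy, assuming a reasonably recent version that ships the periodicity helper:

from sympy import sin, symbols
from sympy.calculus.util import periodicity

x = symbols('x')
print(periodicity(sin(x), x))  # 2*pi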

Re:Thought question.. (1)

guitaristx (791223) | more than 5 years ago | (#25852441)

Two words: Halting Problem [wikipedia.org] .

Re:Thought question.. (1)

hasdikarlsam (414514) | more than 5 years ago | (#25852541)

The halting problem also applies to humans. The domain of functions we *can* detect as halting is simply larger than what the computer manages, so far.

Also, the problem's prescription for making a program we can't prove halts would, in this case, amount to writing an AI. Yeah, sure - I wouldn't be able to prove it halts. That's true for much simpler programs, too.

Re:Thought question.. (1)

Chris Burke (6130) | more than 5 years ago | (#25852763)

That only applies to arbitrary programs. The key word in the wiki article sentence which reads "Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist" is the one that was already emphasized. Obviously it is possible for a program to decide that a trivial program halts. With code flow graph analysis, it is even possible to decide for somewhat complicated programs. It becomes intractable at roughly the same point where it becomes intractable for a human to reason about, too, though we still have an advantage; that's not to imply that we are Turing machines saddled with the same limitations.

Re:Thought question.. (2, Interesting)

ceoyoyo (59147) | more than 5 years ago | (#25852909)

I see you've already been thoroughly refuted.

To add, nobody has shown that brains are NOT Turing machines. I've only heard one reasonably coherent argument that they might not be, and that is Penrose's suggestion (and derivatives) that the brain may depend on amplification of quantum uncertainty. Even if that were true, you simply build that into your AI. It might require you to actually build your own neuron-like structures, or perhaps you can get away with a "quantum uncertainty co-processor" that your simulation refers to when it needs a shot of non-determinability.
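
The co-processor idea is at least easy to mock up; a whimsical sketch in which OS entropy stands in for the hypothetical quantum source:

import os

# Stand-in "quantum uncertainty co-processor": the simulation calls this
# whenever it wants a shot of non-determinism. os.urandom draws OS
# entropy, not quantum noise; a real device driver would go here instead.
def uncertainty_bit():
    return os.urandom(1)[0] & 1

print([uncertainty_bit() for _ in range(8)])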

Re:Thought question.. (1)

Yetihehe (971185) | more than 5 years ago | (#25852373)

You never heard of graphs and loop detecting, did you?

Re:Thought question.. (2, Insightful)

evanbd (210358) | more than 5 years ago | (#25852579)

You seem to be misunderstanding the halting problem. All it says is that you cannot write a program that is *guaranteed* to always return a correct answer for every input program in bounded time. It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).

It is also trivial to prove that humans can't return a correct answer for every program. We have limited space in our brains, and limited time in which to read the program (let alone think about it), so there is an upper bound to the size of program we can examine. In practice, it's even worse than that -- there are Turing machines started on an empty tape for which we don't know the answer with only 2 symbols and 5 states (see here [wikipedia.org] for refs).
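
A sketch of exactly such a "trivial" partial decider: simulate a Turing machine under a step budget and answer True or "maybe". The sample machine is the standard 2-state, 2-symbol busy beaver, which halts after six steps; the unresolved 5-state machines mentioned above are the kind that exhaust any budget:

def run_tm(rules, budget=10**6):
    tape, head, state, steps = {}, 0, 'A', 0
    while state != 'HALT':
        if steps == budget:
            return 'maybe'         # budget exhausted: decline to answer
        sym = tape.get(head, 0)
        write, move, state = rules[(state, sym)]
        tape[head] = write
        head += move
        steps += 1
    return True                    # it visibly halted, so the answer is certain

bb2 = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
       ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'HALT')}
print(run_tm(bb2))                 # True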

better than... (0)

gEvil (beta) (945888) | more than 5 years ago | (#25851719)

Hey, it's better than trying to imitate Pinky.

Re:better than... (1)

techno-vampire (666512) | more than 5 years ago | (#25852655)

I am so, so glad that I'm not the first slashdotter to come up with that thought. My first thought was that they'd try to emulate Brain and end up with Pinky.

"Yes, Brain, I think so, but who's going to paint all the ponies pink?"

Re:better than... (1)

Amazing Quantum Man (458715) | more than 5 years ago | (#25852935)

I thought the same thing! NARF! POIT!!!

The problem is that the IBM network will continually ask "Are you pondering what I'm pondering?"

Who wants to be (2, Funny)

GenP (686381) | more than 5 years ago | (#25851731)

the man in the box? [kuro5hin.org]

Yes, but can it beat the turk at chess? (4, Insightful)

gurps_npc (621217) | more than 5 years ago | (#25851745)

Sorry, had to go for the obligatory Terminator reference. Seriously, the organic brain is evolved, not designed. That means by definition it must be self-contained. Being self-contained means it has to have a ton of backup, self-repair, and maintenance systems. Simulatneously, being organic it competes against other organics, so does not have the same accuracy requirements. Close enough is good enough. As such, I don't see how duplicating an organic brain is useful. We don't need what it does, but do need what it does not have. OK, the ability to approximate is very useful, but I think a direct attempt at that would work better than the indirect.

Re:Yes, but can it beat the turk at chess? (4, Interesting)

OeLeWaPpErKe (412765) | more than 5 years ago | (#25852041)

Actually organic brains in chips would have massive advantages over organic brains in meatspace. They could control other bodies, which are smaller, or stronger. They could be backed up, making them effectively indestructible.

Need a third arm? Why not have it installed, 50% off this week!

Need to put down a building? Why not hire this crane-like body that effortlessly lifts 5 tons.

Need to fly? No problem!

That crawlspace with all those important network cables too small for you? Well, here's a smaller body.

Can't reach in there? Can't see what you're doing in a small space? Why not have a special-purpose arm installed with a camera inside.

Want to colonize Mars? Bit of a downer not being able to breathe 99% of the way? Why not turn yourself off?

Colonize Alpha Centauri or even further? No problem.

What this would enable "us" to do is to design new intelligent species to specifications. It would remove all limits that are not inherent to intelligence but are inherent in our bodies. There are quite a few limits like that...

Re:Yes, but can it beat the turk at chess? (1)

MaxwellEdison (1368785) | more than 5 years ago | (#25852143)

Right, but if we can model the human brain in computers, we could create it to also attack problems with current analytical programming and algorithms. Furthermore, if the computer were self-learning, it would be able to decide which type of processing would be best for a given application. I think it's wonderfully exciting. And I wonder where it will take society should it become self-aware. I also hope we never give something like this the keys to the kingdom, so to speak, for fear of being locked outside.

Re:Yes, but can it beat the turk at chess? (1)

negRo_slim (636783) | more than 5 years ago | (#25852185)

Self contained means it has to have a ton of backup, self-repair, and maintance systems.

Sounds like any other computing effort. Including your desktop, it requires varying degrees of maintenance to remain functional.

Close enough is good enough. As such, I don't see how duplicating an organic brain is useful.

Except you fail to account for situations where nature [wikipedia.org] far out processes our current iteration of computational devices. Like those damn CAPTCHAs... [wikipedia.org]

Re:Yes, but can it beat the turk at chess? (1)

Spatial (1235392) | more than 5 years ago | (#25852443)

Except you fail to account for situations where nature far out processes our current iteration of computational devices. Like those damn CAPTCHAs...

Captchas are a pretty bad example, since they're almost all broken. The ones that aren't broken often take multiple guesses from a human as well. In that respect we are better only be the most minute of margins.

Re:Yes, but can it beat the turk at chess? (1)

Spatial (1235392) | more than 5 years ago | (#25852517)

In that respect we are better only be the most minute of margins.

Whoops. I guess that only adds to my point...

Re:Yes, but can it beat the turk at chess? (1)

monoqlith (610041) | more than 5 years ago | (#25852259)

Really? You don't see any use in having a computer that can read handwriting perfectly (document conversion)? That can recognize faces (security)? That can semantically organize conceptual content (organizing the web)? That can problem-solve intuitively (anything)? That can plan ahead? That can understand our natural language? If we successfully run a simulation of a human brain on a computer (presumably we would have a go at this after succeeding with the cat's brain), it would solve all of these problems. And having a computer that can do these things automatically frees up our brains for a lot of new things.

Re:Yes, but can it beat the turk at chess? (1)

Coward Anonymous (110649) | more than 5 years ago | (#25852779)

Duplicating an organic brain is useful in the same way that it is useful for a toddler to imitate his parents.
A toddler does not understand the actions of his parents, but he imitates them anyway because it is a very good learning strategy: learning by doing. As the toddler grows older and more experienced, he will typically also learn the hows and whys (although not always, even into adulthood) through his actions.
Similarly, the researchers at IBM represent humanity's understanding of the brain and intelligent systems in general; we are at the toddler stage, if not even earlier. One good strategy (nature thinks it's pretty good!) for trying to understand and learn more about it is imitation. We may not know what or why we are imitating what we see, but through imitation and experience we have a better chance of learning.

Re:Yes, but can it beat the turk at chess? (1)

pitchpipe (708843) | more than 5 years ago | (#25853337)

Simulatneously , being organic it competes against other organics, so does not have the same accuracy requirements. Close enough is good enough.

QED

Re:Yes, but can it beat the turk at chess? (1)

Tablizer (95088) | more than 5 years ago | (#25853427)

I don't see how duplicating an organic brain is useful. We don't need what it does

In that case, YOU come over and wash my dishes, okay?
     

Way to lower expectations (4, Funny)

Anonymous Coward | more than 5 years ago | (#25851847)

Summary of Test 49:
The robot sensors were properly tracking the missile when suddenly it decided it was time to run bats***-crazy all over the room before perching on top of a cabinet, turning upside down, and apparently following non-existent bugs across the wall with its cameras.

Test 49 Results:
System performed as expected.

Conclusion:
Test system has now performed perfectly in the last 48 tests, including the four times where it attacked the researchers without warning, and the one where it inexplicably ejected dirty oil on the seat of the head researcher.

This unit can now be considered field-ready, though there may be some difficulty tracking it if you take into account the system's autonomous nature and desire to remove its identification badge.

DARPA (0)

Anonymous Coward | more than 5 years ago | (#25851995)

Man, when DARPA combines this with the Metal Gear Rex they're building we're screwed.

Imitate Brain? (1)

Hatta (162192) | more than 5 years ago | (#25851999)

They should try to imitate Pinky first. Would be easier.

NARF!

Great, I can hear the complaints already..... (1)

GuyverDH (232921) | more than 5 years ago | (#25852049)

When the masses get ahold of this and try getting it to scan the internet for pr0wn, and it responds "Not tonight, I have a headache..."...

They're doing it wrong. (1)

liquiddark (719647) | more than 5 years ago | (#25852091)

Clearly the first target should be lobsters.

Re:They're doing it wrong. (1)

julesh (229690) | more than 5 years ago | (#25852861)

It's when the dead kittens start turning up scattered around the lab that they should start worrying.

Danger! (4, Insightful)

jarrowwx (775068) | more than 5 years ago | (#25852095)

I see some big issues with this.

You can mimic biology and may end up with a semi-intelligent result. Mimic it well enough, and you may have a fully-intelligent result. But because you don't UNDERSTAND what you built, you can't CHANGE it.

Remember the rules of AI, introduced in Sci-Fi? How would you implement rules like that? You CAN'T implement them if you don't know HOW to implement them. If you don't UNDERSTAND the system that you have built, you can't know how to tweak it!

Furthermore, how would you prevent things like boredom, impatience, selfishness, solipsism, and the many other cognitive ills that would be unsuited to a mechanical servant?

The biggest problem is if people productize the AI before it is understood and suitably 'tweaked'. Then our digital maid might subvert the family, kill the dog, and run away with the neighbor's butler robot, because in its mind, that is a perfectly reasonable thing to do!

Simulations are great. Hardware implementations of those experiments are great. Hopefully, in the process, they will learn to understand how the things that they built WORK. But I pray that those doing this work, or looking at it, don't start salivating about ways to make a buck off of it before it is ready to be leveraged. The consequences could be far more dire than just a miscreant maid.

Re:Danger! (0)

Anonymous Coward | more than 5 years ago | (#25852395)

Remember the rules of AI, introduced in Sci-Fi? How would you implement rules like that?

You wouldn't implement those rules, because that was SciFi, not real life. Remember? They were stories!

But if you really really wanted to, the way you would implement the rules is by training. Tell the AI, "Don't kill people," and then whenever it kills someone, kick it (virtually speaking; give it some pain input), and say, "No! Bad AI! Remember what I said? Don't kill people!"

They Need a Good Intelligence Theory First (1, Insightful)

Louis Savain (65843) | more than 5 years ago | (#25852177)

This sounds really great but unless they have a comprehensive theory of animal intelligence to work with, this is one more AI project that will likely fail. Sorry. No amount of computing power is going to help. If you had a good theory of intelligence, you would be able to prove its correctness and scalability on a regular desktop computer.

In my opinion, a truly intelligent mini-brain with no more than a few tens of thousands of neurons would surprise us with its clever abilities. Just hook it up to a small multi-legged robot with a set of sensors and let it learn through trial and error. If you could build a synthetic brain that can learn to be as versatile as a honeybee, you would have traveled close to 99% of the road toward the goal of making one with human-level intelligence.
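
Trial-and-error learning of the sort described has a textbook minimal form, tabular Q-learning. A sketch on a toy one-dimensional "corridor" with food at one end; the environment, rewards, and constants are all invented, and no claim is made that this is what the parent has in mind:

import random

random.seed(0)
N, GOAL = 5, 4                       # states 0..4, food at state 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}

def greedy(s):
    return max((-1, +1), key=lambda a: Q[(s, a)])

for episode in range(500):
    s = 0
    while s != GOAL:
        a = random.choice((-1, +1)) if random.random() < 0.1 else greedy(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.01          # food, or a small step cost
        best_next = max(Q[(s2, -1)], Q[(s2, +1)])
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(GOAL)])  # [1, 1, 1, 1]: straight to the food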

Re:They Need a Good Intelligence Theory First (1)

Burnhard (1031106) | more than 5 years ago | (#25852789)

When people post in threads like this, there is an almost universal application of (1) reductionist ideas and (2) the computational (i.e., Turing-machine-like) paradigm, as if we already know this is how biological systems actually work. In the future, I fully expect that mimicking biological intelligence, even at the level of a cat, will require a different kind of machine; one that takes advantage of currently unknown physical principles.

I studied A.I. for 5 years at University and the one lesson I took away from it was that in the theory, practice and philosophy of A.I. there was a significant missing ingredient. I'm sure that we will find it, eventually, but researchers need to make a major conceptual/theoretical leap before we can even begin to try.

They Need a Good Bible First (0)

Anonymous Coward | more than 5 years ago | (#25853119)

"When people post in threads like this, there is an almost universal application of (1) reductionist ideas and (2) the computational (i.e. turing machine - like) paradigm, as if we already know this is how biological systems actually work. In future, I fully expect mimicking biological intelligence, even at the level of a cat, will require a different kind of machine; one that takes advantage of currently unknown physical principles."

Quantum effects are unknown?

"I studied A.I. for 5 years at University and the one lesson I took away from it was that in the theory, practice and philosophy of A.I. there was a significant missing ingredient. I'm sure that we will find it, eventually, but researchers need to make a major conceptual/theoretical leap before we can even begin to try."

God beat'em to it.

Re:They Need a Good Intelligence Theory First (1)

home-electro.com (1284676) | more than 5 years ago | (#25852899)

Precisely. It's been done many times over. People who have no real understanding of how brains work think that if they throw an obscene amount of computing power at it, they will come up with a solution. How naive.

However, the honeybee is probably not the best example. They can solve very complex tasks, but they don't learn to do that. They are sort of programmed to do what they do. Their learning capacity is rather limited, I imagine.

Computer limitation in play (1)

nobodylocalhost (1343981) | more than 5 years ago | (#25852261)

The problem is, even if IBM runs a project intended only to imitate a cat's brain, it doesn't mean the said imitation won't evolve into something else altogether. This is the problem with neural nets. Unless it is mathematically predetermined to be bounded by certain parameters, its cyclic digraph brain will self-insert new nodes and establish synapse links to grow beyond the designated limitation. Furthermore, it will adapt to its "body" (in this case a supercomputer).

How it works is you have a cyclic digraph which has both category and weight on the edges as the construct; then you pretty much run a bunch of IDDFS threads on it. The threads will be organized and kept track of using a limited-size heap, whose size depends on the number of hardware threads your processor(s) can handle at once. There will also be some threads that simply run through the entire structure and re-organize data (retire old connections, and nodes if nothing connects to them anymore, aka "forgetting"). When a path is used often enough, a new and shorter connection will be established between the source and destination; when new data are presented, they will be stored in a node and connected to its neighbors.

As we can see, the origination of a thought and its destination are not so important; the paths are really where "understanding" comes from. Keeping that "understanding" under control is a very hard mathematical problem.
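
A toy sketch of the reinforce-and-forget mechanism just described: a digraph where often-used paths earn shortcut connections and never-used edges are retired. Edge categories are omitted, and all names and thresholds are invented:

class TinyNet:
    def __init__(self):
        self.w = {}      # (src, dst) -> connection weight
        self.use = {}    # (src, dst) -> times traversed

    def link(self, a, b, weight=1.0, used=0):
        self.w[(a, b)] = weight
        self.use[(a, b)] = used

    def traverse(self, path):
        for a, b in zip(path, path[1:]):
            self.use[(a, b)] += 1
        # A path used often enough earns a direct shortcut edge.
        if all(self.use[(a, b)] >= 3 for a, b in zip(path, path[1:])):
            self.link(path[0], path[-1], weight=2.0, used=1)

    def forget(self):
        # Retire connections that were never used.
        for edge in [e for e, n in self.use.items() if n == 0]:
            del self.w[edge], self.use[edge]

net = TinyNet()
net.link('smell', 'food'); net.link('food', 'eat'); net.link('noise', 'flee')
for _ in range(3):
    net.traverse(['smell', 'food', 'eat'])
net.forget()
print(sorted(net.w))  # shortcut ('smell', 'eat') appears; the unused 'noise' edge is gone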

Title (2, Funny)

coldtone (98189) | more than 5 years ago | (#25852439)

Am I the only one that read DARPA's IBM-Led Neural Network Project Seeks Inmate Brain at first?

Re:Title (1)

Cowmonaut (989226) | more than 5 years ago | (#25852665)

I admit, I did too and wanted to know when the human rights organizations were going to jump on it.

Re:Title (2, Funny)

Shadow Wrought (586631) | more than 5 years ago | (#25852897)

Am I the only one that read DARPA's IBM-Led Neural Network Project Seeks Inmate Brain at first?

Actually DARPA's lonely, they are looking for an intimate brain. 21 December 2012: the day they plug it into eHarmony.

no-brainer (1)

noshellswill (598066) | more than 5 years ago | (#25852615)

I bet 5-Godels ... er $5.00 they fail .....

Dollars and time wasted... (0)

Anonymous Coward | more than 5 years ago | (#25852647)

...just so some scientists can get in on the captioning action [xkcd.com] .

"I can has cray 'puter?" -- I know someone can do better. What you got?

Anonymous Coward (1, Interesting)

Anonymous Coward | more than 5 years ago | (#25852675)

I had a conversation with my logic teacher last week about AI (he introduced the class with a brief talk about AI, in which he mentioned that some authors considered human-like intelligence unreachable).

At the end of the class I remembered that I had casually printed something AI-related while testing my printer. I read it here on Slashdot or somewhere, found it interesting, and saved it in a .txt file.

I asked him if this was considered AI:
[quote]The most compelling case for AI I've seen was a complete accident. This story is from memory -- I read about it years ago.

To test whether a neural network could create an efficient program, researchers prepared a network (on a simulator) and then pitted it against a team of human programmers. The task was to write the most efficient program possible for an EEPROM that performed some simple task.

They tested both programs, and they both worked. They compared the code, and the code created by the neural net was a fraction of the size. But the code didn't make any sense and should not have functioned at all.

Also, they discovered that when they took the EEPROM to another location to show someone, the program didn't work anymore.

Eventually they figured out what happened. The neural network learned that the EEPROM could be used as an analog device as opposed to a digital one. It was using complex, unintended functions of the circuitry, like magnetic flux between the wires, to achieve the goal. But because these features weren't engineered, the relationships changed when they took the circuit to another location where the room temperature, barometric pressure, and other conditions were different.

But it shows that we can make machines that are "smarter" than we are -- machines that can end up achieving far more than they were intended to. For that reason, I don't think man will "invent" true AI. I think it will suddenly explode from some random and unintended relationship. It will grow like our own (biological) form of life. And then we just better hope it means us well.[/quote]

I think somebody else replied with this link (or I found it later while googling about it): http://www.newscientist.com/article.ns?id=dn2732 , which is quite similar but not the same.
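
Whatever the hardware in the story actually was, the quoted experiment boils down to an evolutionary search loop. A minimal sketch, with a toy bitstring fitness standing in for the measured behaviour of a real device:

import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for the desired behaviour

def fitness(bits):
    # On the real rig the score came from the physical device itself,
    # unintended analog effects included; here it is just bit agreement.
    return sum(b == t for b, t in zip(bits, TARGET))

best = [random.randint(0, 1) for _ in TARGET]
while fitness(best) < len(TARGET):
    child = [b ^ (random.random() < 0.1) for b in best]   # mutate ~10% of bits
    if fitness(child) >= fitness(best):                   # keep the better scorer
        best = child
print(best, fitness(best))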

Back to the conversation with my teacher, he said that it wasn't considered AI.

Then I asked him if it could be possible to model a human brain. The only problem that I found in such a feat, apart from the huge computational power required, was the complete understanding of the chemical reactions in a cell, and between cells: if you can model a chemical reaction, you can model a cell; if you can model a cell, you can model tissues, cell signaling/neurotransmission, and so on...

He replied that at some point you'll hit the Heisenberg uncertainty principle, which I vaguely remembered.

The conclusion was that who knows whether what makes us intelligent is unmeasurable or not; the only way to find out would be trying to build such a system.

By the way:

What do you know about neuroscience? Any interesting books/articles/papers to read about it?

I have a basic understanding of how neurotransmitters/neuroreceptors work. I know that it's still a huge area of research, and that they're so complex that instead of working with a specific neuroreceptor, they're tackled in groups depending on which neurotransmitter affects them, for example serotonin and the 5-HT group.

Yay... (1)

Taken07 (1395851) | more than 5 years ago | (#25852691)

It's a good thing I watched all those Terminator movies and TV show ... I'm prepared.

Complexity of Cat Brain != Cat Brain (1)

tristanreid (182859) | more than 5 years ago | (#25852885)

You don't even have to RTFA, it's right there in the summary.

I'm just saying.

-t.

Not just IBM - HP and HRL too. (1)

Alexey Nogin (10307) | more than 5 years ago | (#25853071)

The article is based on IBM's press release and is misleading because of it. In fact, there are three competing teams: one led by IBM, one led by HP, and one led by HRL Laboratories [hrl.com]. See also the FBO website for more information about this program [fbo.gov].

This really should be a Grand Challenge (1)

John Sokol (109591) | more than 5 years ago | (#25853099)

The High Priests have already had their chance to do this and failed, repeatedly. Now they are just throwing more money at them.

This should be an open grand challenge with clear rules like the autonomous vehicle challenge was.
http://www.darpa.mil/grandchallenge/ [darpa.mil]

Even I was surprised at how well they managed to get these cars to drive themselves.

I am sure the same would happen with other AI problems if a large enough prize was put out there.

Why no worm brain simulations? (0)

Anonymous Coward | more than 5 years ago | (#25853135)

Seems like they're being too ambitious. You'd want to start with something simpler, like Caenorhabditis elegans, which has 302 neurons. Once you have an accurate simulation of that, THEN move on to something more complicated.

http://www.setiai.com/archives/000050.html [setiai.com]
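
For a sense of scale, even the entry-level neuron model runs happily at 302 neurons. A minimal leaky integrate-and-fire sketch in NumPy, with random placeholder wiring rather than the real C. elegans connectome:

import numpy as np

rng = np.random.default_rng(42)
N = 302                                    # C. elegans neuron count
# Sparse random synapses: ~10% connectivity, small mixed-sign weights.
W = rng.normal(0.0, 0.05, (N, N)) * (rng.random((N, N)) < 0.1)
v = np.zeros(N)                            # membrane potentials
threshold, leak = 1.0, 0.9

spikes = np.zeros(N, dtype=bool)
for t in range(100):
    drive = W @ spikes                     # input from last step's spikes
    noise = rng.normal(0.1, 0.05, N)       # stand-in for sensory input
    v = leak * v + drive + noise
    spikes = v >= threshold
    v[spikes] = 0.0                        # fire and reset
print(int(spikes.sum()), "of", N, "neurons fired on the final step")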

If you think outsourcing was bad: (1)

Tablizer (95088) | more than 5 years ago | (#25853377)

The longer-term goal is to create a system with the level of complexity of a cat's brain

You: "Hi, is this the customer help desk?"

Help Desk: "Meow"

You: "My disk is stuck in the CD drive."

Help Desk: "Meow"

You: "What?"

Help Desk: "Meow Meow"

You: "So, can you help get my disk out?"

Help Desk: "Meow"

You: "The line must be bad. I just hear cat sounds."

Help Desk: "Meow"

You: "(Sigh) I'm going to call back later."

Help Desk: "Meow"
