
Machines That Emulate The Human Brain

Hemos posted more than 11 years ago | from the or-mammal-brains-in-general dept.

Science 37

prostoalex writes "Discover magazine provides an interesting insight into future technologies that will emulate the human brain. While artificial intelligence supporters have always considered direct emulation of brain functions too complex and preferred the top-down approach, some people are researching the ways the human brain processes data. One of the interesting discoveries mentioned in the article is the ability of the brain to re-architect its links as new information is added."

37 comments

how do we get them drunk? (3, Interesting)

spike666 (170947) | more than 11 years ago | (#4960121)

if we want to analyse how alcohol impairs our thinking, how do we get these things drunk?

Re:how do we get them drunk? (0)

Anonymous Coward | more than 11 years ago | (#4963385)

It's especially difficult since alcohol merely powers their fuel cells!

Re:how do we get them drunk? (1)

Coke in a Can (577836) | more than 11 years ago | (#4964362)

/me puts his flame-retardant suit on

we run nothing but microsoft software on it?

Re:how do we get them drunk? (1)

lynnroth (213826) | more than 11 years ago | (#4980854)

Run Wine [winehq.com] on it....

Period (1, Funny)

peu (163472) | more than 11 years ago | (#4960188)

Can a male emulation understand a female emulation's wishes?

IN SOVIET RUSSIA (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#4960292)

Machines emulate BRAINS!!

Re:IN SOVIET RUSSIA (-1, Troll)

greenhide (597777) | more than 11 years ago | (#4960515)

*Sigh* If you're going to do it, do it right:

In Soviet Russia, human brain emulates machines!

The question is of course ... (3, Interesting)

watzinaneihm (627119) | more than 11 years ago | (#4960297)

What would the machine do if it were intelligent enough? Men and women at least would do things that make 'em happy. What can a machine be happy about?
Also, if this rewiring theory is true, we could just put nano-radios into the heads of a zillion bees, add a nice enough routing algorithm, and the bee colony would be an intelligent being, right? (Assuming a bee's brain is equivalent to a small cluster of human neurons?) Now would the bees just look for honey more intelligently, or would they find the grand unified theory?

Hmmm... (0)

Anonymous Coward | more than 11 years ago | (#4963380)

How many output bites?

Re:The question is of course ... (0)

Anonymous Coward | more than 11 years ago | (#4965256)

While the second part of your post, the part about the bees, is just plain stupid, your first question about machine happiness brings up some interesting points. "Happiness" is a response to achieving values, and all values stem from the one ultimate value, which is the value of life. Without life as the base value, there is no need for any other values. Why value food over poison if the outcome weren't a matter of life or death? Why value a safe, well-engineered car over something without seatbelts if it weren't a matter of life or death?

So the only way to get machines to experience "happiness" is to give them a reason to "value", which means giving them a sense of life, or in other words, making them able to "die" and, in that, learn what values they need to gain and keep in order to keep themselves "alive".
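
One toy way to make that "value of life" idea concrete: an agent whose only feedback signal is whether its simulated energy stays above zero. The sketch below is purely illustrative; the actions, their energy effects, and the crude reinforcement rule are all made up for the example.

    import random

    # Toy agent whose only "value" is staying alive (energy > 0).
    # It learns action preferences purely from survival feedback.
    ACTIONS = {"eat": +2, "wander": -1, "sleep": 0}   # hidden energy effects
    preference = {a: 1.0 for a in ACTIONS}            # learned weights

    for episode in range(200):
        energy, history = 5, []
        while 0 < energy < 20 and len(history) < 30:
            # pick an action proportionally to current preference
            total = sum(preference.values())
            r, acc = random.uniform(0, total), 0.0
            for action, w in preference.items():
                acc += w
                if r <= acc:
                    break
            energy += ACTIONS[action]
            history.append(action)
        survived = energy > 0
        # reinforce (or weaken) everything it did, based only on survival
        for action in history:
            preference[action] *= 1.05 if survived else 0.95

    print({a: round(w, 2) for a, w in preference.items()})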

is it just me ... (2, Insightful)

josephgrossberg (67732) | more than 11 years ago | (#4960332)

or are the four facial expressions analyzed:

smirk, smirk, smirk, smirk

?

re architect? (0)

Anonymous Coward | more than 11 years ago | (#4960669)

Maybe I'm getting old, but what the fuck is wrong with saying 'reconfigure'? Is it because it's a word everyone understands and therefore you can't justify the 50 years you spent at university?

Re:re architect? (2)

rakslice (90330) | more than 11 years ago | (#4975462)

Is "architect" even a verb? I always see it being used in contexts where "design" would be much more appropriate, and it's starting to really piss me off.

There's no ghost in the machine... (1, Insightful)

CodeShark (17400) | more than 11 years ago | (#4960808)

i.e., at least up to this point in time, there is no "self animating" force in any electronic entity that I am aware of.

Even if the sheer processing capability could be duplicated, at a similar scale, with similar power requirements, with today's technologies there is still no true intelligence in the circuitry that is as flexible as the mind, which can make split-second decisions (good or bad) based on literally thousands of experiences and factors -- without requiring a "full data set" in order to arrive at the "best decision".

The best example I can think of is: "A WTC tower is falling down, what do I do? Do I run, and how far? When do I try to turn a corner to escape the dust blast? Do I altruistically tackle the person in front of me because I can tell that the only way they can survive is if I cover them at peril of my own existence?" What about UA Flight 93, or any of the other thousands and perhaps millions of heroic acts we know of from just 9/11? How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?

Not in our lifetime, methinks.

Re:There's no ghost in the machine... (2)

Transcendent (204992) | more than 11 years ago | (#4961174)

Even if the sheer processing capability could be duplicated, at a similar scale, with similar power requirements, with today's technologies there is still no true intelligence in the circuitry that is as flexible as the mind...

That makes no sense. Basically you said "Even if we could replicate it, we couldn't do it with our technology."

How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?

It's called neural networks... AL (artificial life). They simulate the neuron connections in a brain... If you replicate a brain well enough, it will have the same function as the original.
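
The basic idea is easy to sketch: an artificial "neuron" just sums weighted inputs and fires when the sum crosses a threshold, and learning means nudging the weights. Below is a bare-minimum illustration (a single perceptron learning the OR function), not a model of any real brain circuit.

    import random

    # A single artificial neuron: weighted sum of inputs, threshold activation.
    def fire(weights, bias, inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

    # Train it on the OR function with the classic perceptron rule.
    random.seed(0)
    weights, bias, rate = [random.uniform(-1, 1) for _ in range(2)], 0.0, 0.1
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    for _ in range(50):
        for inputs, target in data:
            error = target - fire(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error

    # After training it should reproduce OR on all four inputs.
    print([(inputs, fire(weights, bias, inputs)) for inputs, _ in data])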

...didn't you read anything?

Re:There's no ghost in the machine... (2, Interesting)

CodeShark (17400) | more than 11 years ago | (#4961207)

Bad grammar/wording on my part. Try it this way:

"Even if we could replicate the sheer processing power of the human brain at similar power levels, and make it mobile, our current circuitry and programming paradigms still don't offer a technological basis for the kind of split-second decisions..." followed by the remainder of my post.

BTW, I am very aware of neural networks, and even of some chip technologies based on modeling neurons. But there's nothing in the silicon that implicitly decides which particular version of a neuronal circuit might be useful, or what weird alternate connections it should make. Nor how to implicitly attempt to self-heal around a known bad connection, etc., like the biological computer does. All of that has to be developed fairly explicitly by an intelligent entity, also known as the human brain.

Re:There's no ghost in the machine... (2)

Transcendent (204992) | more than 11 years ago | (#4964452)

Nor how to implicitly attempt to self-heal around a known bad connection, etc., like the biological computer does.

Starbridge Systems. They developed a hypercomputer that was self-healing: you could shoot a bullet through any one of its motherboards and it would reconfigure itself to work around the damage. I've been looking on their site for it (www.starbridgesystems.com), but they've remodeled the site over the years and technical descriptions like that can't be found as easily.

Re:There's no ghost in the machine... (0)

Anonymous Coward | more than 11 years ago | (#4965264)

The hypercomputer just uses a reconfigurable FPGA chip from Xilinx... I don't believe you could reach neural-circuitry lifetimes with such technology, given that you are dealing with solid-state circuitry. The self-healing concepts are embedded in the logic, not the technology -- I hope we have a better answer than cool CMOS to operate our thinking machines.

rofl... (2)

rakslice (90330) | more than 11 years ago | (#4975450)

If I'm feeding the troll, too bad; at least it's a philosophical one.

"which can make split-second decisions (good or bad) based on literally thousands of experiences and factors -- without requiring a "full data set" in order to arrive at the "best decision"."

That's an obvious contradiction in terms, isn't it? ...

"How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?"

Well, see, we've got these things called stored-program computers... =)
Do I have to finish that sentence, or do you see what I'm getting at?

You're awfully good at jumping up and down and telling us that you don't know the answer, but what's the point of that?

Anyway, is there some hidden meaning here I'm not getting?

Re:There's no ghost in the machine... (1)

Ytrew Q. Uiop (635593) | more than 11 years ago | (#4980437)

i.e., at least up to this point in time, there is no "self animating" force in any electronic entity that I am aware of.

You're a machine (or at least, a physical mechanism) made of hydrocarbons. Presumably, you think you have a "self-animating" force. I'm not sure I agree, but let's say you do.

How do we detect the "ghost" in you? Where is it? How do we tell? If you can't find it in you, but you think it exists, how can you, in all fairness, know it's not in an "electronic entity"?

Even if the sheer processing capability could be duplicated, at a similar scale, with similar power requirements, with today's technologies

But we're nowhere near the brain's storage capacity with today's technologies! I think the figures are about 10^25 neurons, with about 10^50 possible connections. (They can, and do, "re-wire" themselves in about a minute.) It's been a few years since I talked to a neuroscientist, but those are the figures I recall.

It's hard to compare bytes to neurons. If, for the sake of argument, we say that one neuron (not connection) encodes one bit of information, then we have about 10^24 bytes, dividing by 8 bits per byte and treating 8 as roughly ten. Umm... giga is 10^9 bytes. Tera is 10^12, I think. I don't know what 10^24 is called, but I'm pretty sure we don't have anything that can store 10^24 bytes yet. Supposing we get terabytes on our desktops in the next five to ten years, we're about halfway through the exponents we need. :-)
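
Redoing that back-of-the-envelope in one place, using the parent's recalled figures (which may be off by many orders of magnitude) and the deliberately crude one-bit-per-neuron simplification:

    neurons = 10 ** 25         # the parent's recalled figure, not a verified number
    bits_per_neuron = 1        # the crude simplification from the paragraph above
    bytes_needed = neurons * bits_per_neuron / 8
    print(f"{bytes_needed:.1e} bytes")                 # roughly 1e24 bytes
    print(f"{bytes_needed / 10 ** 12:.1e} terabytes")  # roughly 1e12 TB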

there is still no true intelligence in the circuitry that is as flexible as the mind, which can make split-second decisions (good or bad) based on literally thousands of experiences and factors -- without requiring a "full data set" in order to arrive at the "best decision".

I'm not sure humans always make the "best decision". We just make decisions, and sometimes they're bad. Sometimes they're good. Chess-playing programs can examine thousands of factors in a split second, too -- and they quite often make a better decision than their human opponents.
I'm not that impressed that humans can make decisions "without full data": machines can use probabilistic guesses as well, albeit with far less memory capacity than the human brain. But with a trillion times more memory? I'd say even current algorithms might be able to compete with humans in certain areas, though it would probably depend on what you were comparing.

How do you program or develop electronic logic like that? Into mobile, autonomous units capable of effective action?

There are a number of approaches that can be developed: if the brain is the (huge and complex) finite state machine that it appears to be, the approach taken by the researchers in the article may provide some good clues: scan brains, find out how they're built, set up a state machine in software that is similar, and see if it does similar things.
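
In that "scan it and run it" spirit, the core loop is nothing exotic: a table mapping (state, input) to the next state, stepped in software. A toy version follows; the transition table here is invented for illustration, where a real one would come from the scanned data.

    # A scanned circuit, reduced to a lookup table: (state, input) -> next state.
    transitions = {
        ("rest", "light"):  "alert",
        ("rest", "dark"):   "rest",
        ("alert", "light"): "alert",
        ("alert", "dark"):  "rest",
    }

    def run(start, inputs):
        state, trace = start, [start]
        for signal in inputs:
            state = transitions.get((state, signal), state)  # unknown input: stay put
            trace.append(state)
        return trace

    print(run("rest", ["dark", "light", "light", "dark"]))
    # ['rest', 'rest', 'alert', 'alert', 'rest']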

There are some robots which are very simple, but which do some very interesting things. Mark Tilden made a robot which taught itself to walk every time it was turned on -- he didn't have to program in how each leg worked; instead, he let it recognize forward motion and evolve its own solution. Similarly, we may not have to program human intelligence; we just have to build a machine that can work towards it and recognize progress. That's still a very daunting task, but it's far more workable.
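
The Tilden-style idea, stripped to a cartoon: don't program the gait, just score candidate gaits by how far they move the (simulated) robot forward and keep mutating the best one. The "physics" below is a stand-in invented for the example, not a real robot model.

    import random

    random.seed(1)

    def forward_motion(gait):
        # Fake physics: legs that alternate in phase push the body forward.
        return sum(1.0 for a, b in zip(gait, gait[1:]) if a != b) - 0.1 * sum(gait)

    def mutate(gait):
        g = gait[:]
        g[random.randrange(len(g))] ^= 1   # flip one leg command
        return g

    gait = [0] * 8                # start with no idea how to walk
    for _ in range(300):          # "learn to walk" by trial, error, and selection
        candidate = mutate(gait)
        if forward_motion(candidate) >= forward_motion(gait):
            gait = candidate

    # Usually ends up near an alternating on/off pattern, the best this toy
    # scoring function can reward.
    print(gait, round(forward_motion(gait), 2))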

And you still have to define "effective action". That's a whole grey area that's worth a topic on its own. :-)

The best example I can think of is "a WTC tower is falling down, what do I do?

I don't think that's a good example for human intelligence. Most animals have a reasonably intelligent response to a large object approaching at a rapid speed: they get out of the way as fast as they can! Humans who don't learn this tend to die at highway crossings. :-)

Different lifeforms have typical responses to a given type of crisis situation: they may or may not be "effective action", depending upon the situation. Rabbits run, then suddenly "freeze", trying to hide. This works well in brush and thickets, but badly on roadways. Moose tend to flee along the clearest path. This might sound reasonable, but they often get killed by trains, because the train tracks often run along the clearest path in the forest, and the moose is afraid to slow down to turn, because the train is "chasing" it. Humans build train tracks where moose roam, and often get killed when the train derails after hitting a moose. ;-)

Not in our lifetime, methinks

You may be right, but I think great progress can be made. We can build machines that "learn": that adapt their programming based upon their environment. Given that it takes 18 years or so to "program" a human being to function in society, I think making a chess-playing program that can compete with a grandmaster was a good accomplishment for just 50 years or so of modern computing. We're making progress in visual processing, and speech recognition software has quietly crept into cellphone dialers and children's toys. If hardware advances can keep up, I think we can expect great things in the future.
--
Ytrew

Utterly self-serving link (2)

JanneM (7445) | more than 11 years ago | (#4960816)

Emotion and Learning [lucs.lu.se]

turing test (1)

dirtmerchant (162306) | more than 11 years ago | (#4960880)

did anyone else go all cross-eyed when they thought about the implications of running the Turing test with one of these?

Idly thinking... (3, Interesting)

Hubert_Shrump (256081) | more than 11 years ago | (#4961067)

Is there an inverse Turing test?

Maybe we can make machines that fail it.

ObSimpsons (0)

Anonymous Coward | more than 11 years ago | (#4963394)

Homer: "I- am- a- washing- machine... do- what- I- say..."

Top down vs. Bottom up (4, Insightful)

Fly (18255) | more than 11 years ago | (#4961272)

This article talks to people who are doing mostly or solely bottom-up work to emulate brain activity, and pooh-poohs the work that has been done on top-down approaches. While I don't think anyone can refute the need to do low-level, bottom-up reverse-engineering to understand how our brains work, I agree with Hofstadter, et al. in _Fluid_Concepts_&_Creative_Analogies_ that there needs to be a fundamental change in the direction of top-down approaches.

I am intrigued by his work combining the top-down and bottom-up approaches with his "codelets" design, which relies on probabilistic results from a bottom-up approach that are weighted and driven in a more top-down manner. This higher-level approach is meant to simulate "mind" rather than "brain", but I'm eager to see just how far towards "mind" the neural approach in the article can be taken.
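
A very rough sketch of that codelet flavor (nothing like Hofstadter's actual Copycat code, just the shape of the idea): many small bottom-up proposals, each with an urgency, chosen probabilistically, with a top-down "focus" reweighting which kinds of proposals get run. The codelet kinds and numbers below are invented for illustration.

    import random

    random.seed(2)

    # Bottom-up "codelets": tiny proposals, each with a raw urgency score.
    codelets = [
        {"kind": "spot-letter", "urgency": 3.0},
        {"kind": "spot-group",  "urgency": 1.0},
        {"kind": "build-rule",  "urgency": 0.5},
    ]

    # Top-down focus: weights that bias which kinds of work get attention.
    focus = {"spot-letter": 0.5, "spot-group": 1.0, "build-rule": 2.0}

    def pick(codelets, focus):
        weights = [c["urgency"] * focus[c["kind"]] for c in codelets]
        return random.choices(codelets, weights=weights, k=1)[0]

    counts = {}
    for _ in range(1000):
        chosen = pick(codelets, focus)["kind"]
        counts[chosen] = counts.get(chosen, 0) + 1
    # The top-down focus shifts effort even though urgencies are bottom-up.
    print(counts)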

[youmaynotcare]It's cool to see McCormick in an AI article. My first course from him was AI, and it fascinated me.[/youmaynotcare]

Re:Top down vs. Bottom up (1)

Tablizer (95088) | more than 11 years ago | (#4965251)

I am intrigued by his work combining the top-down and bottom-up approaches with his "codelets" design, which relies on probabilistic results from a bottom-up approach that are weighted and driven in a more top-down manner.

I always thought something like that would be the best way to go, at least as far as managing such a complex beast. You have a bunch of "services" and a central selector/user/coordinator of those services. If a service fails to deliver good results, such as bad predictions or models that don't match observed results, then it gets less attention.

The drawback is integration: some way for the different services to cooperate and share info is also needed. But a competitive, GA-like approach may be interesting as well. I am sure our brain does this to some extent. More attention is given to internal models (guessing techniques) that deliver the best results, for example. Or, like ignoring info from a bad eye and using the good one instead.
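
That "give less attention to services that predict badly" idea fits in a few lines as a multiplicative-weight scheme. A toy version, with made-up predictor services:

    # Several prediction "services" guess the next value; a coordinator
    # shrinks the attention (weight) of whichever services guess badly.
    services = {
        "always-up":   lambda history: history[-1] + 1,
        "always-same": lambda history: history[-1],
        "always-zero": lambda history: 0,
    }
    attention = {name: 1.0 for name in services}

    stream = [1, 2, 3, 4, 5, 6, 7, 8]      # the world happens to be "always up"
    history = [stream[0]]
    for actual in stream[1:]:
        for name, predict in services.items():
            error = abs(predict(history) - actual)
            attention[name] *= 0.5 if error > 0 else 1.1   # punish misses, reward hits
        history.append(actual)

    # "always-up" ends with by far the most attention.
    print({name: round(w, 3) for name, w in attention.items()})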

A better idea... (3, Funny)

budalite (454527) | more than 11 years ago | (#4962424)

Geez, surely we can do better! Most of the ones currently in production seem to be defective!!
(Get me that guy that figured out what was wrong with HAL...)

Oh great! (3, Funny)

Anonymous Coward | more than 11 years ago | (#4962650)

First it was the H-1B's taking our jobs, and now artificial brains. I should've been a dancer.

Sorry... (0)

Anonymous Coward | more than 11 years ago | (#4963386)

MIT is already there.

Re:Oh great! (2)

Alsee (515537) | more than 11 years ago | (#4965486)

I should've been a dancer.

Your legs are too fat.

-

Re:Oh great! (0)

Anonymous Coward | more than 11 years ago | (#4972239)

It's kind of funny, at the moment. But who knows what the years ahead will bring. If the future does possess inexpensive replacements for humans across all sectors, what happens to the value of human life?

AI's would compromise governments the same ways corporations do. Find what office of the government makes the rules for your particular game, infiltrate, and change the rules to your benefit.

Re:Oh great! (1)

blue trane (110704) | more than 11 years ago | (#4979565)

If the future does possess inexpensive replacements for humans across all sectors, what happens to the value of human life?

That depends on us. What would be best (in my opinion) is to free humans from the duty of working for a living, and let each pursue whatever happiness they choose.

It requires a rethinking of economic theory and the "work ethic", but I believe it would result in more rapid advancement for the human race. Either that or the robots would rebel and take over...

Sorry guys.. won't work.. (0, Troll)

Rambo, John J. (633310) | more than 11 years ago | (#4963505)

The quantitative description of cell structures in light microscope images is an important task in biological research. Quantitative measurements can be compared between different experiments which increases their usefulness to the biological research community.

Our research has indicated that replicating the human brain just will not work; actin and other things simply can't be reproduced in AI.

Digital imaging has made this task feasible because computers can now be programmed to automatically detect cell structures. Furthermore, the digital image analysis of cells has been improved by the use of fluorescent markers to tag specific structures simplifying the image segmentation problem.

Actin, a protein common in muscle cells of animals, forms fibers and fiber bundles which can be dyed with fluorescent markers and then detected by a light microscope. Quantitative measurement of the properties of Actin fibers will lead to a better understanding of how Actin interacts with other cell structures and contributes to cell locomotion and morphology.

We present a method for detection of Actin fibers in 2D images. Our approach has three stages.

First, a set of pixels that follow the contours of the fibers in the image is extracted from the original gray-scale cell image using edge detection techniques. Each fiber has two edges associated with it, so the next step eliminates one edge associated with each fiber by selecting an interval of edge direction such that one half of the edge pixels lie in this interval. The surviving edge pixels that have an edge magnitude above a threshold are selected and then thinned.

The second stage connects all of the edge pixels into a minimal spanning tree, using intermediate algorithms from computational geometry. As a result, successive points from a fiber contour will tend to be linked. However, the tree will also connect points that belong to different fiber contours, partly because the fibers intersect in the image, but also because, by definition, a tree must provide a path of links between any two vertices in the tree.

The third and final stage extracts the individual fiber contours from the minimal spanning tree. Long links within the tree are deleted because they connect two different fibers. Intersections of multiple fiber contours at a tree vertex are handled by heuristics based on proximity and on rough collinearity. The overall result is a set of fiber contours that appear in the 2D cell image.

Our approach has worked well on a small set of real and synthetic images. The minimal spanning tree approach has proved fairly accurate because it groups pixels based on a global perspective rather than relying on uncertain local predictions of where fibers might extend.
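
For the curious, the three-stage recipe described above (edge pixels, then a minimum spanning tree, then pruning long links) can be roughed out with standard scientific-Python pieces. This is only a sketch of the general approach in the post, not the poster's code; the thresholds and the synthetic test image are arbitrary.

    import numpy as np
    from scipy import ndimage
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial import cKDTree

    def fiber_contours(image, edge_thresh=0.2, knn=4, max_link=5.0):
        # Stage 1: edge magnitude via Sobel filters, then threshold to edge pixels.
        gx, gy = ndimage.sobel(image, axis=0), ndimage.sobel(image, axis=1)
        magnitude = np.hypot(gx, gy)
        points = np.argwhere(magnitude > edge_thresh * magnitude.max())

        # Stage 2: link edge pixels with a minimum spanning tree built over a
        # k-nearest-neighbour graph, so nearby contour points tend to be joined.
        tree = cKDTree(points)
        dists, nbrs = tree.query(points, k=knn + 1)   # first neighbour is the point itself
        rows = np.repeat(np.arange(len(points)), knn)
        cols, weights = nbrs[:, 1:].ravel(), dists[:, 1:].ravel()
        graph = coo_matrix((weights, (rows, cols)), shape=(len(points),) * 2)
        mst = minimum_spanning_tree(graph).tocoo()

        # Stage 3: drop long links (they bridge different fibers); what remains
        # is a set of short chains approximating individual fiber contours.
        keep = mst.data < max_link
        return points, list(zip(mst.row[keep], mst.col[keep]))

    # Tiny synthetic test: one bright diagonal "fiber" on a dark background.
    img = np.zeros((32, 32))
    for i in range(4, 28):
        img[i, i] = 1.0
    pts, links = fiber_contours(ndimage.gaussian_filter(img, 1.0))
    print(len(pts), "edge pixels,", len(links), "links kept")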

These results are just way too bizarre, trust me, people have been trying to do this since the 60's!

Hrmm (1)

radon28 (593565) | more than 11 years ago | (#4976658)

oh yeah? [cmu.edu]

good idea. (2)

jericho4.0 (565125) | more than 11 years ago | (#4964906)

The approach mentioned of dissecting a mouse brain neuron by neuron and modeling it is very interesting. I don't think anyone has ever approached it from such a reductionist angle before. Even doing this for a small section of brain (say, the visual cortex) would probably provide some interesting information.

It seems that this could be capable of showing if there's more than just the neurons involved.

Ten years now (0)

Anonymous Coward | more than 11 years ago | (#4965279)

You will be able to buy a chia-pet, that's an order of magnitude smarter than you are, for the price of a gallon of milk.

And all these smart devices will be able to communicate with each other.

AIs bribing the government to forward their goals.

29 comments and no one... (2)

ShavenYak (252902) | more than 11 years ago | (#4968052)

... has quoted the Orange Catholic Bible? "Thou shalt not make a machine in the likeness of a human mind."

I'm ashamed of you people.