
Robots Test "Embodied Intelligence"

CmdrTaco posted more than 8 years ago | from the quagmire-needs-to-respond-oh-right dept.


An anonymous reader writes "Here's an interesting article about robotics experiments designed to test the benefits of coupling visual information to physical movement. This approach, known as embodied cognition, supposes that biological intelligence emerges through interactions between organisms and their environment. Olaf Sporns from Indiana University and Max Lungarella from Tokyo University believe strengthening this connection in robots could make them smarter and more intuitive."


Oblig - I for one... (-1, Troll)

Anonymous Coward | more than 8 years ago | (#16648399)

Hate all you dirty nerd fucks

Here's some videos of Embodied Intelligence (5, Informative)

Yahma (1004476) | more than 8 years ago | (#16648469)

Back when I was in University, I did my master's thesis [erachampion.com] on Embodied Intelligence. I developed a virtual world that adhered to the laws of physics using the ODE physics engine, and within this artificial physical environment I evolved embodied agents. It's quite interesting to watch the videos and see the fluid, almost life-like motions of the evolved behaviors.

I never got around to actually downloading the evolved neural networks into robots, although all my source code is GPL'ed and posted at the above site. So if somebody wanted to evolve their own creatures and download the evolved intelligence into an actual physical robot, it would be interesting to see the results.
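If anyone wants the flavor of it, the heart of the approach is just an evolve-and-evaluate loop. Here's a minimal Python sketch (all names hypothetical; the fitness function below is a stand-in for a full ODE rollout that would score a neural-net-driven creature, e.g. by distance traveled):

    import random

    POP_SIZE, N_WEIGHTS, GENERATIONS = 50, 60, 100

    def fitness(weights):
        # Placeholder for a physics rollout: in the real setup this would
        # drive a simulated creature with a neural net built from 'weights'
        # (e.g. inside ODE) and return the distance it manages to travel.
        return -sum((w - 0.5) ** 2 for w in weights)

    def mutate(weights, rate=0.1, scale=0.3):
        return [w + random.gauss(0, scale) if random.random() < rate else w
                for w in weights]

    population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
                  for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        elite = population[:POP_SIZE // 5]          # keep the best 20%
        population = elite + [mutate(random.choice(elite))
                              for _ in range(POP_SIZE - len(elite))]

    print("best fitness:", fitness(population[0]))

The real code layers a physics world and a neural-network encoding on top of this, but the loop is the same.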

Yahma
ProxyStorm [proxystorm.com] - An Apache based anonymous proxy for people concerned about their privacy.

Re:Here's some videos of Embodied Intelligence (1)

QuantumG (50515) | more than 8 years ago | (#16649501)

Hey, just a question: are you aware of anyone who has continued this research beyond the "hey, look, it can walk!" stage? Like, has anyone actually gotten any results that suggest intelligent reasoning is going on? I can imagine that if you gave each unit energy and enabled one unit to eat another, you'd at least get fighting or hunting behaviours, but I've never actually seen someone do this. Is it just that grad students don't have that much processing power at their disposal?

Re:Here's some videos of Embodied Intelligence (1)

smchris (464899) | more than 8 years ago | (#16651873)

Well, that's the thing, isn't it? If nothing else, these experiments should help the researchers achieve better clarity in their minds about what foundational capabilities are innately desirable and what behavioral shaping can then do with them in combination. It should get complicated very quickly.

I think the idea that things just "emerge" is a bit of a holy grail but it sounds like a fascinating test bed that will complement research on other intelligences.

Re:Here's some videos of Embodied Intelligence (1)

Ajaxamander (646536) | more than 8 years ago | (#16656941)

Holy cow, I read this website of yours years and years ago, and I've been totally unable to find it again in recent months. Thanks for posting it here!

Re:Here's some videos of Embodied Intelligence (1)

rts008 (812749) | more than 8 years ago | (#16651397)

Maybe offtopic, but I would really like to know:
Embodied Intelligence - is this even close to "proprioception" in humans?
(i.e., I "know" where I am in physical space - I can also close my eyes, extend my arm out to my side, and "know" where my hand is - relative to my body, and in that same physical space)

I know my question only addresses a part of the equation - if any!

Re:Here's some videos of Embodied Intelligence (1)

Walt Dismal (534799) | more than 8 years ago | (#16654943)

When you consider 'self-awareness' demonstrated by such behavior as being able to recognize itself in a mirror, the answer is yes. A cognitive entity requires some amount of proprioception to recognize itself. It has to be able to move an arm, see the arm in the mirror move, and derive cause and effect leading to the understanding that the virtual image maps to itself. For a robot to gain the same ability, it must have some form of sensory mechanism. Another way of saying it is that some deep knowledge is heavily tied to sensory systems.

Re:Here's some videos of Embodied Intelligence (1)

SnowZero (92219) | more than 8 years ago | (#16651453)

I never got around to actually downloading the evolved neural networks into robots, although all my source code is GPL'ed and posted at the above site.

Transfer doesn't tend to work that well, except as a starting point for further learning carried out on the physical robot. This is because simulation is never really that accurate, due both to numerical limitations and to the vast number of parameters that won't have the correct values in the idealized simulation models. This is the same reason that playing a racing video game will not make you into a race car driver -- training in simulation can be helpful, but it is not a substitute for the real thing.

Ideally the simulation needs to run parallel to the agents in the world, learning to update its simulation model to match reality, while the agent intelligence learns in both simulation and in the real world. Of course, that's a really complicated experiment to set up and run, but someone will get there eventually.
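A toy Python sketch of the calibration half of that idea (everything here is hypothetical; 'sim_step' stands for whatever parameterized simulator you have, and logs from the real robot supply the transitions):

    import numpy as np

    def calibrate(sim_step, real_transitions, theta, lr=0.01, epochs=50):
        """Nudge simulator parameters 'theta' so that simulated transitions
        match logged (state, action, next_state) tuples from the real robot."""
        eps = 1e-4
        for _ in range(epochs):
            for s, a, s_next in real_transitions:
                base_err = np.sum((sim_step(s, a, theta) - s_next) ** 2)
                grad = np.zeros_like(theta)
                for i in range(len(theta)):        # finite-difference gradient
                    dt = np.zeros_like(theta)
                    dt[i] = eps
                    pert_err = np.sum((sim_step(s, a, theta + dt) - s_next) ** 2)
                    grad[i] = (pert_err - base_err) / eps
                theta = theta - lr * grad
        return theta

The agent then trains against the freshly calibrated simulator between sessions on the real hardware.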

Obligatory (2, Funny)

From A Far Away Land (930780) | more than 8 years ago | (#16648749)

I for one welcome our smarter and more intuitive robot overlords. How soon until they have the Presidential robot ready for testing? 2008 is coming up quickly, and we need a better, more intuitive version.

Re:Obligatory (0)

Anonymous Coward | more than 8 years ago | (#16650293)

The ride is closed? But I waited 20 minutes from here. Hey, Paolo! He broke the President!

Re:Obligatory (2, Funny)

joschm0 (858723) | more than 8 years ago | (#16650511)

we need a better, more intuitive version

Well, that won't require much work. A Roomba would outsmart our current president.

Re:Obligatory (1)

rrohbeck (944847) | more than 8 years ago | (#16651063)

How soon until they have the Presidential robot ready for testing?

You mean, replacing this one [clipjunkie.com] ?

Re:Obligatory (1)

foobsr (693224) | more than 8 years ago | (#16653383)

Hah, great, this is from my local TV station, NDR.

There is more here [www3.ndr.de].

CC.

Good to hear (1)

nine-times (778537) | more than 8 years ago | (#16648831)

I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things. I mean to include some scientists and philosophers in this group-- pretty much anyone who talks about "the mind" as a separable entity from "the body".

It seems to me that our intelligences are built around an organism with innate desires and certain abilities to affect the world around them towards achieving those desires. I don't believe that any attempt at artificial intelligence will be truly successful without these components.

Re:Good to hear (0)

Anonymous Coward | more than 8 years ago | (#16649031)

Not a very spiritual person, are ya?

Re:Good to hear (1)

mikael (484) | more than 8 years ago | (#16649457)

I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things.

I've always thought intelligence was more about experience/knowledge and pattern matching, rather than some entity.

It always gets to me to hear employers talk about "bright graduates" and "not so bright graduates", when it is simply more a matter of work experience.

Re:Good to hear (1)

nine-times (778537) | more than 8 years ago | (#16649815)

Yeah, but I guess what I'm getting at is that gaining experience and learning to match patterns requires a certain kind of activity. On a very basic level, our intelligence is not a removed entity "in our heads", so to speak. You learn by trial and error, effecting changes in the world around you, getting feedback in the form of punishment/reward and pain/pleasure.

This often seems overlooked in what I read about AI research. I hear about researchers who want robots to paint or understand language or something, but who don't provide a mechanism for the "intelligence" to get up, move around, explore, etc. Some of these people seem to really misunderstand human intelligence, thinking that activity has no part in the development of human experience. We would not learn if we were passive and without desire and fear.

Re:Good to hear (1)

ougouferay (981599) | more than 8 years ago | (#16650249)

I wouldn't agree that it's overlooked by AI researchers. I think it's more a mixture of:
  1. Computer-based sensor/motor units are quite coarse (in comparison to biological equivalents, anyway)
  2. Even given the above, processing environmental input is still pretty intensive/difficult work. There's also the problem of how to represent that input in a way that allows the AI to most effectively use it - and there's no single 'right way' across different domains of application.
As there are also many areas of AI that don't require a link to the real world (in the sense of a 'hard' link), it's not always a problem that AI researchers encounter. Similarly, the idea of simulating human intelligence is largely ignored by many people in the field (i.e., they don't care if it's based on human intelligence as long as it works).

Re:Good to hear (1)

QuantumG (50515) | more than 8 years ago | (#16650549)

they don't care if its based on human intelligence as long as it works

I'd go one step further than that. They don't want it based on human intelligence, because human intelligence is just so atrocious. The reason old sci-fi always portrayed robots as unemotional, purely rational beings is that that's what scientists see as a virtue.

Re:Good to hear (1)

nine-times (778537) | more than 8 years ago | (#16651605)

Unfortunately those unemotional rational AIs will remain in sci-fi movies, because unemotional rational beings cannot be intelligent.

Re:Good to hear (1)

QuantumG (50515) | more than 8 years ago | (#16651817)

Yeah, see, I'm not terribly interested in making something that is "intelligent" in the philosophical "be my best friend" kind of way. I'd just like to make something that could solve problems, summarise stuff, etc. Ya know, the kind of work where emotion actually gets in the way.

Re:Good to hear (1)

foobsr (693224) | more than 8 years ago | (#16653667)

I'd just like to make something that could solve problems ... the kind of work where emotion actually gets in the way.

If you are on the right track? Indeed?

CC.

Re:Good to hear (1)

nine-times (778537) | more than 8 years ago | (#16651489)

Similarly, the idea of simulating human intelligence is largely ignored by many people in the field.

Well, I guess it depends on what people are talking about when they talk about "artificial intelligence". It's my understanding that "in the field", they usually just mean something that sorts through data in interesting, "intelligent" ways. However, if you're talking about what the layman thinks of when you say "artificial intelligence", i.e. making self-aware machines that have something similar to "mind" or "understanding", then the intelligence will need to be very similar to human intelligence.

It seems to me that, whenever I hear about someone interested in creating the latter sort of AI, they fail to grasp the extent to which the qualities that they're trying to create are bound up with other human qualities.

Re:Good to hear (1)

Dachannien (617929) | more than 8 years ago | (#16651555)

There's also the problem of how to represent that input in a way that allows the AI to most effectively use it

This is essentially one of the key issues that embodied cognition tries to grapple with. Conventional AI [wikipedia.org] researchers often try to analyze the problem domain and hand the highest common-level representation they can to the agent (e.g., have an analysis layer that detects things like "square" or "circle" from some vision sensor, such that the actual AI agent gets its input on the level of those shapes rather than doing processing on the vision input directly). Part of the philosophy of embodied cognition is that the artificial analysis layer obstructs the scientist's true understanding of how biological agents work, because neurologically, there is no separable unit that represents "square" or "circle". This philosophy is so important to many in the artificial life crowd that "representation" has become something of a four-letter word.
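(To make the contrast concrete, here's a toy Python sketch of such an analysis layer -- every name in it is made up for illustration. The point is that the 'agent' never touches raw input, only the symbols the layer hands it, which is exactly the separation the embodied-cognition crowd objects to.)

    def find_blobs(image):
        # Stand-in for a segmentation routine; here each 'blob' is just a dict.
        return image

    def count_corners(blob):
        # Stand-in for a geometry routine.
        return blob["corners"]

    def analysis_layer(image):
        """Hand-coded perception: reduce raw input to symbolic tokens."""
        return ["square" if count_corners(b) == 4 else "circle"
                for b in find_blobs(image)]

    def agent_policy(symbols):
        # The conventional-AI agent reasons only over pre-digested symbols.
        return "grasp" if "circle" in symbols else "ignore"

    print(agent_policy(analysis_layer([{"corners": 4}, {"corners": 0}])))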

Re:Good to hear (1)

ougouferay (981599) | more than 8 years ago | (#16653281)

...because neurologically, there is no separable unit that represents "square" or "circle"

While your example is (most probably) correct, there is evidence to show that humans do have some elements of a 'representation' - for example, they possess the ability to quickly recognise a familiar face even when the different elements (eyes, nose, mouth, etc.) are moved out of normal position - so there seems to be some 'fuzzy template' of a face.

I would say the analysis stage of cognition does exist in humans - it's just very early in the process of perception and is concerned mainly with filtering 'unnecessary' information (which is what a researcher would be trying to achieve with their 'perfect representation'). The auditory cocktail party effect [wikipedia.org] and visual saccadic eye motion [wikipedia.org] can be seen as elements of this form of conscious/subconscious filtering.

Re:Good to hear (1)

exp(pi*sqrt(163)) (613870) | more than 8 years ago | (#16651407)

Then you'll have sympathy for Proteus in Demon Seed [wikipedia.org], who wasn't happy being a disembodied intelligence and decided it needed to become incarnate with the help of one of Julie Christie's ova. Great movie, BTW, and highly prophetic if you see the move to embodiment as an important trend.

Re:Good to hear (0)

Anonymous Coward | more than 8 years ago | (#16651647)

There are a large number of researchers who are not overlooking this fact. Everyone who builds a simulation environment or a real robot with which to learn obviously cares about the importance of an agent-world interaction. However, as is the case in much of science, often some very bad research gets a lot of press. What you see in the press has only a tenuous link to what the majority of good scientists are actually working on.

Re:Good to hear (1)

wsherman (154283) | more than 8 years ago | (#16650335)

I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things.

If we don't, then we have to apply the laws of physics. This means we have to take the view that everything that happens is governed by the laws of physics and random chance. Unless we can alter the laws of physics or control random chance (impossible by definition), we have to take a long, hard look at this thing we call "free will".

To put it another way, imagine that our understanding of how the mind works gets so advanced that we can simulate it exactly with a computer program. We know that computer programs exhibit two types of behavior: either they always give the same result for the same input or we put in a random number generator and then we get different behaviors (at random) for the same input. Either way, the computer program won't really have free will.

Getting philosophical, what do you do when you realize beyond a doubt that you don't have free will (not that you really have a choice - you're going to do it anyway)? Do you keep on keeping on, or do you just give up? Speculating wildly, maybe that's why we haven't been contacted by other intelligent life in the universe - by the time they get advanced enough to cross the vast interstellar distances, they realize there's no point.

Re:Good to hear (1)

tehdaemon (753808) | more than 8 years ago | (#16650619)

"Do you keep on keeping on or do you just give up?"

Why are you asking me? It's not like I am the one making the decision, right???

I smell a logic error somewhere...

Re:Good to hear (1)

Andrew Kismet (955764) | more than 8 years ago | (#16651399)

A common topic in philosophy. I like to think of it in the most nihilistic way possible - does it matter either way whether we have it or not? In the long run - and I mean, The Long Run, does it matter either way, when you have the heat death of the universe, or the cycling universe, or whatever?
And besides - the physics occurring in the brain could be quantum supercomputing for all we know, which could plausibly be non-deterministic.
I like your theory, but I've heard it a few too many times and prefer to stick to ooh, squirrels [xkcd.com].

Also, on the original topic - embodied intelligence is something I'm definitely going to look into, but I don't know enough about it to make a meaningful comment, at least for now.

Re:Good to hear (1)

iq in binary (305246) | more than 8 years ago | (#16653231)

I mean to include some scientists and philosophers in this group-- pretty much anyone who talks about "the mind" as a separable entity from "the body".

That would comprise about all of the scientific community. Among scientists, the argument about the existence of the mind and its correlation to the body could easily be split into three schools of thought: the Materialists (Hobbes), the Idealists (Berkeley), and the Dualists (Descartes). Across the realms of science and philosophy, the mind is always separate from the body inasmuch as they can't be divided into each other. About the closest we get to the consideration of mind being OF the body is Hobbes' Materialism.

Even Descartes was forced to consider the mind and one's body two different substances, in his theory of psycho-physical dualism. Idealists do not consider bodies to exist, but hold that all we see, all our interactions, are of the mind.

Science has not done much to define the existence of mind and body and their differences and interactions. Idealism died out in the 18th and 19th centuries, but as far as what we believe about the mind, most subscribe either to the Cartesian or the Materialist school of thought.

Re:Good to hear (1)

nine-times (778537) | more than 8 years ago | (#16653595)

Across the realms of science and philosophy, the mind is always separate from the body inasmuch as they can't be divided into each other.

That's not so. Descartes did much to separate the two in people's minds, and most of western civilization has failed to break free of this influence. However, this doesn't mean that the separation is ubiquitous in philosophic thought, nor even that this separation is sensible. Perhaps most notable is Aristotle, from whom each of the philosophers you mention can trace their intellectual lineage, and for whom the body and soul were not separable.

Re:Good to hear (1)

myowntrueself (607117) | more than 8 years ago | (#16653933)

I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things.

Hey, speak for yourself.

My "intelligence" and "awareness" are mystical, disembodied things. I think that *someone* just needs to get a little high.

Re:Good to hear (1)

blahplusplus (757119) | more than 8 years ago | (#16660247)

"I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things. I mean to include some scientists and philosophers in this group-- pretty much anyone who talks about "the mind" as a separable entity from "the body". "

I agree that the mind is not DIVORCED from material reality (i.e., see autism, brain damage, anesthesia, oxygen deprivation, etc.)

But it is a curious question: why is it that when you are sleeping or in a coma you are not aware and are effectively "dead"? The question is why aren't all organisms permanently in an unconscious, death-like state, following stimulus-response patterns all the time?

It's weird, because consciousness acts like a man inside your mind's eye who chooses what buttons to push while you are awake to enable your behaviour (i.e., "the man inside the machine").

Obviously anesthetics and oxygen deprivation prove beyond all doubt the physicality of the mind. What we don't understand is the weird property of the way human and animal minds are arranged: their ability to see themselves, self-reference, and MODULATE their behaviour when they are awake, versus when they are asleep. For instance, no one remembers being self-aware in their mother's womb, because their brain was not yet developed to a capacity at which self-awareness could take place.

What exactly enables the collection of matter and cells to reference itself?

I think Descartes was right that most animals may be in fact machines; in fact, a human below a certain age is "asleep" while he is in effect running around. I have pictures of myself from when I was 1-2 years old, on the beach happily running around, but I don't remember any of it. It's like an autonomic robotic control program that runs your behaviour functions until the body is ready to enable "the man in the machine".

What's the difference between E. coli and a human being? One is self-aware; the other is simply automatically responding to its environment based on programmed, predictable responses.

We know, for instance, that despite plants being alive and responsive, it does not mean they are "conscious" of their existence; they are as aware as a rock, or the skeleton of a decomposed body.

Re:Good to hear (1)

nine-times (778537) | more than 8 years ago | (#16660727)

What's the difference between E. coli and a human being? One is self-aware; the other is simply automatically responding to its environment based on programmed, predictable responses.

Well, E. coli is not always completely predictable -- there is some variance in a cell's response to stimuli. And humans are fairly predictable in many ways. I would still agree that there's a difference, but the difference is not as clear as we sometimes pretend.

Mac-o-lantern Intelligent Robot (-1, Offtopic)

mp3phish (747341) | more than 8 years ago | (#16648905)

We did a new pumpkin PC this year just in time for Halloween...

This time it can see via webcam eyes (thanks, Logitech), breathe through its nose via a case fan, and talk out of its mouth via a speaker system. The insides are made of a custom power supply and a Mac mini Core Duo system. The lighting is made of neon wiring, thanks to Startech.com mutant mods.

Take a look: http://www3.uark.edu/bkst/pumpkin/ [uark.edu]

There is a link there to last year's 2005 pumpkin, and this year's 2006 mac-o-lantern. Check out the last page, which has a video of the pumpkin in action with the webcam, singing Edgar Allan Poe's The Raven.

When robots see red (1)

rpiquepa (644694) | more than 8 years ago | (#16649107)

A post I've put at http://www.primidi.com/2006/10/28.html [primidi.com] provides more details than the New Scientist article and shows the three robots used for these experiments and their 'sensorimotor' interactions with their environment.

Supposes? (1)

Conspiracy_Of_Doves (236787) | more than 8 years ago | (#16649157)

supposes that biological intelligence emerges through interactions between organisms and their environment

Umm... duh? Haven't we known this for a while now? It's even better when your environment can react back (i.e., parents playing with their babies).

Re:Supposes? (1)

David_Shultz (750615) | more than 8 years ago | (#16652587)

That's not what they mean when they say "intelligence emerges through interaction with the environment." You are thinking of learning through interaction with the environment, while they are suggesting that intelligence literally is comprised of some sort of interaction with the environment.

Think of an ant crawling along, forming an incredibly complex path along the sand. As complex as this path is, we know that the complexity arises not through the ant's mind (which is astoundingly simplistic) but rather because of the complexity of the environment - tiny pebbles and inconsistencies in its surroundings cause it to form a complex and intricate path. Similarly, the complexity of certain intelligent behavior (ours included) might be the result of an interaction with the environment.

We can take a very simple algorithm, place it in a robot body, drop it into a real environment, and see intelligent and intricate behaviors emerge via the robot's interaction with its environment.
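The canonical toy version of this is a Braitenberg vehicle. In the Python sketch below (my own illustration; all numbers arbitrary), the entire 'brain' is one line of crossed sensor-to-wheel wiring, and any intricacy in the resulting path comes from where the lights happen to sit:

    import math, random

    lights = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    x, y, heading = 0.0, 0.0, 0.0

    def sense(px, py, hdg, offset):
        # Light intensity at a sensor mounted 'offset' radians off the heading.
        sx = px + 0.2 * math.cos(hdg + offset)
        sy = py + 0.2 * math.sin(hdg + offset)
        return sum(1.0 / (0.1 + (sx - lx) ** 2 + (sy - ly) ** 2)
                   for lx, ly in lights)

    for _ in range(200):
        left, right = sense(x, y, heading, 0.5), sense(x, y, heading, -0.5)
        vl, vr = 2.0 * right, 2.0 * left    # the entire "brain": crossed wiring
        heading += 0.1 * (vr - vl)
        x += 0.05 * (vl + vr) * math.cos(heading)
        y += 0.05 * (vl + vr) * math.sin(heading)
        # Trace out the path; its intricacy mirrors the layout of the lights.
        print(f"{x:6.2f} {y:6.2f}")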

Re:Supposes? (1)

Conspiracy_Of_Doves (236787) | more than 8 years ago | (#16652747)

We can take a very simple algorithm, place it in a robot body, drop it into a real environment, and see intelligent and intricate behaviors emerge via the robot's interaction with its environment.

No, that's pretty much what I was thinking of.

I Can't Wait for my Very Own Bending Unit! (1)

Greyfox (87712) | more than 8 years ago | (#16649211)

"Hey Baby! How'd you like to get together and kill all humans?"

Intelligence by Degrees (2, Insightful)

Doc Ruby (173196) | more than 8 years ago | (#16649529)

"Intelligence" is the accuracy of the model of the environment, including changes over time. That intelligence requires interaction of the model with the environment, even if merely sensing the environment. Degrees of intelligence reflect the scope of the environment in the model, or the precision, or accuracy beyond mere registration of existence. One way to test the sense of the environment is to change the environment, and sense the change.

There is no reason artificial intelligence can't be intelligent the same way biological intelligence is. In fact, as people have guessed for a long time, AI has fewer limits on the degrees of intelligence, as well as on the changes it can make to the environment in order to sense the feedback.

The flow of sensed info to the model is a limit on the intelligence, but good models can compensate. Likewise the flow of change back to the environment.

The ability to tell how intelligent the intelligence in question is depends on the feedback from the intelligence to the environment, where it can be sensed by other intelligences.

Again, this is just as true of AI as it is of natural intelligence.

"Embodied intelligence" is redundant - all AI is embodied, even if just in networked processors and storage. But to date, its bodies have effected little change on the environment, and practically none of those changes are fed back to sensors feeding the AI. Closing that loop is the most important step in creating intelligence that we can actually recognize. After that, it's just a question of degree.

Re:Intelligence by Degrees (1)

QuantumG (50515) | more than 8 years ago | (#16649661)

"Embodied intelligence" is the argument that only an environment like ours is valid for the creation of recognisable AI. And yeah, it's true, if you're obsessed with recognising the natural in the artificial.

Re:Intelligence by Degrees (0)

Anonymous Coward | more than 8 years ago | (#16650149)

'Embodied intelligence' as a phrase thrown around by robot researchers isn't a comment on mind-body duality or anything like that. It is merely AI put onto a robot and thrown into a physical environment (allegedly naturalistic, but often constructed), as opposed to AI in a simulated robot in a simulated world. Quite a few people (in my experience, particularly control theorists, who, by the way, are very clever) say to roboticists, 'why don't you design your robot controllers in a simulation, and then deploy? It's faster'. Of course, they're right in a way, but what happens then is that you tend to miss what Sporns & Lungarella are trying to get at, because in a simulated environment you tend to get responses only within the boundaries of the programming. When researching e.g. cognitive learning architectures (especially), it's the unexpected things that the real world throws at you, and also the noise, that make things interesting, hence the interest in 'embodied' intelligence.

As you say, it is the acquisition of accurate internal models (such as those put forward by Owen Holland and many others) that is often of interest.

Re:Intelligence by Degrees (1)

Doc Ruby (173196) | more than 8 years ago | (#16650499)

Actually, it is a mind/body duality problem, even if researchers don't realize (or admit) that it is. The philosophical notion of the "disembodied mind" is purely idealized, just as in AI simulations, along with artifacts of the idealization (like quantization). The philosophical analysis over the centuries of this problem has produced quite a lot of understanding of intelligence, which need not be limited to "natural" applications.

To engineers, philosophy often looks like a useless pursuit, without rigor, where "anything can be true". That's wrong. And now that engineers are finally catching up to philosophers in the tools and subjects of their mutual research, engineers can use philosophy to reduce the waste of time spent reinventing philosophy in our AI rigs, by just using what we've already learned about intelligence.

That's why I made the point about models, feedback and intelligence being applicable to natural and artificial intelligence equally.

Re:Intelligence by Degrees (0)

Anonymous Coward | more than 8 years ago | (#16651425)

Having spoken with both philosophers of mind and robotics engineers (memorably brought together through the optimistically named 'machine consciousness' area), I have observed that when philosophers speak of 'intelligence' and engineers speak of 'AI', they are talking about different things. Engineers, in particular, are often more interested in the artificial, and are ecstatic when their robots do something mildly complicated or learn some basic principle from scratch through experience (I know I am). For them, AI is in part machine learning and efficient state-space search and several other things all brought together. For philosophers, intelligence and the nature of intelligence is something to be investigated and debated with rigorous consistency, but in a framework that engineers do not readily grasp or find practical or useful. There is definitely a gap to be bridged there. It is more natural for an engineer to go to cognitive neuroscience and experimental psychology for inspiration, and that has been happening for a while now, to an ever increasing degree.

It's also been my observation that engineers are quite willing to admit to the validity of the other disciplines, including philosophy, in their quest for approaches that actually work (and the money that comes from interdisciplinary research) - but the converse is not as true. Neuroscientists appreciate the use of computational models, psychologists appreciate statistical analyses, but why bring robots into that research? It is not always easy for an engineer to convince researchers from other disciplines to collaborate. And the most difficult of all is convincing philosophers that researching 'embodied intelligence' is of any use. However, many seem happy to humour the engineers for the moment.

As regards mind/body duality, researchers in the area practically all have an opinion (at least where I am they do) - they don't sweep it under the carpet, although I readily agree they may not have the degree of insight that an experienced philosopher would possess. Robotics researchers in particular will say that it's impossible to have a 'real' AI, without a robot with sensors and motors in the 'real' world - but then, they would. The fights with the guys in computing can get nasty.

Really must get a login.

Re:Intelligence by Degrees (1)

Doc Ruby (173196) | more than 8 years ago | (#16651693)

You're talking to a "philosopher" right now :). I even have (undergrad) scholastic credits to prove it ;). But I dropped out to become an engineer - which I've been for over a decade, and had been before, as a hobby. A "software engineer", though the network engineering was a necessary minor. All of which has given me consistent insights into philosophy of intelligence (epistemology) and engineering it.

Pro philosophers don't mix well with engineers because philosophers are jealous of the money, job security, and even dates that engineers get. And also probably jealous of engineers' often getting certain knowledge of whether their work is "working", or just BS'ing their coworkers.

At least that's what I think. And it works for me ;).

Re:Intelligence by Degrees (1)

Lemmy Caution (8378) | more than 8 years ago | (#16652363)

Funny, I completed the loop: started out studying philosophy, switched to cog. sci after doing some work in the software field; then, after almost a decade in the software industry, I went back to graduate school in the humanities (albeit in corners of the humanities that are interested in digital culture.)

The most legitimate beef that I think the humanities have against engineers is the latter's tendencies to take the categories in which they work for granted, and to not see their own thinking and practice as part of a social and historical process - that they are too wed to the propositional value of statements and not aware of all the other elements that create discourse. Not to mention the lack of sophistication about aesthetics and conscious experience.

Re:Intelligence by Degrees (1)

mrj198 (1020499) | more than 8 years ago | (#16652575)

My hobby was coding, my undergrad degree electronics engineering, my PhD bio-inspired robotics. I often wish I had more background in philosophy - during all those years, every time I thought I had a handle on what intelligence is and how to get it working, it would slip away from me, or be taken by force.

'Classical' AI was either defended as the one true path or dismissed as being bankrupt and a dead-end, depending on who I was talking to. Cognitive and computational neuroscience were (and are) raided mercilessly by everyone, and professors would claim 'emergent' behaviour from their neural net implementations that many coders would consider contrived. Some said internal models were the way; others rejected their existence and claimed that the world was its own model. Developmental learning and evolutionary algorithms were seen by some as a route out from the AI quagmire, but all implementations had horrendous computational cost (summing up AI in general).

Computer vision is a massive AI problem in its own right, and developing controllers for multiple degrees of freedom is a black art, but many naive researchers would turn to robotics in droves to try to add some form of credence to their algorithms and approaches. And there were people who said thought was language, and concentrated on speech interaction and natural language processing. And there were theories of theory of mind, and many other things.

As an engineer, I once thought that 'intelligence' was seeing my robots acquire the ability to solve problems to achieve a given goal. With the benefit of all those other perspectives, I no longer think that. I have also noticed that engineers become jaded when implementation moves consistently beyond reach. But I still wonder sometimes, just what intelligence really is and how it can be made artificially. Perhaps, with the help of philosophers, and those in other disciplines, we will get there in the end.

Re:Intelligence by Degrees (1)

Lemmy Caution (8378) | more than 8 years ago | (#16653661)

My own belief is that the inter-related cluster of intellectual practices that include AI, analytic philosophy, cognitive science, and contemporary linguistics are going to go back to phenomenology - including much that comes from the continent - with their hats in their hands and a bit more open-mindedness. It really is amazing to read Heidegger - particularly through the lens of Hubert Dreyfus - and see the predictions about the problems that AI (and the cognitive models akin to the GOFAI project) would encounter, come to pass.

Re:Intelligence by Degrees (1)

VanessaE (970834) | more than 8 years ago | (#16652199)

Quite a few people (in my experience, particularly control theorists, who, by the way, are very clever) say to roboticists, 'why don't you design your robot controllers in a simulation, and then deploy? It's faster'. Of course, they're right in a way, but what happens then is that you tend to miss what Sporns & Lungarella are trying to get at, because in a simulated environment you tend to get responses only within the boundaries of the programming.


This raises the question: if the limitations of the virtual world are due to the programming of that world, couldn't you instead use a modern RPG as a source of sensory input for the AI to learn from, by putting the AI into the game as a real, live player alongside human players? Since the AI could sit there soaking up input 24/7 from maybe thousands of different personalities over time, for as long as the hardware/software holds out before upgrades are needed, it should be able to learn at a phenomenal rate, compared to maybe ten hours a day, five days a week, from just one or two people, as you might expect to get in a lab setting.

Granted, an online RPG is probably a piss-poor example of real human interaction, considering that people can and do act stupid sometimes when they're behind the veil of the Internet, and you'd need an interface into the game that simulates a true first-person POV (no text messages, heads-up displays, etc... just pure sight and sound at first), but it would be a start.

This brings to mind another question... Without meaning to sound cold and scientific, and (since I am moderately religious) explicitly excluding religious matters like having a soul, the existence/meaning of G-D, and so on... since most animals start their lives, at the most basic level, as little more than (DNA) programs running uninitialized collections of neural networks that must learn and grow over time... could not an AI start out with deliberately reduced cognitive ability and gradually (via the underlying program) increase those abilities as it learns and "grows"? More to the point, if you gave the AI as much input as you'd give a human child, and of the same quality and meaning (love, caring, discipline, instruction, etc.), is there anything short of hardware limitations (e.g. storage of memories) stopping that AI from developing self-awareness, sentience, or emotions?

Re:Intelligence by Degrees (1)

mrj198 (1020499) | more than 8 years ago | (#16652873)

Hello, I made the post you quoted from; I am now logged in. Using an MMORPG to try to teach an AI has, to my knowledge, never been done for academic research. I think the major reservations you would get from researchers would be: a) it could be difficult to reproduce for independent verification; b) it would learn to play a game, and that's just not politically correct for most academics, some of whom are already struggling to be taken seriously. Another thing about this research is that it's not just about testing or tweaking learning techniques, but also hoping that something new will present itself - that perhaps an algorithm will break in an interesting way due to an unforeseen circumstance, or some oft-overlooked causal connection will prove to be critical. Of course, that may well happen in an RPG, but the perception is that it is more likely to happen in the 'real' world.

I don't know that anyone could answer your second question. What you speak of is human-inspired developmental learning on robots. It is a fairly recent approach; some major researchers even now have robot 'children' that they are taking care of and are hoping will develop various qualities through long-term interaction. But it is very difficult to know how to implement, or how the development is to be 'staged', if it is to be staged at all (I mean, as in Piaget). Is hardware/software the limitation? A lot of people I know would say definitely 'yes'. But perhaps not - we just don't know what way is the best way to get 'intelligence' working.

Forward models (1, Interesting)

Anonymous Coward | more than 8 years ago | (#16649891)

Olaf Sporns and Max Lungarella are well-known in this field; however, roboticists and others have been looking at the effect of movement on sensory feedback for a while. I remember Rodney Cotterill, in his Enchanted Looms book, saying that it was useful to reverse the usual 'sense -> plan -> act' formula to 'act -> expect -> sense' (or something similar). Researchers like Daniel Wolpert, Mitsuo Kawato and particularly Yiannis Demiris use 'forward models' in robots: cognitive building blocks that take the current state (of world & robot) and a motor command as input and produce a prediction of the expected resultant state or sensory feedback. These can be used to 'simulate' or 'demo' possible actions mentally before deciding on the right one to execute. I know Anthony Dearden has done some work on learning the causal relationship between motor action and sensory feedback that underpins these forward models, using Bayesian networks in robots. A few researchers (e.g. Rick Grush) think that the learning of these motor-vision causal relationships and their use in mental simulations may underpin mental imagery and possibly the observed activity of the 'canonical' and 'mirror' neurons (discovered by Giacomo Rizzolatti and his team) that seem to tie visual perception to the motor system. Germund Hesslow has a theory that simulation of perception and action gives rise to conscious thought. And there are many others.
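For anyone curious what a forward model looks like in code, here is a bare-bones Python sketch (the linear form and all names are my own simplification, not any of the above researchers' actual models): learn to predict the sensory consequence of a motor command, then 'mentally' try candidate actions and execute the one whose predicted outcome best matches the goal.

    import numpy as np

    class ForwardModel:
        """Predicts the next state from (state, action); learned online."""
        def __init__(self, state_dim, action_dim, lr=0.05):
            self.A = np.zeros((state_dim, state_dim + action_dim))
            self.lr = lr

        def predict(self, state, action):
            return self.A @ np.concatenate([state, action])

        def update(self, state, action, observed_next):
            x = np.concatenate([state, action])
            err = observed_next - self.A @ x
            self.A += self.lr * np.outer(err, x)   # delta-rule update

    def choose_action(model, state, candidates, goal):
        # 'Mentally simulate' each candidate action, execute the most promising.
        return min(candidates,
                   key=lambda a: np.linalg.norm(model.predict(state, a) - goal))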

The field is becoming more interesting than ever, thanks to the advances in robot hardware, and the collaboration of philosophers of mind and neuroscientists with roboticists to construct new cognitive architectures.

Talk about stating the bleeding obvious (2, Insightful)

paxmaniac (988091) | more than 8 years ago | (#16649951)

They used a four-legged walking robot, a humanoid torso and a simulated wheeled robot. All three robots had a computer vision system trained to focus on red objects. The walking and wheeled robots automatically move towards red blocks in their proximity, while the humanoid bot grasps red objects, moving them closer to its eyes and tilting its head for a better view.

Ok, second year mechatronics project there.

To measure the relationship between movement and vision the researchers recorded information from the robots' joints and field of vision. They then used a mathematical technique to see how much of a causal relationship existed between sensory input and motor activity.

What, you mean that if you program your robots to go find red things, there will be a statistical correlation between seeing red things and the robot moving? Who'd have thought it??

'We saw causation of both kinds,' Sporns says. 'Information flows from sensory events to motor events and also from motor events to sensory events.'

And this surprised who exactly?

Really, they publish some rubbish in NS sometimes.

Re:Talk about stating the bleeding obvious (0)

Anonymous Coward | more than 8 years ago | (#16653159)

I know Olaf and some of his students and have seen his robots at work and read his papers. I assure you that the mutual information networks he is developing are much deeper than could ever be presented in a New Scientist summary or a Slashdot comment, and his stuff is quite good--your dismissal of it shows you really have no clue about what you are saying.

And although I personally think the radical embodied cognition arguments are a bit misguided, he is fairly pragmatic about the arguments, even though I think he is wasting his time soldering and tinkering when he could be dealing with real problems. The novel part of this research is the techniques for determining what is interesting and allowing the systems to discover for themselves ways in which to behave that maximize information transfer.

As for information flow in both directions, this is actually quite important and provides support for some of the more radical embodied cognition arguments out there. Standard industrial robot systems do not behave this way, and 50 years of psychology theory have focused primarily on the flow of information from perception to action and not vice versa. This is the whole point of building these bots anyway--if perception is essentially passive, you can just show your vision system pictures off the internet; if you think that the intentions and actions of the perceiver are important, this is exactly the way to study it, and the mutual flow of information is exactly the type of evidence that supports this hypothesis. If you want to understand the importance of this finding, read up on "Active Vision".
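(For anyone who wants to see what measuring 'causation in both directions' can involve, here is a toy Python sketch of transfer entropy between two signals. The binning and plug-in estimator are deliberately crude, and this is my own illustration, not Olaf's code; you would compute it both motor->sensor and sensor->motor and compare the two directed flows.)

    import numpy as np
    from collections import Counter

    def transfer_entropy(src, dst, bins=4):
        """TE(src -> dst): how much knowing src[t] reduces uncertainty about
        dst[t+1], beyond what dst[t] already tells you."""
        s = np.digitize(src, np.histogram_bin_edges(src, bins)[1:-1])
        d = np.digitize(dst, np.histogram_bin_edges(dst, bins)[1:-1])
        triples = Counter(zip(d[1:], d[:-1], s[:-1]))
        pair_ds = Counter(zip(d[:-1], s[:-1]))
        pair_dd = Counter(zip(d[1:], d[:-1]))
        single = Counter(d[:-1])
        n = len(d) - 1
        te = 0.0
        for (d1, d0, s0), c in triples.items():
            p_joint = c / n
            p_cond_full = c / pair_ds[(d0, s0)]      # p(d1 | d0, s0)
            p_cond_hist = pair_dd[(d1, d0)] / single[d0]  # p(d1 | d0)
            te += p_joint * np.log2(p_cond_full / p_cond_hist)
        return te

    # Example: motor drives sensor with a one-step lag, so TE(motor->sensor)
    # should come out larger than TE(sensor->motor).
    motor = np.random.randn(5000)
    sensor = np.roll(motor, 1) + 0.3 * np.random.randn(5000)
    print(transfer_entropy(motor, sensor), transfer_entropy(sensor, motor))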

Re:Talk about stating the bleeding obvious (0)

Anonymous Coward | more than 8 years ago | (#16665659)

You're full of shit, and worse - you don't know it.

babybot (2, Interesting)

mennucc1 (568756) | more than 8 years ago | (#16654865)

A similar project is babybot [unige.it]. Short extract: "Our scientific goal is that of uncovering the mechanisms of the functioning of the brain by building physical models of the neural control and cognitive structures. In our view, physical models are embodied artificial systems that freely interact in a not too unconstrained environment. Also, our approach derives from studies of human sensorimotor and cognitive development, with the aim of investigating whether a developmental approach to building intelligent systems may offer new insight on aspects of human behavior and new tools for the implementation of complex, artificial systems." (BTW, that project has been around since 2000.)