
Babybot Learns Like You Did

Zonk posted more than 8 years ago | from the i'm-still-working-on-not-knocking-things-over dept.


holy_calamity writes "A European project has produced this one-armed 'babybot' that learns like a human child. It experiments and knocks things over until it can pick them up for itself. Interestingly the next step is to build a fully humanoid version that's open source in both software and hardware."


AI Learning (5, Interesting)

fatduck (961824) | more than 8 years ago | (#15275838)

From TFA: "The goal is to build a humanoid 2-year-old child," explains Metta. This will have all of Babybot's abilities and the researchers hope it may eventually even learn how to walk. "It will definitely crawl," says Metta, "and is designed so that walking is mechanically possible." Not a bad goal at all, and if it's open source they can't cheat by promoting a specific goal such as walking in the software. Reminds me of Prey where they couldn't figure out how to get the nanomachine swarm to fly so they let its AI "learn" how to do it on its own.

Re:AI Learning (3, Insightful)

EnsilZah (575600) | more than 8 years ago | (#15275916)

They may not use a simple goal like walking, but in order to learn there has to be some sort of reward/punishment system in place.
Real babies have goals like getting their parents' attention, being fed, keeping warm.
I wonder what sort of goals a robot baby has to have to learn in the same way a real one does.

Singularity Alert! (0)

Anonymous Coward | more than 8 years ago | (#15276303)


Don't Worry [wired.com] -- it's only the end of the human era.

Re:AI Learning (3, Funny)

bmo (77928) | more than 8 years ago | (#15275939)

TFA Said: "The goal is to build a humanoid 2-year-old child"

You said: Not a bad goal at all

Apparently you've never been around a 2-year-old.

--
BMO

Re:AI Learning (0)

Anonymous Coward | more than 8 years ago | (#15276061)

Insert appropriate comment about /.ers not being able to attract a mate to get said 2-year-old child.

Then again, there is always adoption.

Will gay robots be able to adopt children?

Re:AI Learning (0)

Anonymous Coward | more than 8 years ago | (#15277771)

Wait 'till they build robotic teenagers: then it will hit the fan.

Teen robots with black trenchcoats. And guns. And lasers on their frakin' heads.

What sound does a baby make in a blender?... (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15276099)

...I don't know, I was jacking off too hard to hear it.

What are the learning principles? (1)

MOBE2001 (263700) | more than 8 years ago | (#15276515)

Not a bad goal at all, and if it's open source they can't cheat by promoting a specific goal such as walking in the software.

Yes. AI scientists have a bad habit of making implausible claims for their creations. The open approach will keep them honest and is to be commended. At the very least, such a robot needs several types of learning functions including perceptual, short and long term memory mechanisms, concept formation, pattern completion, anticipatory behavior, motor learning and coordination, operant and classical conditioning, etc... Does anybody know what sorts of NNs and what learning principles are being used in this bot?

fp (0)

Anonymous Coward | more than 8 years ago | (#15275841)

baby says fp!

Open Source? (4, Funny)

Anonymous Coward | more than 8 years ago | (#15275853)

Aren't you afraid this poor open source robot will get exploited by the other robots, or do the proprietary robots have something to hide? What kind of insults can we expect? Your father was a code monkey and your mother got her card punched by a UNIVAC!

Re:Open Source? (0)

Anonymous Coward | more than 8 years ago | (#15275988)

Aren't you afraid this poor open source robot will get exploited by the other robots...?
There is no PriestBot.

Heh.

I know, I know, that was in bad taste.
I apologize in advance.

Re:Open Source? (1)

Kingduck (894139) | more than 8 years ago | (#15276177)

PriestBot... Is that trademarked and if so, can I borrow the phrase to use at work?

Re:Open Source? (0)

Anonymous Coward | more than 8 years ago | (#15275996)

Patents kill johnny 5....

Hey, let's hurry up on this robot so we can use a 'patents kill little children' line of defense! :)

names (3, Funny)

hyperstation (185147) | more than 8 years ago | (#15275857)

babybot? robocub? fire your marketing people!

Re:names (1)

arrrrg (902404) | more than 8 years ago | (#15276159)

babybot? robocub? fire your marketing people!

Don't fire them, give them a bonus! If they had picked some other boring name, do you really think the article would have, e.g., made it on /.? The name might very well be the deciding factor in their getting continued funding (as sad as that may be).

...and Wii! (1)

tepples (727027) | more than 8 years ago | (#15277200)

If they had picked some other boring name, do you really think the article would have, e.g., made it on /.?

But will Robocub want to play with its Wii?

May? (5, Funny)

Kangburra (911213) | more than 8 years ago | (#15275862)

may mean that such machines can never become as intelligent as us

They don't know and they're playing with it. Have they even seen the Matrix??

even worse still (0)

Anonymous Coward | more than 8 years ago | (#15276024)

...what happens if it gets ghost hacked and wakes up in a prison cell with its pants down?

'Where am I ?'

Dude (4, Funny)

Umbral Blot (737704) | more than 8 years ago | (#15275869)

A one armed baby bot? That's disturbing on so many levels.

Re:Dude (4, Funny)

rolfwind (528248) | more than 8 years ago | (#15275911)

It doubles as a slot machine....

Re:Dude (1)

mobby_6kl (668092) | more than 8 years ago | (#15276233)

>It doubles as a slot machine...

And a hooker! In fact, forget the slot machine!

Re:Dude (1)

TheRaven64 (641858) | more than 8 years ago | (#15276404)

In the book The Godwhale (currently out of print, but worth reading if you can get hold of a copy) there is a slot machine that pays out by stimulating the pleasure centres of the player's brain.

Re:Dude (1)

Locke03 (915242) | more than 8 years ago | (#15279862)

So that means winning is like taking candy from a baby right?

Re:Dude (1)

knn03 (967441) | more than 8 years ago | (#15277661)

It wasn't me, it was the one armed babybot!

Squash (1)

Kangburra (911213) | more than 8 years ago | (#15275880)

LOL When "babybot" goes to grap the ball [unige.it] watch how fast he gets his hand out of the way!

Obviously babybot doesn't know it's own strength! LOL

Obligatory (0)

Anonymous Coward | more than 8 years ago | (#15275883)

I, for one, welcome our new Babybot overlords!

Babybot Learns Like You Did (1, Funny)

Quirk (36086) | more than 8 years ago | (#15275884)

So this bot is going to lie in its crib, thrashing its arms and legs, screaming at the top of its lungs, until someone picks it up, gives it a full juice bottle and a cookie, and walks it around trying desperately to amuse it?

Re:Babybot Learns Like You Did (1)

ian_mackereth (889101) | more than 8 years ago | (#15275915)

Like I did? Using meat?!

Re:Babybot Learns Like You Did (1)

hotdiggitydawg (881316) | more than 8 years ago | (#15276571)

Yeah - Tamagotchi has gone 4D.

On a more serious note, can anyone define "Open source hardware"? Short of publishing blueprints for the chips, how can you open source it? Publishing a parts list and assembly instructions is not open source...

Re:Babybot Learns Like You Did (1)

epp_b (944299) | more than 8 years ago | (#15277090)

Corrected:

So this bot is going to lie in its crib, thrashing its arm and legs, screaming at the top of its lungs, until someone picks it up, gives it a full juice bottle and a cookie, and walks it around trying desperately to amuse it?

There is more to a 2-year-old than walking (5, Interesting)

Flyboy Connor (741764) | more than 8 years ago | (#15275886)

"The goal is to build a humanoid 2-year-old child," explains Metta. This will have all of Babybot's abilities and the researchers hope it may eventually even learn how to walk.

A fun project, and potentially a good step on the road towards human-like intelligence. However, the "2-year-old" remark is again one of those far-fetched promises that is a loooooooooooooong way off. Making a robot-arm play with a rubber ducky is one thing, letting a robot understand what a rubber ducky is, is quite another. Making a robot crawl is one thing, but letting a robot crawl with a self-conscious purpose, again is quite another.

Fortunately, one of the researchers in TFA admits that 20 computers with a neural network on each is no replacement for a human brain. But the 2-year-old remark follows later, and is evidently entered as a way to generate funding. It sounds cool, but it is not what the result of this project will be. I assume the researchers know this all too well. Or perhaps they have no children of their own.

But just think... (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15275937)

Fortunately, one of the researchers in TFA admits that 20 computers with a neural network on each is no replacement for a human brain. But the 2-year-old remark follows later, and is evidently entered as a way to generate funding. It sounds cool, but it is not what the result of this project will be. I assume the researchers know this all too well. Or perhaps they have no children of their own.

Think of how Social Services could use something like this if it can act like a 2-year-old. Do they want to make sure you would be a good parent? They'll give you the robot for a week and based on the data they can then tell if you can be trusted (obviously assuming the robot is unhackable, or at least knows if it was hacked). If that doesn't generate government funding then I don't know what would!

Re:There is more to a 2-year-old than walking (1)

Threni (635302) | more than 8 years ago | (#15276021)

> However, the "2-year-old" remark is again one of those far-fetched promises
> that is a loooooooooooooong way off.

Also, how do we know how a baby learns? Perhaps a more accurate description would have another comma: "It learns, like a 2-year-old learns"?

Can you turn off a 2-year-old? (5, Insightful)

Richard Kirk (535523) | more than 8 years ago | (#15276068)

This particular experiment is not going to create a 2-year old. We have had robots and simulations of robots that have used neural nets to see if motor skills can be optimised using learning-like techniques. We have had recognition programs that do the same things that our eye and brain system do. This is an intelligent combination of the two.

However, just suppose, and then suppose, and then suppose...

So far, we can build computers that can simulate brain cells. There is nothing stopping us making a computer that has a similar complexity to the brain. We will have to mimic the strange mix of part-design, part randomness that brains are. Or maybe we can just throw more computing power at it, and stuff the brain doesn't have, like the ability to back up and regress. Sooner or later - probably later is my guess, but who knows? - we are going to come up with something that shows intelligence, and probably has intelligence.

African grey parrots are kept as pets. These are said to be as intelligent as a two-year-old. Some of them can understand sentences from a vocabulary of hundreds of words. They don't progress much beyond a two-year-old. And they are Not Like Us, so it's OK to keep them in cages. Apparently. Hmmm.

One day, someone is going to make something intelligent, and then turn it off, and there will be an outcry. Is anyone doing the thinking on the ethics of making it before making it?

Re:Can you turn off a 2-year-old? (1)

tallniel (534954) | more than 8 years ago | (#15276539)

One day, someone is going to make something intelligent, and then turn it off, and there will be an outcry. Is anyone doing the thinking on the ethics of making it before making it?

Yes, of course people are thinking about this. Philosophers, cognitive scientists and AI researchers all frequently discuss such subjects. But why would turning an "intelligent" computer off cause an outcry? A truly intelligent agent will likely need a substantial amount of memory. This suggests to me that it will involve a persistent disk-based store. So turning off such an agent wouldn't "kill" it; it'd be much more akin to going to sleep (especially if you used this downtime to perform maintenance and hardware upgrades). Simply turn it back on. You'd have to actually wipe or physically destroy the disks in order to "kill" the agent. In addition, the computational power required will probably mean that the agent is simulated on a cluster of machines, rather than any one machine. Thus turning off any one machine may lead to a performance degradation or loss of specific abilities (much like a stroke), but would be temporary rather than permanent.

Re:Can you turn off a 2-year-old? (1)

Flyboy Connor (741764) | more than 8 years ago | (#15276863)

But why would turning an "intelligent" computer off cause an outcry?

I guess it would if you turned the computer off without its consent. The question is how much say a computer has in determining what is done to it. I give a surgeon permission to turn me off for a while if an operation must be performed on me. If I am going to add extra memory to an intelligent computer for which it needs rebooting, I am going to politely ask if it would not mind being turned off for half an hour or so. And I expect the computer would agree, if I can make clear that it would benefit from the move. But I should not do so only because I think it is a good idea.

Re:Can you turn off a 2-year-old? (0)

Anonymous Coward | more than 8 years ago | (#15276615)

So far, we can build computers that can simulate brain cells.

Um, no we can't. We can't even simulate some shitty worm which has 300 neurons and is completely understood.

Re:Can you turn off a 2-year-old? (1)

4D6963 (933028) | more than 8 years ago | (#15276697)

And they are Not Like Us, so it's OK to keep them in cages.

Because we don't keep anyone *like us* in cages maybe?

Re:Can you turn off a 2-year-old? (1)

orasio (188021) | more than 8 years ago | (#15277068)

African grey parrots are kept as pets. These are said to be as intelligent as a two-year-old. Some of them can understand sentences from a vocabulary of hundreds of words. They don't progress much beyond a two-year-old. And they are Not Like Us, so it's OK to keep them in cages. Apparently. Hmmm.


But we keep two-year-olds of our own species in cages. Haven't you watched "Rugrats"? They were kept in cages!

Re:Can you turn off a 2-year-old? (1)

batemanm (534197) | more than 8 years ago | (#15278378)

African grey parrots are kept as pets. These are said to be as intelligent as a two-year-old. Some of them can understand sentences from a vocabulary of hundreds of words. They don't progress much beyond a two-year-old. And they are Not Like Us, so it's OK to keep them in cages. Apparently. Hmmm.

Most people keep small children in cages; they just normally refer to them as cribs, cots or playpens. Oh, and don't get me started on swaddling; okay, that is only up to about 5 months.

Re:Can you turn off a 2-year-old? (1)

mad_minstrel (943049) | more than 8 years ago | (#15278411)

The people who think about ethics are too busy thinking to actually invent something.

Re:Can you turn off a 2-year-old? (2, Insightful)

mrcaseyj (902945) | more than 8 years ago | (#15278930)

The difficulty is coming up with a consistent ethical policy that is reasonable, and works when relating to bacteria, plants, animals, humans, superior aliens, and machines. It seems obvious that all life including bacteria can't be given human rights. But where do you draw the line between bacteria and humans? If you decide that rats can be killed, experimented on, eaten, etc, then how do you argue that aliens or super intelligent machines shouldn't declare humans insignificantly better than rats, and decide to eat us. The best policy I've come up with is that we should respect the rights of anything that asks for its rights to be respected, and understands what it is asking. The asking part keeps bacteria and plants out of the protected class and the understanding part keeps tape players out. This policy provides grounds for a truce to prevent conflict between intelligent entities. I would also add some safety precautions to the policy, like protecting the rights of all humans from birth, whether they can ask for or understand their rights.

Interesting (1)

digital.prion (808852) | more than 8 years ago | (#15276264)

I question:

What happens when machines reach human-level thought and speech, or, better yet, surpass it? What about us then becomes obsolete?

Re:There is more to a 2-year-old than walking (2, Interesting)

vertinox (846076) | more than 8 years ago | (#15276403)

A fun project, and potentially a good step on the road towards human-like intelligence. However, the "2-year-old" remark is again one of those far-fetched promises that is a loooooooooooooong way off. Making a robot-arm play with a rubber ducky is one thing, letting a robot understand what a rubber ducky is, is quite another.

How do we know the 2-year-old does understand what a rubber ducky is?

Of course their brain may understand the rubber ducky is "that yellow thing... that feels a certain way... has that certain shape... and squeaks when I squeeze it..."

But do they really understand what it is in relation to other things, or have true understanding? I mean... its relationship to where we got it from: we bought it at a store... it was made in China... it's made of rubber or some type of synthetic... it floats because of its physical properties... and it bears a resemblance to a real-life duck (a 2-year-old child might not grasp that key concept yet... think of it like a CAPTCHA).

At that level a child's pattern recognition is quite limited, but it is at the stage where it will basically explode with the ability to relate spoken words to objects, actions and people.

Still... Understanding until you are older is more or less... This [object] is [this]. Later we learn [object] is [this] and does [action] which causes [result]. And then relationships of [object] with other [objects]. That is what usually throws machine intelligence into a loop. It can recognize patterns, but it can't relate those patterns to other patterns like a human can (at least right now).

Still, I certainly didn't have cognitive memories until I was older than 5 or even 7, when I started asking those annoying parental questions like "Why is the sky blue?" and "Where do people go when they die?".

Re:There is more to a 2-year-old than walking (1)

4D6963 (933028) | more than 8 years ago | (#15276683)

potentially a good step on the road towards human-like intelligence.

Dude, we are so far from a human-like AI, it's like taking a step towards the east and saying "it's potentially a good step on the road towards Moscow". I may be exaggerating a little tho.

Neural Networks (5, Insightful)

EnsilZah (575600) | more than 8 years ago | (#15275900)

The story mentions that the AI is made using neural nets.
I think it's amazing how such simple data structures can generate such complex behaviour.

In case anyone is interested, there's this pretty easy to understand tutorial on neural nets here:
http://www.ai-junkie.com/ann/evolved/nnt1.html [ai-junkie.com]
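
For anyone who wants a feel for just how simple the underlying data structure is, here is a minimal Python/NumPy sketch (purely illustrative; the layer sizes are made up and it has nothing to do with Babybot's actual code): a feedforward net is little more than a couple of weight matrices and a squashing function.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyNet:
    """A two-layer feedforward net: the whole 'data structure' is two weight matrices."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.w2 = rng.normal(scale=0.5, size=(n_hidden, n_out))

    def forward(self, x):
        hidden = sigmoid(x @ self.w1)     # weighted sums squashed into (0, 1)
        return sigmoid(hidden @ self.w2)  # same again for the output layer

net = TinyNet(n_in=3, n_hidden=5, n_out=2)
print(net.forward(np.array([0.1, 0.7, -0.3])))  # two output activations from an untrained net

All the interesting behaviour comes from how those numbers get adjusted during learning, not from the structure itself.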

Re:Neural Networks (1, Funny)

schotter (17230) | more than 8 years ago | (#15275922)

Linked page has blue links on a blue background... Sometimes it'd be nice to encounter some Natural Intelligence.

Re:Neural Networks (1)

Cicero382 (913621) | more than 8 years ago | (#15276026)

I, too, am astonished by some of the results of extremely simple "algorithms". It's called "emergent behaviour" (see http://en.wikipedia.org/wiki/Emergence [wikipedia.org] ). My favorite is the shoal of fish.

Neural nets running on a cluster of computers are quite a lot more complex. I can only hope that they're looking to improve the ANN paradigm to take us that little bit closer to real AI, rather than just using existing techniques to prove a point.

Anyway, I'm going to hunt around for more data on this. It looks interesting - anyone got any links to the source?

Re:Neural Networks (1)

MichaelSmith (789609) | more than 8 years ago | (#15276028)

I think it's amazing how such simple data structures can generate such complex behaviour.

I am amazed that you are amazed. Simple behaviour is at the root of _all_ complex systems: simple interactions between molecules give rise to climate. Cells in a cellular automaton, each a simple finite state machine, produce complex emergent behaviour.
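
To make that last point concrete, here is a sketch of Conway's Game of Life in Python/NumPy (wrap-around edges and a random starting grid are my own assumptions; it has nothing to do with the project's code): every cell applies the same two-line rule, yet gliders, oscillators and long-lived chaos emerge from it.

import numpy as np

def life_step(grid):
    # Count the 8 neighbours of every cell (toroidal wrap-around at the edges).
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

rng = np.random.default_rng(42)
grid = rng.integers(0, 2, size=(20, 20))  # random 20x20 field of 0s and 1s
for _ in range(10):
    grid = life_step(grid)
print(grid)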

Re:Neural Networks (1)

Cicero382 (913621) | more than 8 years ago | (#15276065)

Actually, I think he was referring to the "WOW!" factor that makes science such fun. I know I was.

No need to be superior about it :)

Re:Neural Networks (1)

hyfe (641811) | more than 8 years ago | (#15276151)

I think it's amazing how such simple data structures can generate such complex behaviour.

I, on the other hand, think it's pretty amazing how simplistic the behaviour these basic models can recreate is, and that they are still at the forefront of academic research. Simple statistical models outperform AI techniques on most classification problems any day. They bloody well shouldn't!

Re:Neural Networks (2, Interesting)

arrrrg (902404) | more than 8 years ago | (#15276179)

I'm an AI grad student, and I can tell you that (rather complex) statistical learning methods, which are considered part of AI, blow most simple methods (and neural nets) out of the water on most classification problems these days. In fact, I'm procrastinating from my project involving SVMs [wikipedia.org] right now to write this comment.

Perhaps by AI you're referring just to neural nets? While people get them to do some cool things, these (in the form you're used to seeing them in) are at the very very "dumb end" of AI, in that they don't exploit any of the prior knowledge about a problem. They're easy to understand and quite general, but for most specific problems there are much better AI techniques out there.
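
For a rough feel of what those statistical methods look like in practice, here is a sketch using scikit-learn (my own choice of library, not necessarily what the poster uses): an RBF-kernel SVM classifying a stock handwritten-digits dataset.

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy classification problem: 8x8 images of handwritten digits.
X, y = datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", gamma=0.001, C=10.0)  # RBF-kernel support vector machine
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))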

Re:Neural Networks (2, Insightful)

hyfe (641811) | more than 8 years ago | (#15276198)

I'm an AI grad student, and I can tell you that (rather complex) statistical learning methods, which are considered part of AI,

That's what I said :)

Perhaps by AI you're referring just to neural nets?

By AI I'm referring to something that is not inherently (too) bound by the abstractions required to make it work. E.g., how easily transferable is the experience from numbers to actual concepts? Various forms of regression analysis and stuff sure do wonders, but to be honest, they feel so inherently limited I don't see much hope for them. It's mathematicians playing with maths, the way scripts emulating AI in games are programmers playing with programming: getting neat/good-enough results, but still not making actual progress.

I guess all it means is that AI is hard, and I have way too much faith in the people that are supposed to be more intelligent than me.

Re:Neural Networks (1)

jacksonj04 (800021) | more than 8 years ago | (#15276308)

Is that AI in the sense that a Bayesian filter doesn't need to know what is trying to sell me stuff and what isn't; it just learns? Or would the data set need to be able to be transferred to things other than plain text?
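
To make the Bayesian-filter idea concrete, here is a toy sketch (scikit-learn, with a made-up four-message corpus standing in for real training data): the filter never sees a definition of "spam", only word counts and labels.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; a real filter would train on thousands of messages.
messages = [
    "cheap meds buy now limited offer",
    "win a free prize click here",
    "meeting moved to 3pm see agenda",
    "lunch tomorrow with the project team",
]
labels = ["spam", "spam", "ham", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)  # learns per-class word frequencies, nothing more
print(spam_filter.predict(["free meds prize offer"]))        # expected: ['spam']
print(spam_filter.predict(["agenda for the team meeting"]))  # expected: ['ham']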

Re:Neural Networks (2, Insightful)

Helios1182 (629010) | more than 8 years ago | (#15276846)

I think we, the AI community, are making actual progress. The problem is that it is much harder than people thought it would be back when the field first emerged.

Statistical models have done wonders for a lot of things. Classification, mentioned above, is one of the most obvious successes. Natural language processing is another surprising success of statistical methods. The use of hidden Markov models has solved a number of problems that were difficult using symbolic approaches (mostly dealing with syntax). Natural language understanding is still a long way off, of course.

Partially observable Markov decision processes have also been used a lot for learning in uncertain environments, with good success -- another technique from stats.

The problem with AI as a whole is that there is so much knowledge. It is really incredible how much we know. Not even in an academic sense: you know things will fall, how to balance, and all sorts of "common sense" knowledge. Modeling this in a symbolic way is very difficult because of the large amount of information. It is also hard to express. Formalisms such as first-order predicate calculus are often used, but they have limitations.

Statistical models are appealing because we do not have to manually write down knowledge. The machine can learn by itself (to some extent). This is probably why machine learning is one of the hottest topics right now.

So keep faith in the smart people trying to work on AI -- just don't expect true intelligent machines for some time yet. Advances are constantly being made in smaller domain-specific areas though.
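
To make the hidden Markov model point concrete, here is a toy Viterbi decoder in Python/NumPy for a three-tag, three-word "language" (all probabilities are invented for illustration; real taggers estimate them from annotated corpora).

import numpy as np

states = ["DET", "NOUN", "VERB"]          # hidden states: coarse part-of-speech tags
words = ["the", "robot", "learns"]        # observable words

start = np.array([0.6, 0.3, 0.1])         # P(first tag)
trans = np.array([[0.1, 0.8, 0.1],        # P(next tag | DET)
                  [0.1, 0.2, 0.7],        # P(next tag | NOUN)
                  [0.4, 0.5, 0.1]])       # P(next tag | VERB)
emit = np.array([[0.9, 0.05, 0.05],       # P(word | DET)
                 [0.05, 0.8, 0.15],       # P(word | NOUN)
                 [0.05, 0.15, 0.8]])      # P(word | VERB)

def viterbi(obs):
    """Most likely tag sequence for a list of word indices."""
    n, T = len(states), len(obs)
    delta = np.zeros((T, n))               # best probability of any path ending in each state
    back = np.zeros((T, n), dtype=int)     # which previous state that best path came from
    delta[0] = start * emit[:, obs[0]]
    for t in range(1, T):
        for j in range(n):
            scores = delta[t - 1] * trans[:, j]
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores[back[t, j]] * emit[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]     # best final state, then walk backwards
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi([words.index(w) for w in ("the", "robot", "learns")]))  # -> ['DET', 'NOUN', 'VERB']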

Re:Neural Networks (1)

Zaphod2016 (971897) | more than 8 years ago | (#15276219)

If you are still reading this thread, 2 quick Q's:

Which school?

and

Would you recommend it?

(B.S. shopping for grad schools)

Re:Neural Networks (0)

Anonymous Coward | more than 8 years ago | (#15276249)

I go to UC Berkeley; it comes highly recommended. Maybe it doesn't have quite the funding of the other members of the big four [usnews.com], but it has excellent, mostly very friendly faculty and students, not to mention the Bay Area is a great place to live.

Re:Neural Networks (1)

maxjenius22 (560382) | more than 8 years ago | (#15277096)

I recommend Carnegie Mellon for the same subject. Try the Center for Automated Learning and Discovery.

http://www.ml.cmu.edu/ [cmu.edu]

Re:Neural Networks (0)

Anonymous Coward | more than 8 years ago | (#15276733)

I'd wager my brain can outperform your SVM at most relevant pattern recognition tasks.

Re:Neural Networks (1)

maxjenius22 (560382) | more than 8 years ago | (#15277886)

If by "relevant" you mean "relevant to humans" you would often be right by definition, since classifier performance is often measured relative to a human baseline. SVM is a hell of a lot faster as classifying though.

However, I have known SVM to outperform humans on some tasks, such as identifying genes correlated with cancer diagnoses.

Re:Neural Networks (1)

maxjenius22 (560382) | more than 8 years ago | (#15277106)

Wouldn't Logistic Regression [wikipedia.org] be faster and produce equally good results? At least, with text classification that usually seems to be the case.
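
One way to find out is simply to time both on the same features. A rough sketch with scikit-learn (this downloads the 20 newsgroups corpus, and the exact numbers will vary with machine and library version; it is only meant to show the shape of such a comparison):

import time
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Two-class text classification on TF-IDF features.
data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X = TfidfVectorizer().fit_transform(data.data)
y = data.target

for clf in (LogisticRegression(max_iter=1000), LinearSVC()):
    t0 = time.time()
    clf.fit(X, y)
    elapsed = time.time() - t0
    print(type(clf).__name__, "train time: %.2fs" % elapsed,
          "train accuracy: %.3f" % clf.score(X, y))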

Re:Neural Networks (0)

Anonymous Coward | more than 8 years ago | (#15276639)

That tutorial is ridiculous. It doesn't even do vanilla backpropagation, which is not that great of an architecture to begin with. Neurons are not reproducing populations of cells that try out random weights that simply die off if they are not fit.
It's clear the author is not trained in neural networks. If you really want intros to non-biological neural networks, buy one of these books:

Pattern Classification - Duda Hart Stork
Neural Networks for Pattern Recognition - Chris Bishop
Neural Networks: A Comprehensive Foundation - Simon Haykin

Keep in mind that many neural net architectures (backprop, RBFs...) violate known properties of biological neurons. Just because there are simple processing units wired together doesn't make it biologically plausible. E.g., (most) neurons aren't DC - they spike - only time will tell if these properties are necessary.
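
For reference, "vanilla backpropagation" in the sense used above is roughly the following: a bare-bones Python/NumPy sketch on XOR with a squared-error loss (layer size, learning rate and iteration count are arbitrary choices, and none of this is taken from the books listed).

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR: the classic toy problem for demonstrating backprop.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer; the weights and biases are the entire model.
w1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ w1 + b1)
    out = sigmoid(h @ w2 + b2)
    # Backward pass: chain rule through the sigmoids for a squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ w2.T) * h * (1 - h)
    # Plain gradient descent on every parameter.
    w2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    w1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # usually close to [[0], [1], [1], [0]]; an unlucky init can stall, so rerun with another seed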

Artificial Intelligence (-1, Troll)

bmo (77928) | more than 8 years ago | (#15275932)

AI is bogus.

--
BMO

Babybot (1)

EnsilZah (575600) | more than 8 years ago | (#15275941)

How long until it learns how to frag?

Re:Babybot (1)

Gax (196168) | more than 8 years ago | (#15277634)

How long until it learns how to frag?


Never. The ultimate quake setup requires two hands - one on the mouse and the other on the keyboard.

Cmdr Data (1)

mikesd81 (518581) | more than 8 years ago | (#15275951)

Is this the offspring of Data and Tasha Yar?

Re:Cmdr Data (0)

Anonymous Coward | more than 8 years ago | (#15275978)

I didn't know that was possible. But "Fully Functional" does mean fully functional.

The 'conscience' of the BabyBot (0)

rpiquepa (644694) | more than 8 years ago | (#15275984)

This project was born from an engineering approach to the problem of what consciousness is. It is with this problem in mind that the European engineers designed BabyBot. And their experiments, while promising, don't entirely solve the problem of defining what consciousness is. So they're now designing new robots like the iCub. Read more for additional details and pictures [zdnet.com] of BabyBot and its successor, the iCub robot.

Re:The 'conscience' of the BabyBot (1)

smoker2 (750216) | more than 8 years ago | (#15276223)

The 'conscience' of the BabyBot
This project was born from an engineering approach to the problem of what is consciousness.
Conscience may be a root of the word consciousness, but in general usage, it usually denotes moralistic sentiment:
As science means knowledge, conscience etymologically means self-knowledge . . . But the English word implies a moral standard of action in the mind as well as a consciousness of our own actions.

--Whewell.
1913 Webster

I don't think anybody's expecting to develop an artificial intelligence that has moral values, at least not for a very long time.

Wow. (4, Interesting)

Dare (18856) | more than 8 years ago | (#15275987)

I wonder what happens when this bot discovers that it's a physical object, and can try and manipulate itself.

(... yeah, baby robot masturbation... but no, seriously...)

What is Open Source Hardware? (1)

Ohreally_factor (593551) | more than 8 years ago | (#15276017)

From TFA:

"Everything about it will be open source, including the hardware, so anyone can use it in their own work," Metta says.

I'm unclear on this concept. Do they mean off the shelf commodity parts? Blueprints so that you can machine the parts yourself, if you have a lathe? Or is open source going to become a euphemism like "five finger discount"?

Seriously, what is Open Source Hardware, if it's not just a sorry misuse of a buzzword?

Re:What is Open Source Hardware? (1)

ichigo 2.0 (900288) | more than 8 years ago | (#15276101)

I'm guessing they'll release the blueprints.

Re:What is Open Source Hardware? (2)

Zaphod2016 (971897) | more than 8 years ago | (#15276441)

Seriously, what is Open Source Hardware, if it's not just a sorry misuse of a buzzword?

Valid point, but please don't let that detract from the benefits of this. As a part-time "tinkerer" myself, I for one am happy to know that not *everyone* in this world is patent-obsessed.

After all, how can we stand on the shoulders of giants when those same giants keep stepping on the little guy?

Getting people interested (0)

ecorona (953223) | more than 8 years ago | (#15276090)

To get people interested in cyborgs or androids we must make them look human. We should start by making furry versions of the Aibot dog or whatever it's called.

Its about anthropomorphism (1)

mustafap (452510) | more than 8 years ago | (#15276412)

>To get people interested in cyborgs or androids we must make them look human

There is another side to doing that: when something looks human, we are more likely to attribute human-like qualities to its actions. Anthropomorphism. Works with animals too, i.e. Aibo.

MIT were doing some great work on this, and social computing, at the MIT Media lab in Dublin before it was shut down. I was lucky enough to see some of their ideas in action.

Real shame to see them go, I hope the work gets picked up elsewhere.

how many dead babybots... (0, Troll)

RealGrouchy (943109) | more than 8 years ago | (#15276103)

It learns by trial and error, eh?

How many dead babybots does it take to learn to use Windows?

- RG>

Re:how many dead babybots... (1)

prencher (971087) | more than 8 years ago | (#15276161)

1. The other 30 died trying to install linux.

Sounds perfect to me... (1)

Zaatxe (939368) | more than 8 years ago | (#15276149)

It doesn't need diapers and doesn't cry during the night. Put a second arm on it and tell me when it hits the market, I'm buying one!

Can you say.... (1)

PTK502 (924839) | more than 8 years ago | (#15276156)

OK, call me a sci-fi nut, but who on /. isn't? But can you say Cylon? Hmm, first we start with Babybot, then crawlingbot, then a Walking Chrome Toaster, then 12 new human-like models. All believing that their creator is flawed, and now believing in our God or Gods, depending on your religion.....

Simple algorithm (3, Funny)

vagabond_gr (762469) | more than 8 years ago | (#15276165)

It experiments and knocks things over until it can pick them up for itself.

You don't need an advanced AI to do that; the algorithm goes like this:


while(1) {
    throw_toy();
    while(!toy_is_back())
        cry_loud();
}

Re:Simple algorithm - ver 2.0 (1)

Zaphod2016 (971897) | more than 8 years ago | (#15276447)

while(1) {
    throw_toy();
    while(!toy_is_back())
        cry_loud();

    if (mom_leaves) { runsilent(); }
}

Trust me. Robot or not, it's the oldest trick in the book.

Re:Simple algorithm (1)

DigiShaman (671371) | more than 8 years ago | (#15279873)

With true AI, it learns based on example and stores such memories as algorithms. Over time, such algorithms can be modified and honed for specific skill sets. While you could design something that acts like AI with a dictionary of predefined algorithms, it's still not AI...it's an illusion of AI. If you ask me, that defeats the purpose of AI research.

Video (1)

mapkinase (958129) | more than 8 years ago | (#15276370)

Has anyone seen the video?

I have seen 2 (all?) of them, and I noticed that the bot has to rest its hand on the surface every time it fails the task, before attempting again. Why does it have to do that?

Also: at first I noticed that the bot drops objects into the hand of the researcher. But later I noticed that it just drops them in a particular place (second video, pile of objects on the right at the level of the baby table). I guess the researcher sticks his hand out so the object drops into it, making the behavior of the robot hand look as if it deliberately drops the object into his hand, thus creating a wrong impression. I would advise making the presentations more scientific and less marketing-oriented next time, so people could learn from them.

Re:Video (1)

dlcarrol (712729) | more than 8 years ago | (#15277606)

I'm just guessing, but it's likely b/c you have to do certain things for safety when working with robots, even (especially?) in research. Getting positioning like that is very, very hard without constant homing and range checking. I imagine it would also be difficult to "learn" unless you tried it the same way until you got it right.

Making babies (4, Funny)

thewiz (24994) | more than 8 years ago | (#15276371)

"The goal is to build a humanoid 2-year-old child," explains Metta.

There is a far easier and more pleasant way to create a child.
Unfortunately, it requires two years, nine months, and three minutes.

But can it say... (1)

corychristison (951993) | more than 8 years ago | (#15276373)

Kiss my shiny metal ass!

-- had to. :-P

fully open source? (1)

Bing Tsher E (943915) | more than 8 years ago | (#15276392)

the next step is to build a fully humanoid version that's open source in both software and hardware."

You mean, one where the microcode for any processor included in it is published openly, and the masks used at the chip foundry are also openly published? Or if it's an FPGA 'Free Hardware' design, all design details of the FPGA silicon are disclosed, and all of the code for the FPGA development software is open source (good luck)?

Nice but ... (1)

formant (852164) | more than 8 years ago | (#15276415)

...does it have the memory of an elephant [londonist.com] ??

ouch (2, Funny)

icepick72 (834363) | more than 8 years ago | (#15276523)

That baby would be tough on the birth canal.

Pain (2, Interesting)

Onuma (947856) | more than 8 years ago | (#15276595)

I don't believe they'll truly make a human-esque robot until they can make it understand pain.

Sometimes a child needs to have a hand across his/her hiney to teach him. What if the bot touches a hot stove and melts the crap out of its hand - without pain it would not know the difference.

Let a robot go through that, and then they might truly begin to learn like a human being.

Re:Pain (1)

payndz (589033) | more than 8 years ago | (#15277018)

Simpsons did it: "Why? Why was I programmed to feel pain?"

I'm still a baby... (1)

Harlow_B_Ashur (35202) | more than 8 years ago | (#15276737)

you insensitive clod.

In other news... (1)

BumpyCarrot (775949) | more than 8 years ago | (#15276787)

A disturbing number of murders have occurred in the LIRA labs at the Genoa University. Victims appear to have been strangled, but a lack of fingerprints makes identification of the suspect problematic.

Closed Source Hurting Public Good (1)

Bastian227 (107667) | more than 8 years ago | (#15276792)

I applaud their work towards an open-source model. The model this is derived from--aka "human"--has been closed source since its creation almost 6000 years ago. The copyright expired long ago, but its Creator is unwilling to open its source. Many people cannot find the Creator, and some even doubt He is still around to release the source.

The human model has proven difficult to reverse engineer. We need its source to help fix bugs. For example, it's susceptible to viruses in its current state.

So, I welcome the open-source model. It is a giant step in the right direction. I hope one day we can replace all closed-source models with their open-source equivalents.

Re:Closed Source Hurting Public Good (1)

evilneko (799129) | more than 8 years ago | (#15279062)

if I still had mod points...

Wow! (1)

KimmoKM (833851) | more than 8 years ago | (#15277010)

A droid which is able to RTFM and STW? It seems droids are now more intelligent than most humans.

I see but.. (1)

IlliniECE (970260) | more than 8 years ago | (#15277069)

Can you teach it to slam doors when it's angry?

Obligatory (1)

epp_b (944299) | more than 8 years ago | (#15277070)

But will it run Linux?

Yikes, I just looked at the picture! (1)

epp_b (944299) | more than 8 years ago | (#15278566)

Why did they give it Mick Jagger's lips and Keith Richards' eyes?