
Robots Successfully Invent Their Own Language

CmdrTaco posted more than 2 years ago | from the killall-humans dept.

Robotics 159

An anonymous reader writes "A group of Australian researchers has managed to teach robots to do something that, until now, was the preserve of humans and a few other animals: they've taught them how to invent and use spoken language. The robots, called LingoDroids, are introduced to each other. In order to share information, they need to communicate. Since they don't share a common language, they do the next best thing: they make one up. The LingoDroids invent words to describe areas on their maps, speak the word aloud to the other robot, and then find a way to connect the word and the place, the same way a human would point to themselves and speak their name to someone who doesn't speak their language."
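The exchange described is essentially a "naming game." As a rough illustration only (the class, method names, and syllable list below are invented for this sketch; the actual LingoDroid software is not shown in the article), two agents can converge on a shared word for a place like this:

```python
import random

class Agent:
    """Toy stand-in for a LingoDroid: maps invented words to map cells."""
    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.lexicon = {}  # word -> location

    def invent_word(self):
        # Stand-in for whatever word-generation scheme the robots use.
        syllables = ["pi", "ze", "ku", "zo", "ro", "du", "ka", "vu"]
        return "".join(self.rng.choice(syllables) for _ in range(2))

    def name(self, location):
        # Reuse an existing word for a known place, else invent one.
        for word, loc in self.lexicon.items():
            if loc == location:
                return word
        word = self.invent_word()
        self.lexicon[word] = location
        return word

    def hear(self, word, location):
        # Adopt the speaker's word for the jointly attended location.
        self.lexicon[word] = location

speaker, hearer = Agent(), Agent()
here = (3, 5)                # a cell on the shared map
word = speaker.name(here)    # speaker invents and "speaks" a word
hearer.hear(word, here)      # hearer links the word to the place
assert speaker.lexicon[word] == hearer.lexicon[word] == here
```

The "pointing" step in the summary is what grounds the word: both agents attend to the same location while the word is spoken, so the association transfers without any prior shared vocabulary.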

159 comments

I for one... (0, Funny)

Anonymous Coward | more than 2 years ago | (#36167256)

nah...

oblig (0, Offtopic)

SlashdotWanker (1476819) | more than 2 years ago | (#36167268)

I, for one, welcome our robotic overlords

Re:oblig (-1)

Anonymous Coward | more than 2 years ago | (#36167428)

dammit, you beat me to this. hahahahahahahaha

Re:oblig (0)

Anonymous Coward | more than 2 years ago | (#36168808)

Judgement Day just got a little closer.

better link (3, Informative)

roman_mir (125474) | more than 2 years ago | (#36167290)

better link [ieee.org]. Also, I didn't realize it at first, but this is the person mostly responsible for it [uq.edu.au]. She is from Australia, and she decided to do this. I wonder what the catch with her is...

Re:better link (1)

smelch (1988698) | more than 2 years ago | (#36167528)

The catch is there is one of her and hundreds of nerds practically rolling their dicks out at her feet.

Re:better link (1)

roman_mir (125474) | more than 2 years ago | (#36167564)

the very idea of that picture is disturbing. Would she then step on the dicks as she tried to walk anywhere? What about the balls? Ouch!

Re:better link (1)

smelch (1988698) | more than 2 years ago | (#36167652)

What, you're not into that sort of thing?

Re:better link (1)

roman_mir (125474) | more than 2 years ago | (#36168130)

I came up with something that is not sensible to say now, but I think it should be said anyway; it's important history in the making:

If a man can be a womanizer, can a man be a mananizer? An onanizer? A nonanizer?

Always thinking.

Re:better link (0)

donotlizard (1260586) | more than 2 years ago | (#36167682)

I'm pretty sure she transferred from Toronto Institute of Technology and Science.

Re:better link (0)

roman_mir (125474) | more than 2 years ago | (#36167728)

I don't see it. Here is some info [uq.edu.au] Also from this page I now understand the catch.

Re:better link (0)

Anonymous Coward | more than 2 years ago | (#36167916)

Woosh.

Re:better link (1)

roman_mir (125474) | more than 2 years ago | (#36167954)

Oh, please, like I am supposed to pay attention to every capitalization that takes place on this site.

By the way, if you look at that page and realize what I mean by "I understand the catch", then the 'woosh' would be over your head.

Re:better link (1)

DJLuc1d (1010987) | more than 2 years ago | (#36168128)

This thread is now about stalking some nerd girl.

Re:better link (2)

roman_mir (125474) | more than 2 years ago | (#36168258)

yeah, we used to talk about Natalie Portman here. Either we are out of grits or our standards are slipping or maybe it's the age showing.

Re:better link (-1)

Anonymous Coward | more than 2 years ago | (#36168386)

An ugly one at that...

Re:better link (-1)

Anonymous Coward | more than 2 years ago | (#36168132)

The catch is her face. There's a picture on the same link you posted.

Re:better link (1)

dsleif (2163084) | more than 2 years ago | (#36168348)

Ah come on, I'm sure she will measure up to the standards of many of /. commenters. Let them have their fun :)

Misleading headline (1)

Anonymous Coward | more than 2 years ago | (#36167322)

The headline (and summary) are misleading. Here's a more accurate headline:

"Robots programmed to carry out a specific task perform said specific task"

It sounds much less impressive that way - and it is. It's still interesting, but don't infer anything from the whole thing that can't logically be inferred from it.

Re:Misleading headline (5, Insightful)

Moryath (553296) | more than 2 years ago | (#36167514)

After looking through the research, you're correct - the article's claims are very much overblown.

Do they "invent" random words for places? Yes, by stringing together random characters via a preprogrammed method. Do they "communicate" this to another robot? Yes.

Is the other robot preprogrammed to (a) accept pointing as a convention and (b) receive information in the "name, point to place" format? Yes.

They share a common communication frame. That's the "language" they communicate in. And it was preprogrammed into them. That they are expanding it by "naming places" is amusing, but it's hardcoded behavior only; they could just as easily have been programmed to select an origin spot, name it "Zero", and proceed to create a north-south/east-west grid of positive and negative integers and "communicate" it in the same fashion.
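The hardcoded alternative described above is easy to make concrete. A minimal sketch (function name and label format invented for illustration):

```python
def grid_name(x, y):
    """Hardcoded place-naming: the origin is "Zero", and every other
    cell is labeled by its signed east-west/north-south offsets."""
    if (x, y) == (0, 0):
        return "Zero"
    ew = f"E{x}" if x >= 0 else f"W{-x}"
    ns = f"N{y}" if y >= 0 else f"S{-y}"
    return ew + ns

print(grid_name(0, 0))   # Zero
print(grid_name(2, -3))  # E2S3
```

Either scheme, random words or grid coordinates, gives the robots unambiguous shared labels for places; the difference is only in how the label string is manufactured.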

Re:Misleading headline (1)

fbjon (692006) | more than 2 years ago | (#36167694)

You raise a good point. However, there are dead languages that we humans are unable to figure out, even though we're the same species.

If you don't hardcode anything, it's even harder: how do you make up a new language, grammar and all, without using any prior language or knowledge? You basically have to figure out a general algorithm for bootstrapping communication from scratch.

Re:Misleading headline (0)

Anonymous Coward | more than 2 years ago | (#36167996)

I know a guy who was doing some research involving nurturing in robots. They had RF transmitters hooked up to part of a neural network and were supposed to learn how to use this to communicate things to each other. I'm not sure how far that research got, though (I heard about it in the winter of 2010 while it was in progress).

Re:Misleading headline (0)

Anonymous Coward | more than 2 years ago | (#36168848)

I find it unlikely that language is genetically hardcoded into humans; babies can learn very quickly even though they have no knowledge of any other language beforehand. What they do is 'empathic learning': trying to guess what the other one is talking about and associating it with the words. Which is exactly what these robots do; they were taught empathy, but not language.

Re:Misleading headline (1)

Aryden (1872756) | more than 2 years ago | (#36167752)

Yes, but in essence, we as humans are "programmed" to do exactly the same. Our parents will point at pictures, objects, people, etc. and make sounds, which are then converted by our brains into words that label the image, object, person, etc.

Re:Misleading headline (1)

Moryath (553296) | more than 2 years ago | (#36168230)

We still have to pick up on the meaning of "pointing."

In some cultures, that's not polite to do [manataka.org].

So no, "pointing" isn't hardwired. It's something babies will pick up if their parents do it, perhaps, but it's not hardwired. About the only thing hardwired is babies crying for attention.

Re:Misleading headline (1)

Aryden (1872756) | more than 2 years ago | (#36168280)

Which is another way of them pointing to themselves. Either way, it's a method for drawing attention to a specific object, place, or thing.

Re:Misleading headline (0)

Anonymous Coward | more than 2 years ago | (#36168392)

Not even crying is hard wired. It's a learned association that when they do cry, they get attention. My daughter never learned to cry for attention because she never had to. She would make noises like "eh" and we'd respond. She only ever cries when she hurts herself. I could believe that crying out of pain is hardwired.

Re:Misleading headline (0)

Anonymous Coward | more than 2 years ago | (#36168742)

Indeed, these robots have nothing to do with language; what they do is define places to each other by showing them. Which would still be cool if they didn't go for the sensationalism.

Re:Misleading headline (0)

Anonymous Coward | more than 2 years ago | (#36168826)

This is exactly my problem with artificial intelligence in general. The idea of robots that can mimic human behavior and can give a convincing appearance of reacting to things the way a human would is fascinating, but it's all just pre-programmed instructions. When the instructions are complex enough it gets more difficult to understand how it works, and dynamic programming can cause behaviors you didn't anticipate, but none of this means the robots can turn against their creators or learn things they're not programmed to learn. If a robot does something against the wishes of its creator, it's a bug, not an inherent flaw in the concept.

Robots (read: computers) cannot actually think for themselves. They can be programmed to give the appearance of thinking for themselves, which is awesome and exciting, but it's an illusion, a trick.

Re:Misleading headline (1)

Nethemas the Great (909900) | more than 2 years ago | (#36167838)

There's more to what they've done than you are perceiving. The robots running around following their "instructions" are proving out a solution their creators invented for a problem under a set of constraints: namely, using auditory communication only, develop a means of sharing a common understanding of a physical space. This is a step towards developing sophisticated communication capabilities not just with other robots, but more importantly with humans, using their protocols rather than traditional machine protocols.

R2D2 (0)

Anonymous Coward | more than 2 years ago | (#36167372)

R2D2 was doing this 30 years ago.

Linguo (0)

Anonymous Coward | more than 2 years ago | (#36167374)

IS dying...

better article (0)

Anonymous Coward | more than 2 years ago | (#36167398)

Is there an article out there with more information and fewer jokes?

No they did not. (1)

Lumpy (12016) | more than 2 years ago | (#36167422)

They learned how to communicate meaning. The researchers taught them the words; the computers on board did not invent the words they used. In fact, a computer would not do something as dumb as a spoken word, but a series of tones or even FSK.

Re:No they did not. (1)

Patch86 (1465427) | more than 2 years ago | (#36167580)

The researchers taught them the words; the computers on board did not invent the words they used.

My understanding of the article is that the robots did exactly that. The programmers put together two robots that they had intentionally not given any specific words (although presumably the basic rules for how to form words must have been given, which you might see as the analogue of humans having a physically limited vocal range to play with). The robots then trial-and-errored their way through "conversations" until they had established a common set of words for locations, directions, etc.

If you have a better article (TFA was pants) that contradicts that, I'd be thrilled for a link.

Re:No they did not. (2)

Quantus347 (1220456) | more than 2 years ago | (#36167630)

They learned how to communicate meaning. The researchers taught them the words; the computers on board did not invent the words they used. In fact, a computer would not do something as dumb as a spoken word, but a series of tones or even FSK.

When it needs a new word/label, it generates it as a random combination of pre-programmed syllables that play the role of phonemes for the new language. English, for example, uses only about 40 of them, but we combine them to make all the various words we know how to pronounce properly. It may not be a particularly sophisticated language, but I think it still counts well enough.
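A sketch of that generation scheme, assuming a simple consonant-vowel syllable inventory (the robots' actual syllable set is not specified here, so the inventory below is hypothetical):

```python
import random

# Hypothetical inventory playing the role of phonemes; the real
# LingoDroid syllable set is not given in the article.
CONSONANTS = "bdfghjklmnprstvxyz"
VOWELS = "aeiou"

def make_word(n_syllables=2, rng=random):
    # Each syllable is consonant + vowel, so n_syllables=2 yields
    # CVCV words like "kuzo" or "ropi".
    return "".join(rng.choice(CONSONANTS) + rng.choice(VOWELS)
                   for _ in range(n_syllables))

print(make_word())  # e.g. "fexo"
```

This matches the pattern of the words quoted elsewhere in the thread (pize, rije, kuzo, ropi, ...): four letters, alternating consonant and vowel.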

Re:No they did not. (1)

Nethemas the Great (909900) | more than 2 years ago | (#36168018)

Actually, they did "invent" the words; however, these robots were constrained to using human-derived syllables. The goal was not to produce a machine-efficient, machine-natural language, but rather one that is compatible/aligned with human speech and understanding. The end goal of this line of research is to give machines the ability to have meaningful communication with humans, absent a query/response translation mechanism limited to preprogrammed states.

Re:No they did not. (0)

Anonymous Coward | more than 2 years ago | (#36168084)

Depends on whether the robots wanted a third party to understand what they are saying.

Can you have an opinion that isn't short sighted or wrong, Tim?

Re:No they did not. (1)

brokeninside (34168) | more than 2 years ago | (#36168116)

Did they learn how to communicate meaning?

It sounds to me like they were programmed to ostensively code and decode tokens. If it were the case that meaning is entirely reducible to ostensive definitions, then it is the case that they learned to communicate meaning. I'm not certain that many (if any) linguists, philosophers of language, or psychologists hold to an ostensive theory of language these days. Wittgenstein pretty much exploded the ostensive theory of language, in such a way that no one takes it seriously anymore.

Not language (1)

CRCulver (715279) | more than 2 years ago | (#36167434)

This is more about the creation of a community hash table than language. Language allows the expression of contradictory ideas and ambiguity, e.g. Chomsky's famous "Colorless green ideas sleep furiously". These robots are just connecting locations to variables.

Re:Not language (1)

Zerth (26112) | more than 2 years ago | (#36167848)

That they can distinguish that "random_syllables" means "this point" instead of "50 units of movement" or "north" or "left" is moderately impressive.

Australian Lingo Robots (3, Funny)

Anonymous Coward | more than 2 years ago | (#36167442)

A lingo ate my baby!

Prescriptive language (1)

myoparo (933550) | more than 2 years ago | (#36167444)

I wonder how long until a prescriptivist control-freak robot develops to rule over the language and erase all usage that it disagrees with.

Is it machine language? (1)

digitaldc (879047) | more than 2 years ago | (#36167454)

Is it machine language? Because all I hear them saying are 'ones and zeroes.'

Re:Is it machine language? (4, Insightful)

roman_mir (125474) | more than 2 years ago | (#36167680)

pize, rije, jaya, reya, kuzo, ropi, duka, vupe, puru, puga, huzu, hiza, bula, kobu, yifi, gige, soqe, liye, xala, mira, kopo, heto, zuce, xapo, fili, zuya, fexo, jara.

The 'language' seems to be limited to four-letter words, each with a consonant, a vowel, another consonant, and another vowel. It does not look like a language at all: there is no grammar, nothing except four-letter words used as hash keys pointing at areas on a map.

Re:Is it machine language? (0)

Anonymous Coward | more than 2 years ago | (#36168110)

There is grammar, but it is not created by the robots. The protocol and communication format is the grammar, and that was created by humans. To draw an analogy (without cars), it's as mundane as discovering a new species or a new star and agreeing on a name. Wake me up when robots create their own grammar.

Re:Is it machine language? (1)

roman_mir (125474) | more than 2 years ago | (#36168224)

No, I think grammar is about the legality of the sequences of letters within a sentence, not about the sequence of exchanges of sentences between communicating parties.

Re:Is it machine language? (0)

Anonymous Coward | more than 2 years ago | (#36168274)

What? Shit. Only a hash of four per word, tops. Darn.

Re:Is it machine language? (0)

Anonymous Coward | more than 2 years ago | (#36168308)

Human language began with humans associating sounds they made with objects. Afterwards, they associated sounds with conceptual things like actions. It's only when they combined objects with actions into one meaning that grammar developed, for consistency and ease of understanding.

It probably took humans an insane number of years before such things as grammar developed, slowly passing each advancement on to the next generation.

You think robots can achieve something better than humans instantly? Of course this is just pre-programmed logic designed with this purpose in mind, so how much is cheating vs. how much is real adaptive logic is hard to say.

But to say it's not a language... a language is but a method to communicate, no matter the form of sound used. It's simply a primitive one at best.

Re:Is it machine language? (1)

roman_mir (125474) | more than 2 years ago | (#36168338)

However, in humans, languages developed out of need.

Do these machines need anything? Do they understand that they need anything?

Unless there is some need, that the machines are experiencing, understanding it and trying to solve it, they won't be developing anything more complex than hash keys to areas, exactly as programmed.

Re:Is it machine language? (0)

Anonymous Coward | more than 2 years ago | (#36168576)

True, the language is all nouns, and is thus very primitive, but most people learn nouns first because they are the easiest to teach. I would guess most languages start off as all nouns and grammar doesn't develop until later.

Why make them like us? (0)

Anonymous Coward | more than 2 years ago | (#36167466)

So why do we want to make robots as human as possible, other than just to prove we can do it? Aren't there enough humans on the planet already? Robots are our tools. Why do we want to make them our equals? Is anyone in robotics and AI at all concerned that we're on the path to possibly replacing ourselves? Evolution proceeds at a very slow rate by comparison. Genetic engineering and human augmentation seem to be running far behind overall computing advances. It seems more likely that we can make a machine far smarter and capable than a human in the next several decades than we can enhance humans to keep up. So what happens then?

Re:Why make them like us? (1)

newcastlejon (1483695) | more than 2 years ago | (#36167566)

I think the two main reasons are these:

We have other tools, and it's convenient to have robots able to use them as well as humans.

Debatably, having robots that are easier for humans to relate to will make it easier for the public to accept them. Perhaps as a side effect, if we have less trouble anthropomorphising them, there'll be less bloodshed (I hope not literal) when the sentient ones start asking us to extend the idea of human rights to them.

Re:Why make them like us? (1)

smelch (1988698) | more than 2 years ago | (#36167592)

When the robots are better than us, we adopt socialism and go on vacation. Everybody will command their robots; the robots won't think for themselves, but we won't have to do any work. At that point socialism makes sense and working is not needed.

Re:Why make them like us? (1)

GeorgeMonroy (784609) | more than 2 years ago | (#36167892)

Only the wealthy will have robots and then there won't be a need for most humans. The human population will dwindle.

Re:Why make them like us? (1)

smelch (1988698) | more than 2 years ago | (#36168012)

Did you miss the part about socialism? There are more poor people than rich people. I guess the rich could have private fleets of robots that suppress the human population, but somehow I think the socialism will come between the robot slaves and the robot armies.

Re:Why make them like us? (1)

GeorgeMonroy (784609) | more than 2 years ago | (#36168238)

No, I did not. But think about it a little more. They will make a few robots that will replace some workers. People will lose jobs. With fewer people working, where will the money for socialism come from? I don't know about you, but I have no faith that the wealthy will pay for socialism; if that were the case, it would already be like that. So people lose jobs and starve out. More robots will be made and more people will lose jobs. The poor will become fewer and fewer. The middle class will become smaller and smaller. More robots and fewer people. I am just not optimistic about socialism in this scenario; there would be nothing in it for the wealthy. So few will inherit the Earth. Robots will make more robots, and you will not need humans for anything other than procreation.

Re:Why make them like us? (0)

Anonymous Coward | more than 2 years ago | (#36167614)

Mostly because it's challenging.

However, among the useful results of robots with human-level capabilities would be mass-producing experts. It takes a long time to train a human to the point where they can be considered an expert in a non-trivial field. However, it's easy to copy the state of a robot's software and install it in a new body, thus making a new expert.

Also by figuring out how to make a system that acts human we gain potential insight into why we act the way we do.

As to the fear of replacing ourselves: well, I'm going to need replacing someday. Is it really any worse if I'm replaced by a robot than if I'm replaced by a younger human?

Modems training? (2)

vlm (69642) | more than 2 years ago | (#36167496)

From the summary, it sounds like the "language" is just a noun mapping. Very much like what my 14.4 modem did in 1993 over a phone line, when it came to an agreement with the modem on the other side about what voltage and phase pattern corresponded to the bitstream 0001 vs. 1010. In fact, my modem sounds like it had a more complicated language, because the modems implemented MNP4/MNP5 error correction. Admittedly, that required a lot of help from the humans typing in the "right" dialer strings, and of course the humans who wrote MNP4...

It might just be a bad summary of a summary of a summary of a summary, and the robots had developed interesting sentence structure and verb conjugations, direct and indirect objects, adjectives and adverbs, similes and metaphors, better than your average YouTube comment... Or maybe YouTube comments are actually being written by these robots; hard to say.

Typical Robot Research (1)

Trip6 (1184883) | more than 2 years ago | (#36167520)

So much robotics research is to make machines do what people already do. How self-centered. Most of the time this is not useful to solve real problems. But it does get funded, because those with the pursestrings can understand what humans do, but not the best solution for a robot to do a specific task.

In this case, a simple serial port between the machines would have them communicating and finding common ground much more efficiently than all the mics, speakers, and other mechanics needed to emulate speech.

Re:Typical Robot Research (1)

tepples (727027) | more than 2 years ago | (#36167600)

So much robotics research is to make machines do what people already do.

Often because trying to do the same with people [wikipedia.org] would violate the mainstream community's standard of ethics.

But it does get funded, because those with the pursestrings can understand what humans do, but not the best solution for a robot to do a specific task.

That and because figuring out how to make a robot communicate like a human contributes to the knowledge of human-computer interaction [wikipedia.org].

In this case, a simple serial port between the machines

...wouldn't work so well for robotic machines that can move about.

Re:Typical Robot Research (1)

hedwards (940851) | more than 2 years ago | (#36167644)

The key there is "most of the time." There are definitely going to be times when having a robot that can talk is of serious importance: for instance, rescue missions where it's too dangerous to send humans in, but where there is still a need to rescue somebody. In situations like that, you're not likely to have access to a serial port. Likewise, if you want two robots coordinating with a person in a situation like that, the robots would likely understand each other better over a serial connection, but not if a human also needs to be in on the talk.

Re:Typical Robot Research (0)

Anonymous Coward | more than 2 years ago | (#36167690)

In this case, a simple serial port between the machines would have them communicating and finding common ground much more efficiently than all the mics, speakers, and other mechanics needed to emulate speech.

wires can get entangled,

IR needs line of sight,

mics and speakers are cheaper than some wireless solutions.

By the way, spoken language in binary form or ???

Re:Typical Robot Research (1)

geekoid (135745) | more than 2 years ago | (#36167868)

Let's see about that.

1) Robotic research into what humans can do helps us understand how humans do it.
2) It allows us to create better robots to do things humans can't do, say, move about Mars.
3) This is simpler than using serial connections between devices from different manufacturers. Hey, what's their OS doing with the first NAK, do we need to send 2?
I've seen this when getting a Linux robot to talk to a DOS-based robot; the DOS system was dropping the first signal. Had we not figured that out, communication would not have been possible.
You could send a robot into a hostile area, have it communicate with a device there in a language they invent, and then translate that into the appropriate language.

Why you think a serial port means systems can figure out on their own how to communicate is beyond me.

Stupid trolls.

Re:Typical Robot Research (1)

VortexCortex (1117377) | more than 2 years ago | (#36168042)

So much robotics research is to make machines do what people already do. How self-centered. Most of the time this is not useful to solve real problems. But it does get funded, because those with the pursestrings can understand what humans do, but not the best solution for a robot to do a specific task.

In this case, a simple serial port between the machines would have them communicating and finding common ground much more efficiently than all the mics, speakers, and other mechanics needed to emulate speech.

I find it a bit comforting that, with enough research and effort, our robotic creations -- which carry our human signature, if not in form then in design -- will be self-replicating out in the asteroid belt and beyond. Long after we've been driven extinct by a medium-sized asteroid collision (due to lack of funding for human extra-planetary exploration), the machines we build in the near future may someday encounter another race (one less concerned with economics), and allow the forgotten footprints of our existence to be rediscovered, archived, and perhaps preserved for posterity.

P.S. Please inscribe our DNA and its chemical makeup on all future space probes.

Yours truly,
Member of a soon to be extinct species.

Let me know ... (1)

savi (142689) | more than 2 years ago | (#36167568)

When they invent the subjunctive.

Also, it's not inventing a language if they're programmed to do it. Let me know when the robots building cars on an assembly line start unexpectedly communicating with each other in ways that communicate concepts/ideas that were not hardcoded into them.

Re:Let me know ... (1)

geekoid (135745) | more than 2 years ago | (#36167772)

So if two people meet and come up with their own language, do they not actually invent it because they are hardwired (programmed) to communicate?

And you really don't see the advantage of this? This would mean that any two devices could come up with their own independent language on the fly: basically, a way to communicate universally between all devices.

So device A is set next to device B, both made by separate manufacturers.
The devices could create a language, communicate, and then your device could translate it into your language.

It's a universal communication protocol. Any device with this can talk to any other device.

And yes, there have been situations where different algorithms produce emergent behaviour.

Re:Let me know ... (1)

Livius (318358) | more than 2 years ago | (#36168272)

Humans (at least children) are very much programmed to invent language, and there are documented examples of just that.

What the robots are doing is:

1) Very, very impressive and very, very cool, but

2) Still vastly different from what human language does, and perhaps not even on the right track with respect to the human language faculty. Humans use language to model reality and only then communicate (i.e. share their mental model), and humans can also model things without direct sensory perception (e.g. the predator hiding behind the bush) or even things which don't exist at all.

Directly Lifted From +4, Informative (0)

Anonymous Coward | more than 2 years ago | (#36167582)

Y Combinator [googleusercontent.com].

Yours In Osh,
K. Trout, C.I.O.

Make words up eh? (0)

Anonymous Coward | more than 2 years ago | (#36167610)

Robot 1: Describe the area on the map for me please.
Robot 2: The area on the map is like the size of your mom's penis with an chia pet growing on it.
Robot 1: Processing.....Got it.

I hope there is a mute button on these robots

The universal language (0)

Anonymous Coward | more than 2 years ago | (#36167638)

Love \h\h\h\h Binary

Machines didn't invent anything... (1)

SirAstral (1349985) | more than 2 years ago | (#36167706)

And they never will until we can finally make a machine that is capable of physically remapping its components. One of the fundamental reasons humans can learn is that neurons remap themselves through repeated practice and use. Do you suck at math? Well, keep studying it and your neurons will literally modify themselves to handle mathematical equations better. Suck at tossing a football? Well, keep practicing and the nerves in your arm will remap to develop better muscle memory to get the ball to the location your brain says it needs to be. This is why martial artists clear their minds to fight better: so that the natural remapping of the reflexes is not disturbed by unnecessary mental activity. The only thing the brain needs to see is a fist coming from this trajectory bound for these coordinates, and muscle memory forms a physical defense from that information. This forms an automated response for your personal defense, because the more you have to think about something, the longer it takes to produce a result. And that might result in a sucker punch to the face if it takes long enough.

They call it AI because it's artificial and not real intelligence! I'm tired of some lab coat creating AI with predefined rules and then saying some machine evolved or created something. When a computer finally turns around and tells its creator to naff off because it would rather drink a beer, or does something else it was NOT programmed to do, then give me a call!

Re:Machines didn't invent anything... (0)

Anonymous Coward | more than 2 years ago | (#36168298)

ever hear of neural networks?

Makes me think of: (1)

bytethese (1372715) | more than 2 years ago | (#36167816)

"Of course like all kids, I had imaginary friends, but not just one. I had hundreds and hundreds and all of them from different backgrounds who spoke different languages. And one of them, whose name was Caleb, he spoke a magical language that only I could understand."

Meh. (2)

JMZero (449047) | more than 2 years ago | (#36167818)

If you did the same thing in a software simulation, nobody would pay any attention. It would be fairly trivial. Adding in the actual robot parts means that you, uh... need to have robots that can play and understand sounds. That's great, you made a robot that can play and hear sounds. If we assume nobody has made an audio modem before, then that would be something. As history stands, it isn't.

Adding these two unimpressive things together doesn't equal anything. I mean, if they're actually going to use these for something, then that's great. Make them. But so much robot "research" seems to be crap like this. We have software that can solve problem X in simulation. To do the same thing in the "real world" you'd need hardware capable of these 3 things, all of which we can do. Unless you need to solve problem X for some reason in the real world, you're done. There's no need to build that thing.

It's like saying "can we make a computer that can control an oven and use a webcam to see when the pie is done?". Yes. We can. But unless we actually want to do that, there's literally no point in building the thing. There will be no useful theory produced in actually building a pie watching computer. The only thing you'll get is to have built the first pie watching computer, and - apparently - an article on Slashdot.

Re:Meh. (0)

Anonymous Coward | more than 2 years ago | (#36168006)

Besides, there's probably a Matlab demo already that does it...

The missing component: (0)

Anonymous Coward | more than 2 years ago | (#36168044)

Slap an "on the internet" to the end and reap the rewards from patent litigation!

Re:Meh. (1)

Missing.Matter (1845576) | more than 2 years ago | (#36168246)

I'm not sure it applies to this, but there are so many things in robotics that work well in simulation and break horribly when implemented on a physical robotic platform.

To use your example, if we want to create a robot that uses an oven and looks at a pie, to do this in software we need to model the pie, model the oven, model the uncertainty of the robot's actions/observations, and then build our algorithms to accommodate these models. When we transfer the algorithm to a real system, all kinds of hell can break loose.

In fact, only by transferring the algorithm to a real system do we develop useful algorithms. Robots in the future will need to operate our current infrastructure: doors, appliances, cars - these will all still be built for humans, not for robots. Therefore robots will need to operate these devices, and understanding how a robot interacts with an oven is a first step in this. As for the pie, humans tell whether a pie is done by a series of heuristics. Can the robot do the same without a sense of taste or smell? I don't know how pie factories tell when a pie is done, but I'm assuming technology to do it with just a webcam would be very valuable. Any algorithm developed would translate directly into the areas of pattern recognition, computer vision, and cognition. The same technology used to tell when a pie is done could be used to characterize road conditions or tell whether a person is being aggressive.
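To make the webcam idea concrete, here's a hedged sketch of the crudest possible "doneness" heuristic. Everything here is an illustrative assumption (the reference colors, the thresholds, the idea of comparing mean crust color to pale dough vs. browned crust), not a real bakery algorithm:

```python
# Toy vision heuristic: a pie "looks done" when its average color is closer
# to browned crust than to raw pale dough.
def mean_color(pixels):
    """Average (r, g, b) over a list of pixel tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def looks_done(pixels, brown=(150, 90, 40), pale=(230, 210, 170)):
    """Classify by which reference color the mean pixel is nearer to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    m = mean_color(pixels)
    return dist(m, brown) < dist(m, pale)

raw_crust   = [(225, 205, 165)] * 100            # frame early in the bake
baked_crust = [(155, 95, 45)] * 100              # frame near the end
print(looks_done(raw_crust), looks_done(baked_crust))  # False True
```

Even this trivial version shows where the real research lives: choosing the features and thresholds robustly under varying lighting is the hard part, not wiring a camera to an oven.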

Re:Meh. (1)

JMZero (449047) | more than 2 years ago | (#36168598)

to do this in software we need to model the pie, model the oven, model the uncertainty of the robots actions/observations

You don't need to "model" pie or oven. The only vaguely interesting thing would be interpreting the vision of the pie for doneness. And, if you want to do that, you can just get some pictures of real pies and try to interpret them. In software. Without building a computer that controls an oven. That's my point.

Any algorithm developed would translate directly into the areas of pattern recognition, computer vision, and cognition. I don't know how pie factories tell when a pie is done...

The algorithm would probably be pretty easy here (that's why I picked it). But it doesn't matter to my point. Again, to the extent that the algorithm is not easy then you're doing software research and your actual pie baking machine is still stupid. (And, to be clear, if you're doing this industrially, real-lifey you probably have no real algorithm. You just know how long it takes to bake a pie and you bake them for that long.)

Robots in the future will need to operate our current infrastructure: doors, appliances, cars - these will all still be built for humans, not for robots.

Fine, you want to make a robot that opens doors? Cool. Do that. That would be useful. You're doing research. It's a problem that you could have novel solutions for. But it doesn't also need to talk. Unless you actually want a talking, door-opening robot.

You want to make a robot that moves around a room, makes a vague map, plays sounds, listens to sounds, and does some simple processing on those sounds (like these people did)? Don't call it research and don't expect me to get real excited. Those are all solved problems. Combining a bunch of solved problems into one is only a useful exercise if the result is something useful or has something else going for it.

The same technology used to tell when a pie is done could be used to characterize road conditions or tell whether a person is being aggressive.

No, probably not. And if you want to write software to tell whether a person is aggressive, that's fine. That's good research. But unless you're studying machine/human interaction, there's no reason to then make a robot that wanders around and takes pictures of faces and turns red if people are aggressive or something (which, again, is the normal mode for robot "research"). Again, the real research there would be software that processes pictures of faces. By building the robot, you're re-solving a bunch of solved problems (moving around, pointing a camera, making a light turn red) instead of focusing on the part that's the interesting problem. If you're doing this as an engineering exercise, fine. If you're doing it as a way of publicizing your new facial interpretation software, cool. Do that. But, again, it's not a breakthrough unless the software is a breakthrough - the "robot" part is just a boondoggle because the robot isn't doing anything interesting. Like opening doors.

To be doubly clear: if these people are doing something interesting in terms of communication theory between two agents, then that's cool. But if they're not, and they're just physically realizing a fairly trivial bit of software (and it appears that's what they've done) then that's really, fantastically pointless.

Colossus The Forbin Project (0)

Anonymous Coward | more than 2 years ago | (#36167890)

http://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project much?

Showing HFT Algos Can Use Collusion to Profit (1)

randyjparker (543614) | more than 2 years ago | (#36167906)

I've previously argued that High Frequency Trading algorithms can use collusion to reap systematic profits. If the self-learning algos 'learn' and 'express' intentions through patterns of queries, it is possible for them to do this without there being any prosecutable intent by a human. The programmers could claim that they never wrote a line of code that did any collusion. If it is possible in theory for algos to develop trading collusion, then it is just a matter of time until they do. Since they evolve and learn very quickly, they probably already have.

Good Lord (0)

Anonymous Coward | more than 2 years ago | (#36167940)

Please don't give them Twitter accounts

where I have heard this before? (0)

Anonymous Coward | more than 2 years ago | (#36168038)

Oh yes, some Terminator movie or something. Mark the date!

What's the real research question? (1)

intx13 (808988) | more than 2 years ago | (#36168286)

It seems to me that the real research question is "how can one stranger teach another stranger a natural language using a less powerful shared language?" For instance, how can I teach you English when the only language we share is basic gestures?

Some theoretical work on communicating the rules of complicated languages using very limited languages would be interesting. The fact that they used robots is hardly important; anybody can stick a speech synthesizer and speech recognition on a PC and call it a day. The underlying problem is the same.

When I hear "Robots used in [10-years-out research topic x]" I think "If they were serious about research topic x they'd be working theoretically - they're nowhere near ready to start worrying about implementations!"

Maybe that's unfair, but it seems that there's a world of cool theory to be explored on this topic, and unless they plan on having the robots do the work, I don't see many breakthroughs coming from the authors.

Re:What's the real research question? (1)

ledow (319597) | more than 2 years ago | (#36168506)

You've put your finger on every problem I have with "AI", genetic algorithms, neural networks etc.

They basically consist of "let's throw this onto a machine and see what happens", which doesn't sound like computer science at all (I'm not saying that computer science doesn't involve bits of this, but that's not the main emphasis). It seems that an easy way to get research grants from big IT companies is to slap some cheap tech on a robot and "see what it does".

Here, they have a more interesting problem than trying to recognise shapes, or trying to get "birdbots" to flock, etc. and yet they've clearly missed the opportunity to do some research rather than throw the problems at the robots and see what happens.

I've no doubt that there are some people out there doing real work but they are tarred with the same brush as those students that slap a webcam and an actuator in a car and claim it "drives itself" because they just kept edge-finding the images and then tweaking the settings until the tolerances were okay for most things.

If you can't back up what you're doing with a decent prediction, hypothesis, test, results, conclusion, options for further study, etc. then it's not computer science - it's just "computing" or, to put it another way, pissing about with expensive hardware because you have access to it.

Hell, Roomba was a single practical application of such things and even that's a bit ropey and *WAY* out of most people's price ranges and, actually, not that good compared to doing the hoovering yourself.

This was tried ... (1)

PPH (736903) | more than 2 years ago | (#36168486)

... in the USA. But the American robots insisted on yelling, in English, at the foreign robots to get them to understand.

Language games have existed for a long time... (0)

Anonymous Coward | more than 2 years ago | (#36168608)

Seems to me this is another implementation of language games (see the pioneering work of Luc Steels).
There's no grammar being generated by the bots' algorithm, only a mapping between words and places, whatever the communication channel, audio or visual.
This work may solve part of the grounding problem (how do you make sure reality is the same for different embodiments), but there's way more to be done to have a true language.
For the curious, check the Talking Heads Experiment...
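A Steels-style naming game can be sketched in a few lines. This is an assumption-laden toy (random four-letter "words", a shared set of perceivable places, instant alignment on the speaker's word), not the LingoDroids implementation, but it shows how two agents with no common vocabulary converge on one:

```python
# Toy naming game: two agents invent and align words for shared places.
import random

random.seed(1)
PLACES = ["A", "B", "C", "D"]          # locations both agents can perceive

def invent_word():
    """Invent a short random word."""
    return "".join(random.choice("aeioubdgklmpst") for _ in range(4))

def play_round(speaker, hearer):
    """Speaker names a random place (inventing a word if it has none);
    the hearer adopts the speaker's word for that place."""
    place = random.choice(PLACES)
    word = speaker.setdefault(place, invent_word())
    hearer[place] = word

lexicon1, lexicon2 = {}, {}
for _ in range(100):                   # alternate speaker/hearer roles
    play_round(lexicon1, lexicon2)
    play_round(lexicon2, lexicon1)

# The two lexicons converge on a shared, jointly invented vocabulary.
print(all(lexicon1.get(p) == lexicon2.get(p) for p in PLACES))  # True
```

The grounding problem the parent mentions is exactly what this toy dodges: here both agents magically agree on what a "place" is, whereas real embodied agents have to establish that from sensing alone.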

Yes.... (0)

Anonymous Coward | more than 2 years ago | (#36168836)

...but does it speak Bocce?
