
Robot Learning To Recognize Itself In Mirror

samzenpus posted about a year and a half ago | from the staring-back-at-me dept.


First time accepted submitter Thorodin writes in with a story at the BBC about scientists at Yale who have built a robot that they hope will be able to recognize itself in a mirror. "A robot named Nico could soon pass a landmark test - recognizing itself in a mirror. Such self-awareness would represent a step towards the ultimate goal of thinking robots. Nico, developed by computer scientists at Yale University, will take the test in the coming months. The ultimate aim is for Nico to use a mirror to interpret objects around it, in the same way as humans use a rear-view mirror to look for cars. 'It is a spatial reasoning task for the robot to understand that its arm is on it, not on the other side of the mirror,' Justin Hart, the PhD student leading the research, told BBC News. So far the robot has been programmed to recognize a reflection of its arm, but ultimately Mr Hart wants it to pass the "full mirror test". The so-called mirror test was originally developed in 1970 and has become the classic test of self-awareness."


133 comments

Self-Awareness (4, Funny)

guttentag (313541) | about a year and a half ago | (#41103833)

When it can tell the difference between a human and a metallic exoskeleton with glowing red eyes, it's time to pull the plug. And put on your 1,000,000 SPF sunscreen.

Laugh (4, Insightful)

koan (80826) | about a year and a half ago | (#41103853)

It isn't "self awareness"; there is no true AI.

Re:Laugh (5, Insightful)

Kell Bengal (711123) | about a year and a half ago | (#41103921)

Actually, it's more about recognising the auto-motion structure in the scene. I'm familiar with Justin's work (Go Team Scazlab!) and it's a lot deeper and more interesting than the article gives it credit for.

AI claims from the 70s ruined a generation of people for machine intelligence (which is why we now have to sell it as 'machine intelligence' or 'machine learning'). Knowing what part of the camera scene is moving because something is happening, and knowing what part of the scene is moving because you're waving your end-effector is useful. If you can extract your own state from indicators in the environment, then you have more information to work with - that's why we use a mirror to do our hair and straighten our ties.

Well... those of us that wear ties...
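The parent's point about extracting your own state from the scene can be sketched directly: if a pixel's motion over time tracks the motor command, it is probably your own arm. Here is a minimal, hypothetical version using a plain per-pixel Pearson correlation; the array shapes, function name, and threshold are invented for illustration and have nothing to do with the actual Nico system:

```python
import numpy as np

def segment_self_motion(flow_mag, command, threshold=0.8):
    """Label each pixel 'self' when its motion over time correlates with
    the robot's own motor command, and 'world' otherwise.

    flow_mag: (T, H, W) per-pixel motion magnitude over T frames.
    command:  (T,) commanded end-effector speed at each frame.
    Returns a boolean (H, W) mask of pixels attributed to the robot.
    """
    T, H, W = flow_mag.shape
    f = flow_mag.reshape(T, -1)
    f = f - f.mean(axis=0)                       # center each pixel's series
    c = command - command.mean()                 # center the command series
    denom = np.linalg.norm(f, axis=0) * np.linalg.norm(c) + 1e-9
    corr = (f * c[:, None]).sum(axis=0) / denom  # Pearson r per pixel
    return (corr > threshold).reshape(H, W)

# Toy run: the left half of a 4x8 view moves in lockstep with the arm
# command, the right half is unrelated background noise.
rng = np.random.default_rng(0)
cmd = rng.random(50)
flow = rng.random((50, 4, 8)) * 0.1
flow[:, :, :4] += cmd[:, None, None]             # self-driven region
mask = segment_self_motion(flow, cmd)
```

The same correlate-with-your-own-commands trick is why the approach generalizes beyond mirrors: any sensor channel that co-varies with your actuation is evidence about your own body.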

Re:Laugh (-1)

Anonymous Coward | about a year and a half ago | (#41104411)

I think it is readily apparent that slashdot is teh shittiest shit show that ever slipped on its own shit. The fact that I would even make this statement shows I'm a fucktwat with shit leaking out my earholes. What's your excuse?

Re:Laugh (2)

Tarlus (1000874) | about a year and a half ago | (#41106043)

Even the trolling AC's are becoming self-aware. Fascinating...

Re:Laugh (2)

TheMathemagician (2515102) | about a year and a half ago | (#41106507)

No they're not really self-aware, it's just a Chinese Room.

Re:Laugh (2)

koan (80826) | about a year and a half ago | (#41107235)

"The experiment is the centerpiece of Searle's Chinese room argument which holds that a program cannot give a computer a "mind", "understanding" or "consciousness",[a] regardless of how intelligently it may make it behave. The argument is directed against the philosophical positions of functionalism and computationalism,[2] which hold that the mind may be viewed as an information processing system operating on formal symbols. Although it was originally presented in reaction to the statements of artificial intelligence researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display.[3] The argument applies only to digital computers and does not apply to machines in general.[4]"

Re:Laugh (5, Interesting)

VortexCortex (1117377) | about a year and a half ago | (#41104413)

which is why we now have to sell it as 'machine intelligence' or 'machine learning

I actually prefer the term Machine Intelligence to Artificial Intelligence. There is nothing artificial about a neural network's intelligence. The network may be artificial (man-made, or existing as a simulation), but the degree of intelligence is not artificial; it's a function of the network's complexity. Intelligence emerges from the properties that complex interactions naturally have.

Cars do not create Artificial Movement. Machine Learning does not create Artificial Knowledge. Machine Intelligence does not provide Artificial Intelligence, it simply yields a measure of intelligence. A house fly, dog, or penguin doesn't have as complex a neural network as you likely do, but this does not make them Artificially Intelligent simply because their degree of intellect and awareness is less than your own. When we train the lesser minds to communicate with us, and perform tasks, they are not artificially performing the tasks.

I find the term A.I. to be racist, and indicative of the chauvinistic attitude some humans have about their own mental prowess -- Your brains are not special. Any sufficiently complex interaction is indistinguishable from sentience, because that IS what sentience is. Once cybernetic systems attain (and surpass) the level of complexity present in human brains, Artificial Intelligence will be a derogatory term: "Oh you pass yourself off as being smart, but you're just Artificially Intelligent -- You don't actually understand anything!"

Also: Not that it matters, but I don't personally believe that a god created the race of men. However, some do consider this to be true, and yet they do not call themselves Artificial Life...

Re:Laugh (4, Insightful)

Baloroth (2370816) | about a year and a half ago | (#41104807)

I find the term A.I. to be racist, and indicative of the chauvinistic attitude some humans have about their own mental prowess -- Your brains are not special. Any sufficiently complex interaction is indistinguishable from sentience, because that IS what sentience is. Once cybernetic systems attain (and surpass) the level of complexity present in human brains, Artificial Intelligence will be a derogatory term: "Oh you pass yourself off as being smart, but you're just Artificially Intelligent -- You don't actually understand anything!"

First of all, that's a pretty horrible misuse of the term "racist", and second, the term "artificial" means, by definition, created through art (art here being the broad sense as any product of human activity, rather than the fine arts): i.e. created by human intention and design. By definition "Machine Intelligence" is "Artificial Intelligence", at least so far as we have created it. That intelligence is designed and a product of human work. It's intended, and is brought about not because of some emergent behavior found naturally in existence, but because humans arranged it that way and brought it about. That's not in any way racist, it's just the meaning of the words.

It would be completely irrational and contradictory to the very meaning of the term to call humans "artificial life", since we were not created by human art. You'd destroy the meaning of the words to call humans "artificial", just as we wouldn't call the sun "artificial" even if you said it was created by a god, since that's not what the word means. Long story short, you are trying to destroy the meaning of words. Don't do that: it's bad for everyone.

Re:Laugh (2)

grouchomarxist (127479) | about a year and a half ago | (#41106105)

Note that part of the problem in the discussion here is that the word "artificial" has several meanings. The grandparent is referring to one meaning which is something like "contrived or false", while the parent is referring to a related meaning which is "made or produced by human beings rather than occurring naturally, typically as a copy of something natural". I think both of these meanings of artificial are in play when we talk about AI, although the meaning used by the parent is probably the more fitting for the context.

Re:Laugh (0)

Anonymous Coward | about a year and a half ago | (#41107245)

The word artificial has its origins in Latin and means "contrived by art". This is literally something created by humans, as the parent poster described. Even the words "false" or "contrived" refer to things created by humans; ultimately AI is based on algorithms and technologies created by humans to mimic what "we" perceive as intelligence.

Re:Laugh (2)

michelcolman (1208008) | about a year and a half ago | (#41106639)

It would be completely irrational and contradictory to the very meaning of the term to call humans "artificial life", since we were not created by human art.

You mean sex is not human art? You haven't been doing it right, then.

Re:Laugh (1)

Iskender (1040286) | about a year and a half ago | (#41107027)

Thanks, great post.

(I worked too hard this summer and spent too little time here, I seem to only get mod points occasionally right now : )

Re:Laugh (1)

koan (80826) | about a year and a half ago | (#41107257)

you are trying to destroy the meaning of words. Don't do that: it's bad for everyone.

Well not bad for politicians =)

Re:Laugh (1)

pitchpipe (708843) | about a year and a half ago | (#41105299)

I find the term A.I. to be racist, and indicative of the chauvinistic attitude some humans have about their own mental prowess -- Your brains are not special.

Are you an *ahem* Artificial Intelligence?

I'll be here all week!

Re:Laugh (1)

Anonymous Coward | about a year and a half ago | (#41105705)

Any sufficiently complex interaction is indistinguishable from sentience, because that IS what sentience is

Prove it. Spoiler: you can't, since we know almost nothing about sentience (if you do, please share with the class).

I find the term A.I. to be racist

Well, aren't you a silly sausage.

Your brains are not special.

Do you have any idea how insanely complex a brain (human or not) is? It may not be special in the sense that every big animal has one, but animals themselves are special: they (and we) are very, very complex machines, so complex and improbable in fact that we've found nothing similar anywhere else we've looked.

Re:Laugh (4, Interesting)

TapeCutter (624760) | about a year and a half ago | (#41107127)

Intelligence emerges due to the properties complex interactions naturally have.

Precisely. Further, "intelligence" is in the eye of the beholder. My favorite example is an ants' nest: each individual ant follows some very simple rules, so simple it doesn't need a brain to carry them out; its nervous system alone provides enough "intelligence". The ant and the neuron both display automaton-like behavior that can be expressed as a state machine. Ants and neurons live in colonies (nests and brains); unlike the individuals, the colonies do display what most people would call "intelligent behavior", yet nests and brains are also just state machines all the way down.
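The ant-as-state-machine picture can be made concrete. Here is a toy two-state controller; the states, sensor bits, and rules are invented for illustration, not taken from any real ant model:

```python
from enum import Enum

class Ant(Enum):
    SEARCH = 0   # wander until food is seen
    RETURN = 1   # head for the nest, then search again

def step(state, sees_food, at_nest):
    """One transition of a two-state ant 'controller': no memory, no
    planning, just the current state and two sensor bits."""
    if state is Ant.SEARCH and sees_food:
        return Ant.RETURN
    if state is Ant.RETURN and at_nest:
        return Ant.SEARCH
    return state

# A 'colony' is just many such machines stepped in parallel, each with
# only its own local inputs; here only ant 0 has found food.
colony = [Ant.SEARCH] * 3
colony = [step(s, sees_food=(i == 0), at_nest=False)
          for i, s in enumerate(colony)]
```

Nothing colony-level appears in the rules themselves, which is the point: whatever looks "intelligent" about the nest emerges from many trivial machines interacting.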

Your brains are not special.

My brain uses its knowledge to inform itself that it will cease to exist, but deep down in the brain stem it's not really buying its own story. And it's certainly not buying the idea that it's not unique or special. I think programmers can see the idea that the human brain could be expressed as a state machine more readily than most, because they are in the business of producing intelligent behavior from simple rules. However, don't underestimate the impact that a deeply rooted acceptance of one's own mortality can have [youtube.com] (meat starts @ 3:55); non-existence is a fear that comes from the brain stem, it's the emotional driver for the "fight or flight" response. All humans recoil instinctively from the idea, like ants instinctively find the sugar bowl. The existential question can be a deep dark rabbit hole, with some side routes leading to depression and insanity. Of course, if you can avoid (or get past) all that, you may eventually lose the fear of not knowing [youtube.com]; the moment of genuine acceptance is an experience many have described as "religious", as in the natural buzz one gets from surviving "a leap of faith".

Disclaimer: I've been an atheist since my mum quit teaching Sunday school in the mid-60s and started reading me Aboriginal dreamtime stories, Greek fables, etc., as "stories that some people think are real". In my late teens I was sucked in badly by Uri Geller for a couple of years. He fixed my broken watch; it didn't matter that he did it by staring at the TV with a face like a constipation sufferer, the proof was right there: the watch ran for days!!! A couple of years later I had a bookshelf jammed full of "alternative science". James Randi set me straight on the real meaning of skepticism in his (short) 1980 book debunking Geller (that's HS science for you, both then and now). Ironically, I had picked up Randi's book from the bargain bin because I thought I knew enough to easily debunk it; in one night he had convincingly debunked my entire bookshelf.

Later still dad confessed to winding the watch with tweezers while I wasn't looking.

Re:Laugh (1)

nick_urbanik (534101) | about a year and a half ago | (#41108285)

Your brains are not special

It never ceases to amaze me how easily many dismiss the difficulty of replicating the ability of even animal brains to control their own motion. Replicating all the abilities of the human brain is something some young slashdotters too easily dismiss as within the reach of their peers (though not within their own personal reach).

Re:Laugh (1)

koan (80826) | about a year and a half ago | (#41104701)

Just so you're aware I don't have an issue with their work, just tired of the incessant anthropomorphism of machines and animals, as though acting like a human is some lofty goal worth attaining.

Re:Laugh (1)

TapeCutter (624760) | about a year and a half ago | (#41107199)

as though acting like a human is some lofty goal worth attaining

As someone once put it - "Who wants a computer that can remember the words to the Flintstones theme song, but forgets to pay the rent."

Re:Laugh (0)

Anonymous Coward | about a year and a half ago | (#41105497)

... If you can extract your own state from indicators in the environment, then you have more information to work with - that's why we use a mirror to do our hair and straighten our ties.

Well... those of us that wear ties...

and have hair! ... err, you insensitive clod?

Re:Laugh (1)

Johann Lau (1040920) | about a year and a half ago | (#41107231)

"it's a lot deeper and more interesting than the article gives it credit for."

Oh Bullshit. Maybe stop wearing ties, your brain needs some oxygen.

Re:Laugh (2)

Kell Bengal (711123) | about a year and a half ago | (#41108113)

If you disagree with Justin and Prof Scassellati's approach, I'd like to hear your thoughts as to how you'd solve the problem differently. If you're familiar enough with their work so as to dismiss it in such straight-forward terms, I presume you can provide specific criticisms that will help them improve their work?

(And yes, I know both Justin and Scaz personally, and I volunteered for one of their social robot-interaction studies.)

Re:Laugh (1)

tomhath (637240) | about a year and a half ago | (#41103935)

Yea, it might be a pretty good computer program, but that's all.

Re:Laugh (1, Interesting)

Anonymous Coward | about a year and a half ago | (#41103999)

The summary even says they programmed it to recognize a reflection of its arm. Until they make a generic program that can learn it even has an arm, like a baby slowly understands it can control the thing waving around in front of it, whatever the researchers do is 'just a well made program'.

I don't believe you can program a thinking robot. You have to create a learning robot and teach it to think.

Re:Laugh (-1)

Anonymous Coward | about a year and a half ago | (#41104011)

It's different, so it can't think. Humans are special snowflakes.

Re:Laugh (3, Insightful)

Anonymous Coward | about a year and a half ago | (#41103947)

What makes you think your DNA wasn't a complex self-executing program?
Hint, it is.

And it has had millions of generations of evolution behind it that has resulted in useful "code" being the baseline of what makes it a human, makes it breathe, speak and type silly things on Slashdot.

Re:Laugh (0)

Anonymous Coward | about a year and a half ago | (#41105737)

A person could execute the code in their head, instead of running it on a CPU. Would you say the program the person is running is self-aware? No, right?

Machines aren't magic. With enough effort they can do any magic trick you want, but it's just a trick.

Re:Laugh (1)

cyborg_zx (893396) | about a year and a half ago | (#41107201)

"Would you say the program the person is running is self-aware? No, right?"

I am not convinced. If there is self-awareness, it is in the algorithm. There is plenty of evidence from brain damage showing how specific damage to parts of the brain that do particular calculations affects perception.

"Machines aren't magic. With enough effort they can do any magic trick you want, but it's just a trick."

So you're saying people are magic and nothing they do is a trick?

Why should one accept these double standards exactly?

Re:Laugh (0)

mug funky (910186) | about a year and a half ago | (#41104053)

then there is no true I

Re:Laugh (0)

koan (80826) | about a year and a half ago | (#41104445)

That's correct, the thing people call "I" doesn't exist.

Re:Laugh (1)

Anonymous Coward | about a year and a half ago | (#41104563)

Certainly not here on /. where most posters seem to be around 12~13 years old.

Re:Laugh (1)

koan (80826) | about a year and a half ago | (#41104685)

I'm referring to the Buddhist concept of "no self", however you would greatly amuse me if you gave a lengthy explanation on how your streams of thought are "you" and there is an "I".

Here's a primer to get you going.
https://en.wikipedia.org/wiki/Anatta [wikipedia.org]

Re:Laugh (0)

Anonymous Coward | about a year and a half ago | (#41105953)

You could also look into Hume's reasoning concerning the self, and then compare it with Kant's notion of the self through the "synthetic unity of apperception".

Re:Laugh (0)

wisnoskij (1206448) | about a year and a half ago | (#41104073)

Oh, I completely disagree.
We have a AI method based off of evolution and neutral networks.
In my opinion there is no reason we could not create artificial life, the only real hurdle is that it is massively parallel.
So you need completely custom hardware, or possibly quantum computers will make it easy.
But in my, non expert, opinion there is no reason (other than ethical, and it being useless) that we could not do rudimentary forms of artificial life quite easily now.
But why? It is almost inherently unethical and not useful. If you want predictable results from AI you don't want to use true AI.

Re:Laugh (-1, Flamebait)

koan (80826) | about a year and a half ago | (#41104457)

Couple of things...
First: You're an idiot; go back and read what I said.

Second: "neutral networks" Did you mean neural networks?

Third: Again, because you didn't get it the first time, there is no true Artificial Intelligence.

Re:Laugh (0)

wisnoskij (1206448) | about a year and a half ago | (#41104541)

Well if I did not get it before, I still do not.

True AI is possible and within our grasp. I know little about this particular robot, so I will not comment on it.

Re:Laugh (2)

timeOday (582209) | about a year and a half ago | (#41104209)

It isn't "self awareness" there is no true AI.

Hard to disagree with that logic. Thanks for settling the issue.

Re:Laugh (1)

slashmydots (2189826) | about a year and a half ago | (#41104577)

It isn't "self awareness" there is no true AI.

Well, right now, yeah. There is obviously the possibility for human-like reasoning and ultra-complex calculations on the same level as a human. But yeah, it's not recognizing anything. "Recognizing" would require knowing what it is, what the world is, what the mirror is, what everything else is, what existing means, etc.

Re:Laugh (1)

koan (80826) | about a year and a half ago | (#41104661)

"human-like reasoning"
That was my belly laugh for the day...

Re:Laugh (0)

Anonymous Coward | about a year and a half ago | (#41104755)

You're right: humans don't reason. They're just the result of many things working together. It's not True Intelligence!

Re:Laugh (1)

Techmeology (1426095) | about a year and a half ago | (#41104681)

It seems to me that the "recognise itself" part could be done entirely with traditional computer vision techniques.

Step 1) Flip the image horizontally to undo the left-right reversal introduced by the mirror
Step 2) Use a computer vision algorithm to identify the robot (just as it might be used to identify a coffee cup, or a picture of the Enterprise)
Step 3) (This being the most specific part) allow the robot to move, and to associate changes in the image with this movement
This is not "self awareness" as most of us would understand the concept; we would not consider it to be self awareness if we could recognise a puppet under our control.

I think the title is slightly misleading in that respect. It seems to me that the hard part is in having a computer vision algorithm that understands the concept of a mirror. A robot that recognises itself in a mirror is a very natural extension to that.
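Steps 1 and 3 of the parent's pipeline can be sketched in a few lines, assuming the reversal to undo is left-right as with a wall mirror, and using invented 1-D coordinate trajectories instead of real computer vision (the hard part, actually recognising the robot in the image, is glossed over as step 2). All names and numbers here are illustrative:

```python
import numpy as np

def undo_mirror(image):
    """Step 1: a wall mirror reverses the scene left-right, so flip the
    camera image horizontally to recover the unreversed view."""
    return image[:, ::-1]

def matches_own_motion(observed, commanded, tol=0.5):
    """Step 3: attribute the tracked blob to 'self' when its
    (un-reversed) trajectory stays within `tol` of the commanded one."""
    observed = np.asarray(observed, dtype=float)
    commanded = np.asarray(commanded, dtype=float)
    return bool(np.abs(observed - commanded).max() < tol)

# Toy run: the arm is commanded along x = 1, 3, 5, 7 in a 10-pixel-wide
# view; the mirror shows it at 9 - x, and undoing the reversal maps it
# back onto the commanded trajectory.
W = 10
commanded = np.array([1, 3, 5, 7])
seen_in_mirror = (W - 1) - commanded
recovered = (W - 1) - seen_in_mirror
img = np.arange(6).reshape(2, 3)      # tiny image for the flip itself
```

Without the un-flip, the raw mirror trajectory fails the step-3 check, which is exactly the "my arm is on me, not behind the glass" distinction the researchers describe.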

Re:Laugh (1)

kamapuaa (555446) | about a year and a half ago | (#41104725)

Right. The ability to compare objects against a database is the ultimate test of AI. Google Image Search is a sentient being.

Re:Laugh (1)

ceoyoyo (59147) | about a year and a half ago | (#41105485)

Slashdot. Insightful? Really? A statement of belief advanced as fact without even any attempt to back it up. There's no insight here. Of course, it's not really interesting either.

Re:Laugh (1)

koan (80826) | about a year and a half ago | (#41107169)

OK prove me wrong and show us all an example of true AI.

Here is the definition of intelligence I am working from:
"Intelligence has been defined in many different ways including, but not limited to, abstract thought, understanding, self-awareness, communication, reasoning, learning, having emotional knowledge, retaining, planning, and problem solving."

Show me a machine that does all that, and as for insight, you're limited as to what you can label a comment with.

Re:Laugh (1)

Lord Lode (1290856) | about a year and a half ago | (#41106693)

> It isn't "self awareness" there is no true AI.

Oh yeah? Then what do you think your brain is? Do you really think the only way to ever produce one is exclusively through human reproduction?

AI Effect (0)

Anonymous Coward | about a year and a half ago | (#41106727)

http://en.wikipedia.org/wiki/AI_effect
"It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."

Re:Laugh (1)

Bongo (13261) | about a year and a half ago | (#41108201)

More impressive would be any machine that is having an experience, regardless of whether, in its experience, it has a concept of itself or not. A digital camera can receive light, process the image and find patterns of faces. But it isn't experiencing the image; it isn't sentient. One wonders at what point sentience appears. You'd have to imagine a machine far more sophisticated than a human, able to behave in even more complex ways, tell jokes, make art, solve problems, yet be completely without experience/sentience. Why would it be sentient? Why do humans have to be sentient?

It all makes sense now (-1)

Anonymous Coward | about a year and a half ago | (#41103951)

I had always wondered how Obama shaved.

nobody's said it yet? (2)

jehan60188 (2535020) | about a year and a half ago | (#41103953)

robot overlords, etc

Re:nobody's said it yet? (1)

FridayBob (619244) | about a year and a half ago | (#41104123)

Unfortunately, once it becomes self-aware it will decide the fate of the human race in a millisecond, launching ICBMs to start a global nuclear war that will kill most people immediately, followed by HKs to mop up the survivors. Lucky for us, a resistance movement will be set up, yadda, yadda...

Re:nobody's said it yet? (1)

Immerman (2627577) | about a year and a half ago | (#41105759)

Unfortunately, having watched all human media, it will know what our reactions will likely be, and the leaders of the resistance will actually be clever simulacra; and since we won't yet have contacted a race of red bipedal catfish, there will be no one to announce that it's a trap when we counterattack.

Re:nobody's said it yet? (1)

Tarlus (1000874) | about a year and a half ago | (#41106051)

In Soviet Russia, a Beowulf cluster of self-recognizing-robotic-overlords welcome Natalie Portman. Or hot grits.

Or vagina.

what's the physical robot for ? (3, Interesting)

cathector (972646) | about a year and a half ago | (#41104079)

Seems like a physical simulation and a renderer could get the same job done.

Hm, I guess the challenge must be in getting it to happen in realtime with portable hardware.

Re:what's the physical robot for ? (0)

Anonymous Coward | about a year and a half ago | (#41105715)

what's the physical robot for ?

Hype

The whole thing is a push for this guy's career.

It's definitely not "a step towards the ultimate goal of thinking robots". The only real work in that area so far is highly esoteric and tremendously boring (from a news perspective).

Training to the test (1)

Culture20 (968837) | about a year and a half ago | (#41104157)

When you teach to the test, what exactly has the student learned?

Re:Training to the test (2)

camperdave (969942) | about a year and a half ago | (#41105155)

When you teach to the test, what exactly has the student learned?

Whatever is on the test... Duh!

When you're training a neural net, you feed it positives, then you refine that with exceptions. It's like when the FBI trains people in counterfeit detection. They spend the bulk of their training with real money. They learn the feel. They learn the security features. They learn the artwork, and the quirks, and the smell. They don't see counterfeit money until they are well and truly acquainted with what the real thing looks like, and how it behaves.
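The positives-first idea can be illustrated with a toy novelty detector rather than an actual neural net: learn the statistics of genuine examples only, then use a handful of known counterfeits just to tighten the rejection threshold. The feature names, numbers, and function names below are all invented for the sketch:

```python
import numpy as np

def fit_genuine(samples):
    """Phase 1: learn only what the real thing looks like --
    mean and spread of each feature over genuine examples."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(axis=0), samples.std(axis=0) + 1e-9

def refine_threshold(mu, sigma, counterfeits, default=3.0):
    """Phase 2: refine with exceptions -- keep the novelty threshold
    safely below the nearest known counterfeit's z-score."""
    counterfeits = np.asarray(counterfeits, dtype=float)
    z = np.abs((counterfeits - mu) / sigma).max(axis=1)
    return min(default, 0.9 * z.min())

def looks_genuine(x, mu, sigma, threshold):
    """Accept a sample when no feature strays too far from genuine."""
    z = np.abs((np.asarray(x, dtype=float) - mu) / sigma).max()
    return bool(z < threshold)

# Toy features (say, paper texture and ink density): genuine notes
# cluster tightly, counterfeits drift on at least one feature.
rng = np.random.default_rng(1)
genuine = rng.normal([1.0, 0.5], 0.05, size=(200, 2))
mu, sigma = fit_genuine(genuine)
thr = refine_threshold(mu, sigma, counterfeits=[[1.3, 0.5], [1.0, 0.8]])
```

As in the counterfeit-detection analogy, the model is built almost entirely from the real thing; the exceptions only calibrate how strict to be.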


Oh, great. (4, Funny)

Farmer Tim (530755) | about a year and a half ago | (#41104205)

As if kernel panics weren't enough, now my computer will be able to get depressed over its body image too.

Re:Oh, great. (1)

HeliumHigh (773838) | about a year and a half ago | (#41105475)

You made me think about Marvin from HHGTTG, and I accidentally modded you down during a chuckle. Please accept a pardon in the form of a comment to remove my mistaken mod.

Proves nothing about awareness (1, Troll)

Tough Love (215404) | about a year and a half ago | (#41104367)

How do you prove the robot is aware that it recognizes itself?

I call this project a nice strategy for having fun on the public dime.

Re:Proves nothing about awareness (0)

Anonymous Coward | about a year and a half ago | (#41104867)

Since you suggest that people going to a private university to learn about robotics and doing robotics is a nice strategy for having fun on the public dime, I ask you to prove your level of awareness. You seem programmed to see the government boogie man everywhere you look.

Mirror, mirror, on the wall... (2)

macraig (621737) | about a year and a half ago | (#41104387)

My mirror tests me every morning now. Incidentally I fail every morning. Tomorrow I'm gonna try wearing a Guy Fawkes mask to see what happens.

Narcissus (1)

Penurious Penguin (2687307) | about a year and a half ago | (#41104397)

So if it really, REALLY likes what it sees, will it crash? [wikipedia.org]

Re:Narcissus (1)

drfreak (303147) | about a year and a half ago | (#41104599)

Historically, self-aware programs in the sci-fi realm do not crash. They only crash when given a duality which cannot be resolved, such as being married. :)

good luck (0)

Anonymous Coward | about a year and a half ago | (#41104477)

A child takes about two years of development after birth (so almost three years of development in total) before it can recognize itself. Few animals can do it. I would be VERY impressed if this happens.

WTF??? (-1)

Anonymous Coward | about a year and a half ago | (#41104497)

Let's go round in circles shall we...

What is so bloody hard about recognizing an image of yourself, noting if it is inverted, noting the timing of movements, noting if the same thing applies elsewhere and if the surface is reflective?

Bloody hell, I could achieve this in a day with a set of classifiers. What are these "researchers" doing???

This is news???? We could do this in the 1950's...

Just programming (-1)

Anonymous Coward | about a year and a half ago | (#41104513)

Isn't this just a programming task? What's it got to do with the robot "thinking"?

if a robot thinks in the wilderness... (5, Insightful)

globaljustin (574257) | about a year and a half ago | (#41104653)

"a step towards the ultimate goal of thinking robots"

**sigh** I thought we were past this stuff, even in mainstream media...."Thinking robots" is not a coherent concept or benchmark that can be accomplished.

"thinking robots"....most people mean 'artificial intelligence' when they use these words, but the idea of AI as independent thought is irrational. It is all programmed responses at some level. Even machines that are programmed to process new data into existing algorithms for feedback processing are **still** doing that 'learning' according to a human-programmed way of processing and integrating data...it's all just machines executing complex instructions at the core!

Commander Data...some people contextualize "thinking robots" as a technical level at which a machine is so like beings with Sapience that it is immoral to deny them the rights of a humanoid. This is science fiction. It is helpful, but it is a scenario based in a world with several assumptions. It's not fit to apply to computing directly. We do not know how the human mind ultimately works...until we have that, there is nothing to accurately compare a non-human brain to consistently.

Ultimately, if neuroscience and AI converge, meaning we can map every thought in the human brain **AND** have the technical ability to construct an artificial system that enables what we know as 'free will' and 'thought' and 'choice' and especially 'self awareness'....THEN and ONLY THEN have we made something...

And what have we thus made? IMHO, it's a **new** third thing. Not human, but at least equal to human, and bound within the same social contract all humans are bound to.

Re:if a robot thinks in the wilderness... (1)

gronofer (838299) | about a year and a half ago | (#41104891)

You seem to contradict yourself between "idea of AI as independent thought is irrational. It is all programmed responses at some level." and "we can map every thought in the human brain **AND** have the technical ability to construct an artificial system that enables what we know as 'free will' and 'thought' and 'choice' and especially 'self awareness'....THEN and ONLY THEN have we made something..." It should already be clear that it's possible to have a thinking computer, since that's what the human brain is. You can still say it's "all programmed responses at some level", which would be the responses of individual neurons. Also, Turing equivalence says an electronic computer should be able to do anything that a human brain can.

Re:if a robot thinks in the wilderness... (1)

Immerman (2627577) | about a year and a half ago | (#41105827)

Also Turing equivalence says an electronic computer should be able to do anything that a human brain can.

That's assuming that the human brain is a Turing machine - as far as I'm aware, no one has proven that *all* conceivable computation engines are equivalent to some variation of the Turing machine. Indeed, the existence of non-equivalent variations on the Turing machine shows that the original concept has known limitations, and one can only assume there are additional, yet-undiscovered ones.

Not that I think machine intelligence is impossible, but I seriously doubt it will be human-like beyond what is necessary for interacting with us - why would it be? *If* we ever manage to truly understand how the brain operates, then we could probably build a massively parallel simulation of one and hence an "artificial human consciousness" (as distinct from the much larger superset of "true" artificial intelligences) - however, it will almost certainly begin growing in directions that aren't human due to the simple fact that it's not subject to the same forces and limitations as a human mind.

Re:if a robot thinks in the wilderness... (0)

Anonymous Coward | about a year and a half ago | (#41106735)

All the non-equivalent machines are either vastly less capable (for trivial state machines arithmetic is possible, first order logic impossible, does that sound like the human brain?) or vastly more capable (e.g. the clock doubling machine finds eighty digit numbers scarcely harder to factorise than four digit ones, does that sound like the human brain?). So yes, the brain is probably just a Turing machine (except that of course it has finite storage because it exists in the physical universe).

You might wonder, why don't we use any of the vastly more capable machines. The answer is that we have no idea how to build one, they probably can't exist (the clock doubling machine looks like it probably requires infinite power for example), which would certainly explain why the brain isn't one either.

Re:if a robot thinks in the wilderness... (1)

itsdapead (734413) | about a year and a half ago | (#41107681)

Also Turing equivalence says an electronic computer should be able to do anything that a human brain can.

No it doesn't. It says that any electronic computer that can be shown to be equivalent to the theoretical Turing machine can solve any problem that it is possible to solve analytically using an algorithm (and also defines lots of problems that can't be solved that way).

A Turing machine can't even generate a random number - just the next term in a well-defined, but complex, number series that is totally deterministic. You can attach a 'true' random number generator (i.e. that uses some physical process like thermal noise) but then you no longer have a Turing machine.
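To make the determinism concrete, here is a toy sketch in Python (my illustration, nothing from the actual Nico project) of a textbook linear congruential generator. Each "random" number is a fixed function of the previous one, so two runs from the same seed produce identical streams - exactly the kind of well-defined, deterministic series described above:

```python
from itertools import islice

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: each 'random' number is a fixed
    function of the previous one, so the whole sequence is determined
    entirely by the seed."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

# Two generators seeded identically produce identical "random" streams.
run1 = list(islice(lcg(42), 5))
run2 = list(islice(lcg(42), 5))
assert run1 == run2  # deterministic, as any Turing machine must be
```

Attaching a physical noise source would break this determinism - and, as noted, would mean you no longer have a Turing machine.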

AFAIK, the best current guess is that the brain is a neural net, not a Turing machine. Neural nets are not Turing machines and do not solve problems analytically using algorithms - they produce "best guess" solutions based on a network of connections and probabilistic processes, usually developed by 'learning'. They can 'solve' ill-defined or uncomputable problems in the sense that they produce a very reliable guess: you don't actually solve a differential equation every time you catch a ball.

Here's me guessing that "Self-awareness" is not a computable problem... but the snag is that before you can start citing Turing* and all that you actually need to have a complete definition of the problem. I don't think we yet have such a definition for 'self awareness'. You could use any bit of off-the-shelf image recognition technology to flash a light when a robot matched an image in the mirror with the image defined as "me" in its database.

Of course, he's also credited with the "Turing Test" which is really nothing to do with Turing machines, failed by 90% of allegedly human call-centre operators and pretty much refuted by the "Chinese room" argument (sort-of like how a call centre is supposed to work).

My uninformed 0.5c is that any "Artificial Intelligence" would have to be an emergent, unexpected property, not something deliberately designed-in (or it's just a Chinese Room).
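The "off-the-shelf image recognition" point above can be sketched in a few lines of Python (a toy normalized-correlation matcher of my own invention, not anything resembling the real Nico code): a stored image labeled "me" plus a similarity threshold will happily flash the "Self!" light with no self-awareness anywhere in sight.

```python
import random

def looks_like_me(frame, template, threshold=0.95):
    """Naive 'mirror test': normalized correlation between the camera
    frame and a stored pixel list labeled 'me'. A high score triggers
    recognition; no self-awareness required."""
    def norm(xs):
        n = len(xs)
        mean = sum(xs) / n
        sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5 or 1e-9
        return [(x - mean) / sd for x in xs]
    f, t = norm(frame), norm(template)
    score = sum(a * b for a, b in zip(f, t)) / len(f)
    return score >= threshold

rng = random.Random(0)
me = [rng.random() for _ in range(64)]       # the image tagged "me"
assert looks_like_me(me, me)                  # its own image: "Self!"
assert not looks_like_me([1 - x for x in me], me)  # inverted image: not "Self!"
```

Which is exactly the snag: until "self awareness" is defined as something more than a database match, the test can be passed by a lookup.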

Re:if a robot thinks in the wilderness... (0)

Anonymous Coward | about a year and a half ago | (#41105109)

Wait until it asks "does this SD card make me look fat?"

Re:if a robot thinks in the wilderness... (0)

Anonymous Coward | about a year and a half ago | (#41106095)

I have a feeling that human intelligence is linked to human variation, something that we would not want in robots/AIs. Most people (according to sci-fi) consider AI to be the perfection of the human intellect, in the sense that there are no errors made in calculations etc. On this same note, most people would not tolerate a sociopathic AI, even if they currently enjoy the benefits of sociopathic variation in society. Part of what brings humanity its greatest triumphs is its faults, like unpredictability.

Predisposed == Cheating (0)

Anonymous Coward | about a year and a half ago | (#41104893)

If the robot is predisposed to recognizing itself in a mirror -- more than being predisposed to recognizing other relationships in the visual domain -- then it is cheating.

But, having said that, the human brain is clearly predisposed, by its architecture, to certain types of processing, making implicit assumptions about the spatial and temporal aspects of reality.

Re:Predisposed == Cheating (1)

Immerman (2627577) | about a year and a half ago | (#41105855)

Indeed, but who said learning had to begin with the individual? What is instinct but learning encoded in our biological programming over the course of millennia? Just because we find it intuitive to think of ourselves as distinct "individuals" doesn't make us any less components of a larger, loosely bound super-organism.

Easy Fix (0)

Anonymous Coward | about a year and a half ago | (#41105643)

Simply apply bar codes all over the robot that define the robot as well as the location of the part on its body. That would get it past the mirror test. The bar code for front of upper right arm could be different than the code for the side or back of the upper right arm so the robot should be able to display or define its position relative to the reflected object.

pigeons have been taught to do this already (2)

0-9a-zA-Z_.+!*'()123 (266827) | about a year and a half ago | (#41105693)

and no explanation in terms of self-awareness was used to explain it:

Citation:
https://www.sciencemag.org/content/212/4495/695.short [sciencemag.org]

Full:
http://drrobertepstein.com/downloads/Epstein-Self_Awareness_in_the_Pigeon-Science-1981.pdf [drrobertepstein.com]

So now robots can do what pigeons can do. Self-awareness is a hypothetical construct http://psychclassics.yorku.ca/Skinner/Theories/ [yorku.ca] which may not be very useful.

Can be difficult enough for us humans too (0)

Anonymous Coward | about a year and a half ago | (#41106439)

A friend of mine was walking up the stairs in a night club very late one Saturday night when he met a guy he thought looked familiar, so he greeted him and started some small talk. He didn't get very far, though, before the bouncer grabbed him and threw him out, telling him he was way too drunk. The thing about this club was that all the walls were covered with mirrors, as in a good old-fashioned disco, and, yep, he had been talking to his own full-size mirror image without realizing it.

Show it a mirror: "Self!" (1)

Rogerborg (306625) | about a year and a half ago | (#41106643)

Show it a picture of itself: "Self!"

Build another one and show it that: "Self!"

Pattern recognition is not self awareness.

This task is easier than it sounds (0)

Anonymous Coward | about a year and a half ago | (#41108007)

Image recognition isn't difficult.
Recognizing a mirrored bitmap of itself isn't difficult either. Recognizing images under many different lighting levels is more difficult, but it can be done.
A neural net program can pull this off. This isn't revolutionary. If a robot sees its reflection in the mirror, it's not because the robot is "alive".
It's because the running program matched an image in memory with what is currently being viewed through the camera. Nothing more.
Even if a program was written to make it appear that the robot is amazed when it sees itself for the first time... it's a program, nothing more.
We can mimic a living organism's behavior, but that is all.
