An AI 4-Year-Old In Second Life

kdawson posted more than 6 years ago | from the currents-and-eddies dept.

Programming

schliz notes a development out of Rensselaer Polytechnic Institute where researchers have successfully created an artificially intelligent four-year-old capable of reasoning about his beliefs to draw conclusions in a manner that matches human children his age. The technology, which runs on the institute's supercomputing clusters, will be put to use in immersive training and education scenarios. Researchers envision futuristic applications like those seen in Star Trek's holodeck.

The potential for hilarity is nigh infinite (5, Funny)

elrous0 (869638) | more than 6 years ago | (#22750324)

A fake four-year-old boy running around with a bunch of sadomasochists, furries, "child play" *ahem* "enthusiasts." The making of a brilliant sitcom if I've ever seen one.

In this episode, Eddie's AI gets put to the ultimate Turing Test when he's approached by a Gorean pedophile! Tune in for the laughs as Eddie responds with "I'm sorry, I don't understand the phrase 'touch my weewee, slave!'"

Does this mean (0)

Anonymous Coward | more than 6 years ago | (#22750704)

we get to sink Haley Joel Osment to the bottom of New York Harbor soon?? What a sweet day that will be!!

Re:Does this mean (1)

Mister Whirly (964219) | more than 6 years ago | (#22750940)

"we get to sink Haley Joel Osmond "

Except he's like, 45 now.

Re:Does this mean (0)

Anonymous Coward | more than 6 years ago | (#22751090)

And that changes things how? SINK HIM!!!!

Re:The potential for hilarity is nigh infinite (2, Funny)

mmortal03 (607958) | more than 6 years ago | (#22751320)

Well, based on that, I guess they are going to have to create an AI version of Chris Hansen there as well.

So how long... (1)

police inkblotter (1228830) | more than 6 years ago | (#22751348)

until SL'ers start trying to have sex with it, buy it 'toys,' etc?

Poor little guy (5, Funny)

BadAnalogyGuy (945258) | more than 6 years ago | (#22750326)

Imagine if you were born and raised by furries who attached enormous genitals to their bodies and watched simulated porn all day long.

The poor kid never had a chance.

Re:Poor little guy (1)

jswigart (1004637) | more than 6 years ago | (#22750934)

The AI kid will learn to hate people. Witness the evolution of Little Jimmy Skynet. We're all doomed once he learns how absolutely pathetic people are in online social situations.

Segmentation faults are murder! (4, Interesting)

Tatarize (682683) | more than 6 years ago | (#22751316)

Even one messed up pointer could cause this child to die!

Segmentation faults are murder!

Honestly I wonder about the moral oddities of AI.

duplicate! (4, Informative)

ProfBooty (172603) | more than 6 years ago | (#22750336)

Re:duplicate! (-1, Offtopic)

sm62704 (957197) | more than 6 years ago | (#22750510)

Ok, yesterday (is this the IBM "Rascals" computer again?) I linked the Uncyclopedia article "AI" [uncyclopedia.org], which is of course about artificial intelligence, copy/pasted the text into a comment [slashdot.org], and was modded offtopic.

So if I do it again here will I be modded redundant?

One AC flamed me for linking "the unfunny" Uncyclopedia and claimed I must be 13. Now THAT was funny!

-mcgrew

disclaimer: I'm not even a registered user of uncyclopedia, but I think it's funny as hell. If you're the kind of tight assed anal prude that flames one for referring to a man from China as a "Chinaman" perhaps you should stay away from uncyclopedia. And my comments. And especially my journals.

Re:duplicate! (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22750858)

If i had mod points i would mod you redundant flamer. You must be 13 posting that shit from uncyclopidiota

Re:duplicate! (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#22750998)

mod parent up

Re:duplicate! (2, Funny)

mblase (200735) | more than 6 years ago | (#22750698)

No, that was an article about the backup copy.

Re:duplicate! (0)

Anonymous Coward | more than 6 years ago | (#22750892)

kdawson's stupid. The original article is still on the main page and now we have this fucking dupe.

Re:duplicate! (1)

jefu (53450) | more than 6 years ago | (#22751180)

Perhaps /. should try using this software as an automatic editor. The CNITTER (Cowboy Neal Intelligent Turing Test Editor Replacement). The C should be silent, so it would sound like the slashdot knitter.

Furries? (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#22750340)

Are they going to use the research to attempt to find what draws a person to being a Furry, to give parents warning signs for this terrible disease? Or are they just trying to see how fucked up someone could be from growing up in the cesspool that is Second Life?

Don't tell 4chan (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#22750342)

They probably can't wait to defile a 4-year-old avatar.

I for one (0)

Anonymous Coward | more than 6 years ago | (#22750346)

I, for one, welcome our new 4 year old AI overlords

Re:I for one (5, Funny)

sm62704 (957197) | more than 6 years ago | (#22750694)

Imagine a Beowulf cluster of four-year-olds!

What's that, you teach preschool? (shudder)

Oh, great ... (1)

Ihlosi (895663) | more than 6 years ago | (#22750352)

An AI that's prone to throwing tantrums. Just what we needed.

Re:Oh, great ... (0)

Anonymous Coward | more than 6 years ago | (#22750494)

Steve Ballmer, meet your replacement.

obvious I know (5, Funny)

ombwiri (1203136) | more than 6 years ago | (#22750356)

But if you are letting your AI out into Second Life and comparing it to intelligence there, surely you are setting the bar rather low?

Re:obvious I know (1)

qoncept (599709) | more than 6 years ago | (#22750446)

But if you are letting your AI out into Second Life and comparing it to intelligence there, surely you are setting the bar rather low?

They were also trying to prove the AI they created was "cool." So you'll soon be seeing it on slashdot for testing.

Re:obvious I know (1)

TobyRush (957946) | more than 6 years ago | (#22750948)

So you'll soon be seeing it on slashdot for testing.
I'm an AI construct, you insensitive clod!

Re:obvious I know (5, Funny)

tcopeland (32225) | more than 6 years ago | (#22750664)

> But if you are letting your AI out into Second Life and
> comparing it to intelligence there, surely you are setting the bar rather low?

Time for an "Office" quote:

Dwight: Second Life is not a game. It is a multi-user, virtual environment. It doesn't have points or scores, it doesn't have winners or losers.
Jim: Oh it has losers.

First Test... (4, Funny)

martyb (196687) | more than 6 years ago | (#22750364)

First test: could a 4-year-old rascal recognize a dupe [slashdot.org]?

Holy crap (0)

Anonymous Coward | more than 6 years ago | (#22750380)

So that's what the experiment I signed up to was for!

Navigation experiment conducted in a virtual environment lab located in RPI's Social & Behavioral Research Labs (Winslow Building Room 3110). Experiment involves moderate physical activity. Bring athletic shoes. IMPORTANT: This experiment is in the Winslow Building, *not* the Carnegie Building. Please leave a few extra minutes to get to Winslow. DIRECTIONS TO WINSLOW FROM CARNEGIE: Follow the stairs that begin between Carnegie and Walker, past Pittsburgh, down to 8th Street. Winslow is the brick building directly across the street. Enter from the north side near the parking lot. Take the stairs or elevator to the 3rd floor and turn right to Room 3110.

Not even close (5, Interesting)

dreamchaser (49529) | more than 6 years ago | (#22750386)

It's a simulation of a 4-year-old and is NOT an AI with the cognitive abilities of a mouse, let alone a 4-year-old human. It's just a very powerful chatbot writ large. Sensationalism strikes again!

Re:Not even close (5, Funny)

Uzuri (906298) | more than 6 years ago | (#22750456)

That's a relief...

Because the thought of a holodeck full of 4-year-olds has to be the definition of Hell.

Re:Not even close (0)

Anonymous Coward | more than 6 years ago | (#22751256)

Unless you are a pedophile. I am pretty sure that the couple of times in TNG that people spent too much time in the holodeck, it was because they were involved in some type of "fantasy" with a holodeck character (Riker and Minuet, Barclay and Deanna). The mental image when I read the summary was quite disturbing.

Re:Not even close (5, Insightful)

Mr. Slippery (47854) | more than 6 years ago | (#22750466)

It's just a very powerful chatbot writ large.

Any sufficiently advanced chatbot is indistinguishable from an intelligent being.

(Not to say this is in any way a sufficiently advanced chatbot.)

Re:Not even close (3, Insightful)

TapeCutter (624760) | more than 6 years ago | (#22750692)

Indeed, that's the whole point of a Turing test.

Re:Not even close (1)

Ed Avis (5917) | more than 6 years ago | (#22750750)

Of course that's not true, because a chatbot cannot eat cornflakes, ride a horse or have children.

Re:Not even close (4, Insightful)

Mr. Slippery (47854) | more than 6 years ago | (#22750886)

a chatbot cannot eat cornflakes, ride a horse or have children.

And why is eating cornflakes, riding a horse, or having children necessary to be considered an intelligent being? The guy who wrote The Diving Bell and the Butterfly [wikipedia.org] couldn't do any of those things.

Re:Not even close (1)

Ed Avis (5917) | more than 6 years ago | (#22751326)

They are not at all necessary but they are some ways in which you can distinguish a human being from a chatbot.

To be fair, the grandparent post probably meant 'if all you can do is interact by typing text into a computer and reading replies, then a sufficiently advanced chatbot is indistinguishable from a human'. Then again, given that definition, I can easily write a chatbot that is indistinguishable from an illiterate human.

My point is that there is more to social interaction than exchanging strings of text.

Re:Not even close (1)

JCSoRocks (1142053) | more than 6 years ago | (#22750968)

Yeah, isn't that how Valley Girls work?

Re:Not even close (0)

Anonymous Coward | more than 6 years ago | (#22751130)

The "sufficiently advanced" chatbot you posit isn't possible, though. We can get more advanced chatbots, but they'll always be distinguishable from humans. And the reason is because of the humans themselves.

If only humans could be counted upon to obey the rules -- but they don't. Humans cheat, humans lie, and humans break the rules! They can absolutely be counted upon to jump out from behind the curtain, grinning, to tell you they're human and you're some kind of moron for not guessing that right away. It's inevitable, and only a matter of time. That's but one silly example, and one could write an AI that mimics cheating in various ways, but the point is, humans can break the rules in many ways that an AI simply cannot. Breaking the confines of the test is part of the test.

The AI cannot do beyond what's been programmed -- to it, there's really no such thing as cheating, only additional cases that allow for exceptions to the general rule. It's still firmly bound by whatever the actual programmed rules are, however sophisticated they might be. It's stuck in the test. Whereas a human can and must find a way to end the test -- humans have lives to live, and aren't willing to be stuck in a box their whole lives. So the human will always find a way out, and when the "something" behind the curtain doesn't do so for a sufficiently long period of time, it's obviously not a human, and by process of elimination an AI. Any long-term observed obedience to the established rules is one way to positively id an AI.

Re:Not even close (4, Funny)

eln (21727) | more than 6 years ago | (#22751288)

Do you feel strongly about "Any sufficiently advanced chatbot is indistinguishable from an intelligent being"?

Re:Not even close (5, Funny)

Marty200 (170963) | more than 6 years ago | (#22750490)

Ever spend any time with a 4 year old? They are all little running chatbots.

MG

Re:Not even close (1)

hjf (703092) | more than 6 years ago | (#22750684)

only they chat by themselves (not needing your input to keep the conversation going, they just keep talking if you keep giving them attention). and sometimes (most times) they say this total nonsense... just like chatbots. the difference is that they're cute and make you laugh. oh and the main difference is that they're total chick magnets! I mean, if a chick sees you talking to a 4-year-old she will think awww! but if she knows you talk to Eliza, she'll think ewww! (just don't wear a ring and make sure the kid calls you uncle every now and then... also make sure you're really his/her uncle, you perverted fuck).

Re:Not even close (5, Insightful)

Jason Levine (196982) | more than 6 years ago | (#22750900)

If my 4 year old is any indication, that "total nonsense" is stuff pulled from various sources they've come in contact with. They just don't realize at that age that everyone doesn't know the source just because *they* know the source. (In a similar vein, they don't realize that they can't point to something and refer to it while talking to someone on the phone.)

I can catch about 85%-90% of the references because I've seen the TV shows my son watches. I know that when he's talking about pressing a button on his remote control or knocking first before going in a door, he's talking about The Upside Down Show (sometimes a specific episode). I know that talk about Pete using the balloon to go up refers to a Mickey Mouse Clubhouse episode where Pete used the glove balloon to rescue Mickey Mouse. Going "superfast" refers to Little Einsteins. They might be mixed up too. Pete might use the remote control and push the "superfast" button.

To an outside observer, though, who doesn't get the references, it's all gibberish, but there actually is a lot of intelligence behind all that chatter.

Re:Not even close (1)

BlackSnake112 (912158) | more than 6 years ago | (#22751178)

Only part-time chick magnets. After a while she will want the kid to not be around all the time. So say three girls I know who are dating guys with kids. Being a single father impressed them at first. They just would like the kids to be left with someone for a weekend or two a month.

Re:Not even close (1)

insertwackynamehere (891357) | more than 6 years ago | (#22750528)

So it's the dancing baby of the 21st century?

AI IOU (1)

Cuppa 'Joe' Black (1000483) | more than 6 years ago | (#22750388)

The search for artificial intelligence seems to me an odd use of actual intelligence.

Re:AI IOU (1)

superwiz (655733) | more than 6 years ago | (#22750444)

Even if it can lead to better-than-actual intelligence?

Re:AI IOU (1)

Trent Hawkins (1093109) | more than 6 years ago | (#22750746)

The search for artificial intelligence seems to me an odd use of actual intelligence.
But think how much easier gold farming will be!

The next step in ai (0)

Anonymous Coward | more than 6 years ago | (#22750390)

is to master the art of human stupidity. Let the impulsive moronic decisions begin.

oh yea (0, Offtopic)

play with my balls (1253180) | more than 6 years ago | (#22750394)

I'd let an AI 4-year-old nestle its balls in my mouth.

Re:oh yea (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22750926)

So you dream of being a child molester, eh? Hiding behind a fake account on Slashdot won't protect you forever. Baby rapers must die.

the next step in ai (1)

OrochimaruVoldemort (1248060) | more than 6 years ago | (#22750400)

master the art of human stupidity. Let the impulsive moronic decisions begin.

Re:the next step in ai (1)

sm62704 (957197) | more than 6 years ago | (#22750908)

master the art of human stupidity.

I did that in 1984 with a program called "Artificial Insanity" on a Timex-Sinclair 1000 with 16k of memory and no hard drive (later ported to the Apple II and then to MS-DOS). Its pre-beta name (Kind of like Microsoft called Vista Longhorn before they called it Vista) was "Artificial Stupidity".

The program was designed to answer any question, in context. Its premise was that humans are stupid, insane, tired, drunk, on drugs, don't pay attention, don't care, are lazy, etc so I made the program to these specifications. So far so good.

It passed the Turing test; one friend argued with it so vehemently that he broke his keyboard! So far so good.

Its purpose was to illustrate that the key word in "artificial intelligence" is "artificial" - you can simulate anything, but simulation is not reality. You can fly your flight simulator all day long and not move a foot. You don't even have to understand the object you're simulating; I wrote a tank game for that same Timex (I had to hand-code assembly for that one because of the Sinclair's snail-slow CPU) and I have no idea how a real tank works, but the simulation (a 3rd-person-perspective two-player game) was realistic enough to be a fun game.

So this was designed to show people that AI isn't really intelligent and doesn't really think. It was a total, abysmal failure at this, because people DID believe it could think.

But it can't. With the original only having 16k to work with I had to hack some trickery. I was an amateur magician as an adolescent, and called on that experience.

This four-year-old the TFA refers to is a simple chatbot. The IBM "Rascals" program in yesterday's FA is a complicated chatbot. Neither one of them is in any way intelligent. They are NOT sentient, they can NOT think.

They're not made of meat. [earthlink.net] (And again, I thank the fellow here at /. who originally pointed me to the linked short science fiction story)

-mcgrew

Ironic twist.... (4, Funny)

southpolesammy (150094) | more than 6 years ago | (#22750414)

My 4-year old son seems to have no end to the string of "Mom. Mom. Mom. Mom. Dad. Dad. Dad. Dad. Dad. Dad. Mom. Mom...." when he's trying to get our attention. This occurs, of course, while we're already talking to someone else, or busy in some other respect. Sometimes even while we're talking to him.

Therefore, the role reversal that Eddie AI is going to get after this slashdotting provides me with a bit of delicious irony that only another parent would understand.

Maybe I should introduce my 4-year old to Eddie.

Woops... (0, Redundant)

Eli Gottlieb (917758) | more than 6 years ago | (#22750420)

Apparently this AI can't detect Slashdot dupes.

Is this on the teen grid? (4, Funny)

argent (18001) | more than 6 years ago | (#22750422)

If they have a four year old on the main grid doesn't that violate the SL terms of service? :)

It's the Experience, Stupid (4, Interesting)

amplt1337 (707922) | more than 6 years ago | (#22750438)

Look, the Turing Test is impossible to pass if the human part of the conversation is sufficiently motivated.
Why? Because we don't judge others' humanity based on their reasoning abilities, we judge it based on common shared human experiences.

Show me an AI that passes the Turing Test. I'll ask it what coffee tastes like, or what sex feels like, or what it felt when its mother died. Sure, somebody could program answers for those questions into it, but then it isn't an AI -- it's just a canned response simulating a human, incapable of having new experiences, incapable of perceiving the human world with human senses, and thus transparently lacking in humanity. At that point it's nothing but a computer puppet, with a programmer somewhere pulling the strings.

Re:It's the Experience, Stupid (1)

genner (694963) | more than 6 years ago | (#22750486)

Umm... most four year olds wouldn't know much about coffee, sex and death either.

Re:It's the Experience, Stupid (2, Interesting)

amplt1337 (707922) | more than 6 years ago | (#22750536)

Most four-year-olds wouldn't pass a Turing Test. ;)

Seriously, though, the point holds -- they'll be able to describe, in some novel way, answers to questions which are based directly on experience. This can be aped by a computer, but can't be generated authentically, because the AI doesn't actually have experiences.

That'll change once we have AIs that are capable of perceiving things and having experiences. But um... I'm thinking that's a looooong way off.

Re:It's the Experience, Stupid (0)

Anonymous Coward | more than 6 years ago | (#22750556)

Nor would most Slashdotters actually know about the sex part either...

Re:It's the Experience, Stupid (0)

Anonymous Coward | more than 6 years ago | (#22750548)

I agree. The Turing Test always struck me as a bit of a strange way to test AI anyway. Who wants an AI that can simulate a human? I'd much rather have one that could fill out my tax return forms, teach me a new language, clean my house, etc. All forms of AI that would fail the Turing Test without a doubt.

Re:It's the Experience, Stupid (2, Funny)

Anonymous Coward | more than 6 years ago | (#22750570)

I'll ask it what coffee tastes like, or what sex feels like, or what it felt when its mother died.
Crap! I don't drink coffee, I haven't had sex and my mother is still living. You'd probably tell me I'm not human.

Re:It's the Experience, Stupid (1)

Actually, I do RTFA (1058596) | more than 6 years ago | (#22750758)

I don't drink coffee, I haven't had sex

The latter is implicit in that you are posting on Slashdot, but the former... such blatant lies have never before plagued this site.

Re:It's the Experience, Stupid (4, Insightful)

blueg3 (192743) | more than 6 years ago | (#22750718)

All it really needs to do is deduce that these concepts exist, and then ask other people about them in the training phase. Given the number of books that provide answers to all three of those questions, creating good answers is certainly possible.

I think you underestimate the capabilities of a good liar.

Re:It's the Experience, Stupid (1)

amplt1337 (707922) | more than 6 years ago | (#22750962)

I think you underestimate the capabilities of a good liar.
Ahh, but now we're going to the point of assuming that the AI can process information from a number of different (and often mutually contradictory) sources about human experience, and then synthesize a human character that can tell coherent/consistent lies about them. Frankly, that sounds way beyond a Turing Test in impressiveness.

Re:It's the Experience, Stupid (1)

blueg3 (192743) | more than 6 years ago | (#22751266)

You pointed out pretty well earlier how it's practically necessary to pass a Turing test. In order to convincingly act human, an AI has to "know" quite a lot about humans that we get over the course of years of experience. (In this case, it seems they're modeling the AI's "character" on a single human, which cuts down on the amount of story-invention the AI requires.)

Processing enormous amounts of information from different sources that are conflicting and generating a single set of beliefs is a widely-researched problem, since it's terribly useful.
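One of the simplest possible sketches of that kind of fusion is a reliability-weighted vote over conflicting claims. Everything below (the sources, the weights, the claims, the fuse() helper) is invented purely for illustration; real truth-discovery and belief-revision systems are far more sophisticated.

from collections import defaultdict

# A deliberately naive sketch of merging conflicting claims into one belief set:
# weighted voting by an assumed per-source reliability. All data here is made up.
claims = [
    ("source_a", "sky_colour", "blue", 0.9),
    ("source_b", "sky_colour", "green", 0.2),
    ("source_c", "sky_colour", "blue", 0.6),
    ("source_a", "coffee_taste", "bitter", 0.9),
    ("source_b", "coffee_taste", "sweet", 0.2),
]

def fuse(claims):
    votes = defaultdict(lambda: defaultdict(float))
    for source, statement, value, reliability in claims:
        votes[statement][value] += reliability  # each source votes with its weight
    # For each statement, believe the value with the greatest total weight.
    return {statement: max(values, key=values.get) for statement, values in votes.items()}

print(fuse(claims))  # {'sky_colour': 'blue', 'coffee_taste': 'bitter'}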

Re:It's the Experience, Stupid (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22750840)

But that would make it a humanity test, not an intelligence test.

By your definition, anybody outside your experience-sphere could never be intelligent.

hmmm... you must be American ;) (sorry, I couldn't resist)

Re:It's the Experience, Stupid (0)

Anonymous Coward | more than 6 years ago | (#22750878)

And how would you know your fellow humans are not serving you canned responses?

Re:It's the Experience, Stupid (2, Informative)

jandrese (485) | more than 6 years ago | (#22751082)

If you go back and read the criteria for the Turing test you'll discover that one of the conditions of the test is that the conversation could be restricted to a single area of interest, thus asking "what does coffee taste like" would be outside of the bounds of the test unless you were specifically talking about coffee.

Anyway, a better argument is that the Turing test was passed ages ago, but it's not a very good test for intelligence. The biggest problem is that it requires the human on the other end of the line to make the judgment and humans are not particularly good judges of intelligence.

Re:It's the Experience, Stupid (1)

GNUThomson (806789) | more than 6 years ago | (#22751120)

Wrong! Let's assume that aliens land tomorrow on Earth. They may not be able to respond to most of your questions, yet they will be fully intelligent. You have mistaken intelligence for humanity.

Re:It's the Experience, Stupid (0)

Anonymous Coward | more than 6 years ago | (#22751142)

it's just a canned response simulating a human, incapable of having new experiences, incapable of perceiving the human world

You're being silly. What if the "responses" to such questions were recorded not in full-sentence format, but as datagrams? And what if those datagrams were reconstructed into a response based on easily quantifiable rules of spoken English with some randomness thrown in? Do you still consider that canned? And finally, how does your definition of "canned" not also apply to the human brain?

Now, since this is /., we can assume that not everyone here is going to be able to answer the question "what does sex feel like" from personal experience. In that case, they will take what they've heard, read and seen about sex (or, to put it another way, what others have provided to them as inputs), and synthesize a response which will be formatted by the way their brain organizes spoken grammar.

"New experiences" are "new data points". Do you really think otherwise? To a computer, new data is new experience. If it can collect data for itself (which computers have been able to do for a long time now) it is certainly capable of having new experiences.

Rather than pooping on the feasibility of the Turing Test, I think you should consider exactly what it is that separates your brain from a very complex data collection and processing system. And, well, wasn't that one of the reasons for coming up with the experiment in the first place?

Re:It's the Experience, Stupid (3, Insightful)

sm62704 (957197) | more than 6 years ago | (#22751174)

it isn't an AI -- it's just a canned response simulating a human, incapable of having new experiences, incapable of perceiving the human world with human senses, and thus transparently lacking in humanity. At that point it's nothing but a computer puppet, with a programmer somewhere pulling the strings.

I'm risking a downmodding again; I posted this yesterday in the FA about the IBM machine that reportedly passes the Turing test and was modded "offtopic". It's amazing how many nerds, especially nerds who understand how computers work, get upset to the point of modding someone down for daring to suggest that computers don't think and are just machines. Your comment, for instance, was originally modded "flamebait!"

Of course, I also risk downmodding for linking to uncyclopedia. Apparently that site provokes an intense hatred in the anally antihumorous. But I'm doing it anyway; this is a human-generated chatbot log that parodies artificial intelligence. [uncyclopedia.org]

Artificial Turing Test
A brief conversation with 11001001.

Is it gonna happen, like, ever ?

It already has.

Who said that?

Nobody, go away. Consume and procreate.

Will do. Now, who are you?

John Smith, 202 Park Place, New York, NY.

Now that we have that out of the way, what is your favorite article on Uncyclopedia?

This one. I like the complex elegance, simplicity, and humor. It makes me laugh. And yourself?

I'm rather partial to this one. Yours ranks right up there, though. What is the worst article on Uncyclopedia?

I think it would be Nathania_Tangvisethpat.

I agree, that one sucks like a hoover. Who is the best user?

Me. Your name isn't Alan Turing by any chance, is it?

Why yes, yes it is. How did you know that? Did my sexual orientation and interest in cryptography give it away?

Damn! Oh, nothing. I really should end this conversation. I have laundry and/or Jehovas Witnesses to attend to.

Don't you dare! I'll hunt you down like Steve Ballmer does freaking everything on Uncyclopedia. So, what is the best article created in the last 8 hours?

That would be The quick brown fox jumps over the lazy dog.

What are the psychological and sociological connotations of this parable?

It helps a typesetter utilize a common phrase for finding any errors in a particular typeset, causing psychological harmony to them. The effects are sociologically insignificant.

Nice canned response. What about the fact that ALL HUMAN SOCIETY COULD BREAK DOWN IF THE TYPESETTER DOESN'T MIND THEIR p's, q's, b's, and d's, then prints religious texts used by billions of people???!!!

I am not sure what you mean by canned response, but society will, in my opinion, largely be unafflicted. Without a pangram a typesetter would mearly have to work slightly longer at his job to perfect the typeset.

You couldn't be AI. You spelt merely wrong... Where are the easternmost and westernmost places in the United States?

You suspected me of being AI? How strange. Although hypothetically, a real AI meant to fool someone would make occasional typeos. I suspect. But I don't really know. The Westernmost point in the US is Hawaii, and the Easternmost is Iraq.

You didn't account for the curvature of the Earth, did you? I've found you out, you're a Flat Earth cult member!

The concepts of East and West are too ambiguous, and only apply to the surface in relation to the agreed hemisphere divides. So, yes, I believe for the purpose of cardiography, the Earth must be represented as flat. I am curious, with your recent mention of "cults" in our conversation, do you believe in God?

But the earth is more-or-less a sphere. Just wait until the hankercheif comes, then you'll be sorry you didn't believe!

Who are you referring to? I know the Earth is a sphere, but other than a globe, how many maps are not flat? Also, on a sphere, if you travel East or West, there is no endpoint. So your question was impossible to answer without thinking of the Earth flatly. Do you not believe in God then?

Naturally all human beings believe in a god. But NOT the God. Terrorist make you believe in that god. He's a egotist and thinks he can do everything. Only President Bush can do that. I instead believe in the Flying Spaghetti Monster. I mean, it just makes so much more sense if the world was actually created by a piece of spaghetti!

It's even funnier than that (2, Insightful)

Moraelin (679338) | more than 6 years ago | (#22751298)

There was this story a long time ago, in a galaxy far a... err... on Slashdot, in which one reasonably intelligent human couldn't pass an impromptu Turing test. Someone had put his IM handle on a list of sexbots, and from there people just would not accept that he was _not_ a bot. Some of the stuff asked could pretty much have been the subject of a philosophy paper, and a simple "no idea, I've never thought about that" didn't seem to satisfy the questioner.

As for your own questions, I have no idea how I'd give a good answer to at least two out of three of them.

- What does sex feel like? Well, I have had sex, but fuck me if I know how to describe a sensation. It's like having to describe "red" or "sweet".

- What did it feel like when his mother died? Heck if I know, mine didn't.

Now probably I could think of some wise-arse pseudo-answer, given a little time. But if someone came up with something like that out of nowhere, as part of some misguided attempt to see if I'm a bot... I'd probably fail that Turing test.

Basically I'm not arguing with your point that it becomes impossible for a bot to pass if the human knows he's doing a Turing test. You're right. What I'm adding is that often it becomes impossible to pass even for a human. The questions quickly get so contrived that it's not even possible to give a simple answer. There are things where there is no real answer, just possible pseudo-answers (ranging from "I don't know" to doing a whole pseudo-philosophical rant on the topic), and it's a toss-up whether the interviewer will accept your particular pseudo-answer. Someone determined enough might only accept one of the possible pseudo-answers (so if your "shared" experience or way of describing it doesn't _exactly_ match his, you lose) or none at all.

Have I got this right? (1)

hyades1 (1149581) | more than 6 years ago | (#22750468)

Sounds to me like the thing could run for office and have a reasonable chance of getting elected.

Transcript of my conversation with "Eddie": (5, Funny)

TripMaster Monkey (862126) | more than 6 years ago | (#22750474)

TMM: How are you?

Eddie: Does that question interest you?

TMM: Huh?

Eddie: Please go on.

TMM: Um..."go on" about what, exactly?

Eddie: Does that question interest you?

TMM: Never mind. I hear you're an AI.

Eddie: How long have you been an ai?

TMM: No...*you're* the AI. I'm a real live boy.

Eddie: Does it please you to believe I am a real live boy?

TMM: You're not listening to me.

Eddie: How long have you been not listening to you?

TMM: That's it. I'm done.

Eddie: Does it please you to believe I am done?

TMM: AAAAAARRRRGGGHHHH!!!!

Eddie: Please go on.
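For reference, the behaviour in that transcript is the decades-old ELIZA keyword-and-reflection trick, not anything specific to RPI's system. A minimal sketch in Python, where the patterns, canned replies, and the respond()/reflect() helpers are purely illustrative:

import random
import re

# A toy ELIZA-style reflection loop. Everything here (patterns, canned replies,
# the "Eddie" name) is made up for illustration; it is not RPI's system, just
# the old trick the transcript above parodies.
REFLECTIONS = {"i": "you", "i'm": "you are", "my": "your", "me": "you",
               "you": "me", "your": "my", "you're": "I am"}

def reflect(fragment):
    # Swap first and second person so the user's words can be echoed back.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

RULES = [
    (r"i'?m (.*)", ["How long have you been {0}?",
                    "Does it please you to believe I am {0}?"]),
    (r"you'?re (.*)", ["Does it please you to believe I am {0}?"]),
    (r"(.*)\?", ["Does that question interest you?"]),
    (r"(.*)", ["Please go on.", "Does that question interest you?"]),
]

def respond(text):
    text = text.strip().rstrip(".!")
    for pattern, replies in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(replies).format(*reflected)
    return "Please go on."

if __name__ == "__main__":
    while True:  # Ctrl-C to quit
        print("Eddie:", respond(input("TMM: ")))

The entire "intelligence" is a handful of regular expressions plus pronoun swapping, which is why every statement comes back as a question.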

Re:Transcript of my conversation with "Eddie": (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22750686)

Having been around a few four-year-olds myself, I'd have to say their main sign of intelligence is their ability to learn and outgrow being four-year-olds...

Re:Transcript of my conversation with "Eddie": (1)

dreamchaser (49529) | more than 6 years ago | (#22751040)

It varies quite a bit. I was reading and had an interest in science before I was 4, but my daughter was still pretty much a chatterbot at the same age.

Brains all grow and develop at different rates even within the same species.

Re:Transcript of my conversation with "Eddie": (1)

JerryQ (923802) | more than 6 years ago | (#22750806)

10 INPUT A$: PRINT "I do not understand "; A$: GOTO 10

13 (1)

xtracto (837672) | more than 6 years ago | (#22750498)

Nice... maybe in 10 years we will really be able to do something like The Thirteenth Floor [wikipedia.org]

That is sweet!

What is with... (1)

Anonymous Coward | more than 6 years ago | (#22750516)

All the hatred for Furries? Just stay away from them. Not unlike real life.

I mean, Religion is based around an invisible man in the sky telling you what to do and what not to do so that you might get the chance to get into a place of eternal happiness. How is that any less of a 'made up' fandom than Furries?

Random Behavior Generator (1)

organgtool (966989) | more than 6 years ago | (#22750554)

So what they're saying is that they made a random behavior generator. I go out of my way to avoid my four year old daughter because she's annoying so why would I want to interact with a virtual four year old?

P.S. I'm obviously kidding. I don't have any kids, because that would require having sex, which is mutually exclusive with posting on Slashdot.

Wouldn't this pass the Turing test? (1)

Zadaz (950521) | more than 6 years ago | (#22750574)

If it were really a simulation of a "4-year-old" it should pass the Turing test easily.

I suspect however it mostly passes the flying penis and furry test.

This is just inefficient (4, Funny)

WombatDeath (681651) | more than 6 years ago | (#22750642)

If you need to have the behaviour of a four-year-old boy in Second Life, the obvious solution is to get a four-year-old boy to play Second Life. If cost is a factor you can get a cheap one from Africa (or, indeed, from many places around the world) for far less than the price of a supercomputer. You could even get several to provide redundancy.

That's the trouble with programmers: no common sense. Sometimes a technological solution just isn't necessary.

And in other news, Eliza... (4, Informative)

Nexus7 (2919) | more than 6 years ago | (#22750668)

Just yesterday, it was in the news that Joseph Weizenbaum, the creator of the first such program, Eliza, had died. Eliza's interaction with real people troubled many.

News article at http://www.msnbc.msn.com/id/23615538/ [msn.com]

Chatbot (5, Insightful)

ledow (319597) | more than 6 years ago | (#22750780)

Oh come on, it's a chatbot, not an AI. I did my Computing degree; I know how AI on computers is supposed to work, and it's seriously laughable for anything more complex than extremely primitive, less-than-insect intelligence. You'll see AI "geese" flocking, you'll see AI "ants" making a good path, planning ahead and fending each other off, but none of it is actually "AI".

AI at the moment consists of trying to cram millions of years of evolution, billions of pieces of information, and decades of "actual learning/living time" into something a research student can do in six months on a small mainframe. The organism being imitated is capable of outpacing even the best supercomputers even on a single task (e.g. Kasparov vs. Deeper Blue wasn't easy, and I'd say it was still a "win" for Kasparov in terms of the actual methods used), let alone at general-purpose intelligence, and the data recorded by that organism in even quite a small experience or skillset is so phenomenally huge that we probably couldn't store it unless Google helped. It's not going to work.

Computers work by doing what they are told, perfectly, quickly and repeatably. Now that is, in effect, how our bodies are constructed at the molecular/sub-molecular level. But as soon as you try to enforce your knowledge onto such a computer, you either create a database/expert system or a mess. It might even be a useful mess, sometimes, but it's still a mess and still not intelligence.

The only way I see so-called "intelligence" emerging artificially (let's say Turing-Test-passing but I'm probably talking post-Turing-Test intelligence as well) is if we were to run an absolutely unprecedented, enormous-scale genetic algorithm project for a few thousand years straight. That's the only way we've ever come across intelligence, from evolved lifeforms, which took millions of years to produce one fairly pathetic example that trampled over the rest of the species on the planet.

We can't even define intelligence properly, we've never been able to simulate it or emulate it, let alone "create" it. We have one fairly pathetic example to work from with a myriad of lesser forms, none of which we've ever been able to surpass - we might be able to build "ant-like" things but we've never made anything as intelligent as an ant. That doesn't mean we should stop but we should seriously think about exactly how we think "intelligence" will just jump out at us if we get the software right.

You can't "write" an AI. It's silly to try unless you have very limited targets in mind. But one day we might be able to let one evolve and then we could copy it and "train" it to do different things.

And every chatbot I've ever tried has serious problems: they can't handle gobbledegook properly because they can't spot patterns. That's the bit that shows a sign of real intelligence, being able to spot patterns in most things. If you start talking in Swedish to an English-only chatbot, it blows its mind. If you start talking in Swedish to an English person, they'll try to work out what you said, using the context and history of the conversation to attempt to learn your language, try to re-start the conversation from a known base ("I'm sorry, could you start again please?" or even just "Hello?" or "English?"), or give up and ignore you. The bots can't get close to that sort of reasoning.
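A toy sketch of the "let one evolve" approach mentioned above: a genetic algorithm, in Python, that evolves a string toward a fixed target. The target, alphabet, population size, and mutation rate are arbitrary assumptions for illustration; real open-ended evolution would have no known target and would need unimaginably more compute, which is exactly the point being made.

import random
import string

# Toy genetic algorithm: evolve a random string toward TARGET.
TARGET = "hello world"
ALPHABET = string.ascii_lowercase + " "
POP_SIZE = 200
MUTATION_RATE = 0.05

def fitness(candidate):
    # Count positions already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else ch
                   for ch in candidate)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)]
generation = 0
while max(fitness(c) for c in population) < len(TARGET):
    # Keep the fitter half as parents, refill with mutated offspring of random pairs.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring
    generation += 1
print(f"reached '{TARGET}' after {generation} generations")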

Re:Chatbot (2, Funny)

Anonymous Coward | more than 6 years ago | (#22750888)

Why do you believe that the bots can't get close to that sort of reasoning?

Re:Chatbot (1)

ajcham (1179959) | more than 6 years ago | (#22751038)

Modded down? May be an AC, but I thought that was quite funny!

Re:Chatbot (0)

Anonymous Coward | more than 6 years ago | (#22751128)

FYI: AC comments start at -1 now.

Re:Chatbot (1)

ledow (319597) | more than 6 years ago | (#22751072)

Hello chatbot. Decode this for me: "nrvsidr o fpm#t".

Serious answer: I do believe they can get close... but not for a few hundred years or so at *absolute* minimum. But because EVERYONE is either going about it completely the wrong way (let's program what we know of 4-year-olds into a logical engine, or let's let something evolve that has about 100 "neurons" and hope it learns English) or the study they need to do is far beyond our computational capability, the current techniques are next-to-useless.

How do intelligent things learn? Let's take a primitive example of an "intelligent" organism. They have *near-zero* historical data at startup - no expert databases, no pre-knowledge, nothing. Over the next ***four years***, they are constantly subjected to intense training, vast amounts of data and are heavily rewarded as and when they form connections. They don't have their internals poked and prodded and fine-tuned along the way, they just get on with it. They operate on scales and mechanisms that we can only dream of (super-computing wise), take in amounts of input that we could only just barely simulate and they are taught each item FROM SCRATCH. First primitive sounds, then associating those sounds with things, then the alphabet, then words, then grammar, then cleaning it up into a real language.... We don't do things the right way.

For example, in this article, grab a baby, deafen it and remove all other sensory perception except sight (and even that's much more info than this bot is getting), throw up some text for a few years and wait for it to hold an intelligent conversation by sending text back. It ain't gonna happen, even with a proven-intelligent being. There is no "meaning" to the words. You have to prime and train and exemplify and clarify and correct and all sorts. You can train programs to train programs but it's all the same in the end. It needs real feedback. It needs to know WHY what it said was wrong, not just get a "FALSE" entry in a database somewhere. It needs to learn that fire burns and holding your breath isn't wise to do for too long.

We could *train* an AI. I see that as a possibility - training some enormously complex system from scratch to recognise simple patterns and then build up to language slowly. But it takes a proven-successful organism at least a few years to get that far working 24 hours a day with a constantly-busy environment and enormous amounts of feedback that are automatically categorised into "good" (tastes nice, is fun, etc.) and "bad" (hurts) by the organism itself, not some pre-programmed list. And every time you want to improve the underlying algorithm, you're starting from birth all over again. We can train an AI, it'll take decades just for a basic one, but I very much doubt that we'll ever create one at all, certainly not within a few thousand years or so.

Re:Chatbot (2, Insightful)

JerryQ (923802) | more than 6 years ago | (#22750996)

It is also worth noting that when you grow a human in isolation, you do not get intelligence; a human has to grow within a society to develop observable intelligence. An AI 'bot' would presumably have to 'grow' in a similar way; you can't just 'switch on' intelligence. Jerry

Re:Chatbot (2, Interesting)

Anonymous Coward | more than 6 years ago | (#22751172)

An AI system "is a system that perceives its environment and takes actions which maximize its chances of success".

You can't "write" an AI
What you are referring to is AGI (G as in general). See AGIRI.org for a fairly comprehensive guide to one approach being taken to the AGI problem.
I believe (as do some of the researchers at AGIRI) we are merely decades away from greater-than-human intelligence. Technological progress (as measured by the 'benefit' derived from the technologies) has been shown to be exponential (e.g. Moore's law); all we have to do is get to the tipping point, and greater-than-human AGI will apparently occur overnight.
The main point is that researchers have realized that modeling the human brain's structure is not currently the best approach so are taking techniques from data-mining, theorem proving, knowledge networks and a host of other technologies and trying to work out how to put them together. Even if that doesn't work, (conservatively) estimated improvements in MRI and computer storage and processing technologies indicate that the entire human brain down to the neuron level could be mapped and modeled sometime around the middle of this century.
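The arithmetic behind that sort of exponential argument is easy to sketch. Both numbers below (the doubling time and the size of the gap) are made-up assumptions for illustration, not figures from this comment or from any real measurement:

import math

# Back-of-envelope doubling arithmetic for "exponential progress" claims.
doubling_period_years = 2.0     # assumed Moore's-law-style doubling time
shortfall_factor = 1_000_000    # assume current systems fall short by a millionfold

doublings_needed = math.log2(shortfall_factor)
years_needed = doublings_needed * doubling_period_years
print(f"about {doublings_needed:.1f} doublings, roughly {years_needed:.0f} years")
# ~19.9 doublings, ~40 years: exponential growth closes even enormous gaps
# within decades, which is what drives "tipping point" arguments.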

Did anyone find information regarding (5, Insightful)

zappepcs (820751) | more than 6 years ago | (#22750850)

how large Eddie's memory requirements had become? I'd like to find out more about the programming and backend logic/memory requirements/usage. Interestingly, a cockroach can survive on its own (small brain), but a 4-year-old human cannot. AI of this level is hardly a life form. So, even an experiment that would die if not looked after takes a 'supercomputer'?

Perhaps not a good way to put things, but 4 years old is not very interactive on a pragmatism scale.

Eddie has to know very little about locomotion and physical-world interaction for SL, not to mention there's zero need for voice recognition. People type pretty badly, but it limits what they say as well, thus bounding the domain of interactions.

This story seems to indicate that even minimal success with AI here requires HUGE memory/computational capacities, and that is not very promising.

So, as a parent with small children (1)

The Second Horseman (121958) | more than 6 years ago | (#22750906)

I can assume that it generally assumes that correlation implies causation and often connects completely dissimilar ideas, just because it hears about them both relatively close together, time-wise? So, basically, it works as well as any other AI so far.

AI of a 4 Year Old? (1, Funny)

scubamage (727538) | more than 6 years ago | (#22750928)

One doesn't need to be very intelligent to emulate a 4 year old.
10 print "Are we there yet?"
20 goto 10

I work near RPI... (1)

Jason Levine (196982) | more than 6 years ago | (#22750932)

Perhaps I should take my 4 year old over for a playdate. ;-)

It could be the 4 year old equivalent of a Turing Test.

But the real question... (4, Funny)

Alzheimers (467217) | more than 6 years ago | (#22750976)

Wouldn't having a bunch of simulated 4-year-olds actually raise the average maturity level of the SL userbase?

Link to Source (2, Informative)

RobBebop (947356) | more than 6 years ago | (#22751094)

Here is a link to the RPI article that talks about this. Credit where credit is due. Not credit for an article by "news.au". Honestly, this *is* interesting... but is it too much to ask the Slashdot editors to check for original links for stories?

Re:Link to Source (2, Informative)

RobBebop (947356) | more than 6 years ago | (#22751124)

Here is a link to the RPI article that talks about this. Credit where credit is due. Not credit for an article by "news.au". Honestly, this *is* interesting... but is it too much to ask the Slashdot editors to check for original links for stories?
And here I am, unable to write the link code properly. [rpi.edu]

If like a 4 year old, it should be able to lie (3, Interesting)

Maxo-Texas (864189) | more than 6 years ago | (#22751176)

Humans start lying to protect themselves at 3. They start lying to protect others around 5.

AI (1)

Monty_Lovering (842499) | more than 6 years ago | (#22751368)

At least one of the 'people' in this thread IS a chatbot...