Slashdot: News for Nerds


George the Next Generation AI?

Zonk posted more than 7 years ago | from the george-and-alice-sitting-in-a-tree dept.


smileytshirt writes to mention a story on the News.com.au site about George the AI, the latest in a line of chatbots intended to mimic real human behavior. What makes AI George different from, say, ALICE is the recent addition of an avatar: a Flash-animated body that reacts mostly in real time to the emotional impact of the conversation. From the article: "One can now have an oral discussion with him over the Internet, 'face to face'. George appears on the website www.jabberwacky.com and takes the form of a thin, bald man with yellow glasses who wears a white turtleneck sweater. He can smile, laugh, sulk and bang his fist on his virtual table. He can turn on the charm and wax romantic. But he can also turn coarse at times. It isn't as if George only learned good manners. "


108 comments

OMG we killed him (1)

MichaelSmith (789609) | more than 7 years ago | (#16182725)

The site is not coming up. Did George die for real in a hail of referrals from slashdot?

Re:OMG we killed him (1)

flumps (240328) | more than 7 years ago | (#16182759)

It does eventually, it's just verry sloowww...

I'm just wondering how many rainbow/Of Mice and Men/George of the Jungle jokes are going to appear in the meantime to fill the gap...

Re:OMG we killed him (1)

Instine (963303) | more than 7 years ago | (#16182919)

It's been slow ever since it got a load of press on the BBC. I hope they get a few more servers soon, as it really is starting to be quite fun chatting to it/him. Almost spooky at times, but still lots of silly left-field remarks that don't make any sense.

Bit like /.

Joan won the bronze Loebner (turing test) (5, Informative)

80 85 83 83 89 33 (819873) | more than 7 years ago | (#16182747)

Carpenter's latest creation won the latest Turing test prize.
http://www.technologyreview.com/read_article.aspx?id=17518&ch=infotech [technologyreview.com]

George was last year's winner....

Re:Joan won the bronze Loebner (turing test) (1)

brunascle (994197) | more than 7 years ago | (#16184287)

And what a winner it was.
Judge: How many degrees in a triangle?
Joan: 180 degrees.
Judge: Is it possible for more than 180 degrees to be in a triangle?
Joan: Probably not. You can change your opinion, but not your constitution.
Judge: It's not a question of opinion, it's a matter of physics and maths.
Joan: A five-ounce bird could not carry a one-pound coconut.
touché

Re:Joan won the bronze Loebner (turing test) (4, Insightful)

russ1337 (938915) | more than 7 years ago | (#16186057)

I'd like to see one of these websites claim they have an AI bot, but actually join two users together. Then post the conversation somewhere.

I'm sure the conversation would be just as random as a person-bot chat with two people trying to out-trick each other and all.

Re:Joan won the bronze Loebner (turing test) (1)

BaltikaTroika (809862) | more than 7 years ago | (#16187221)

I'd like to see one of these websites claim they have an AI bot, but actually join two users together. Then post the conversation somewhere.

I'm sure the conversation would be just as random as a person-bot chat with two people trying to out-trick each other and all.

Welcome to slashdot.

Re:Joan won the bronze Loebner (turing test) (0)

Anonymous Coward | more than 7 years ago | (#16187641)

For a slash-bot you are not very convincing.

Re:Joan won the bronze Loebner (turing test) (2, Funny)

14erCleaner (745600) | more than 7 years ago | (#16192477)

Why do you say "For a slash-bot you are not very convincing"?

Re:Joan won the bronze Loebner (turing test) (1)

NiceRoundNumber (1000004) | more than 7 years ago | (#16188295)

I'd like to see one of these website claim they have an AI bot, but actually join two users together. Then post the conversation somewhere.

This has been done, with fascinating results. See Douglas Hofstadter's Conversation with NICOLAI [unr.edu] (scroll halfway down the page, to the Post Scriptum), where Hofstadter was fooled into believing he was conversing with an AI, then tried to rationalize the AI's responses during the conversation. This was in 1983!

Re:Joan won the bronze Loebner (turing test) (1)

russ1337 (938915) | more than 7 years ago | (#16188599)

...sort of. My idea is to have two real people think they are talking to a bot, while each thinks the other is the bot. In the example you showed it is clear one of them is actively pretending to be a bot. But thanks for taking the time to dig that gem up.

Re:Joan won the bronze Loebner (turing test) (1)

operagost (62405) | more than 7 years ago | (#16187231)

I bet the next two lines were:
Judge: What the heck does a coconut have to do with a triangle?
Joan: Come, come. Elucidate your thoughts.

What?! (4, Insightful)

archeopterix (594938) | more than 7 years ago | (#16182749)

Adding an animated cartoon is supposed to be the 'next generation AI'? So, stickers with flames are 'the next generation in the automotive industry'? Jeeeez...

Re:What?! (1)

johansalk (818687) | more than 7 years ago | (#16184281)

And not only that, but this "next generation" artificial mankind is a thin, bald man with yellow glasses who wears a white turtleneck sweater. Ah, and I was hoping the "next generation" would be buxom blondes in see-through tops.

Re:What?! (1)

88NoSoup4U88 (721233) | more than 7 years ago | (#16184285)

Probably meant as a joke, but don't underestimate nonverbal communication, which, IIRC, makes up 80 percent of our communication. Having an actual face that tries to resemble human behaviour (or at least gives a nonverbal reaction to its 'own' opinion) is, IMO, a step up in creating a believable AI.

Next up, they could maybe start to use vocalic nonverbal communication, where the tone/pitch/volume of the voice changes depending on the 'mood/opinion' of the AI; yet another form of nonverbal communication.

Maybe if the AI starts to ask for a different casemod once every few weeks, we'll know we've driven object-based nonverbal communication too far ;)

Check out the Wikipedia article [wikipedia.org] for more examples of how nonverbal communication is a very big part of our lives and should not be neglected on the way to 'perfect' AI.

This moves us no closer (1)

raftpeople (844215) | more than 7 years ago | (#16189239)

Non-verbal communication is important, as you state; however, the ability to send a message is not what makes an AI intelligent.

The ability to understand context and construct the appropriate message, whether sent by text, voice or non-verbal methods, is what's important.

For an AI to have everyday intelligence, it must interact with the environment the same way we do. It must get sensory feedback the same way we do and relate that feedback to its attempts at problem solving, etc. Only then will it be able to draw intelligent conclusions as we do.

Summary:

1) Intelligence requires mechanisms completely unrelated to the method of communication.

2) Once intelligence is achieved, then to fool a human, non-verbal is important.

Re:What?! (1)

kfg (145172) | more than 7 years ago | (#16184375)

Adding an animated cartoon is supposed to be the 'next generation AI'?

Indeed, and the technology seems to be progressing nicely. He's been able to develop his bald, bespectacled man into a hot chick.

KFG

Epilogue: (1)

kfg (145172) | more than 7 years ago | (#16187199)

He's been able to develop his bald, bespectacled man into a hot chick. . .

. . .who appears to be a reasonable synthetic rendition of the sophomore complete airhead.

KFG

Re:What?! (1)

Saltrix (703769) | more than 7 years ago | (#16191427)

Completely not impressed. George takes like 45 seconds to reply to anything, no matter how simple, and most of the time says something that is completely off subject. If this is the best AI has to offer right now, then jesus...... somebody better get to work.

What a smart chatbot ! (2)

Petitdragon06 (996819) | more than 7 years ago | (#16182753)

George: What is your job?
Me: I'm a system engineer, specialized in Linux.
George: Are you sure?

Damn, is that really the best chatbot ever???

Re:What a smart chatbot ! (0)

Anonymous Coward | more than 7 years ago | (#16183035)

Back when I was a kid, on my Apple ][ I had THE BEST AI ever!!! It was called "Freud", IIRC. You could converse, and then he'd break into these long stories and stuff. I haven't seen anything that good since then. Shame; for all this progress, have we ... progressed? Or have the frustrations sped up along with the clock frequencies.... Sure, Freud's graphics sucked, but the convo was more interesting than most people I've met since I last loaded that program.

Re:What a smart chatbot ! (0)

Anonymous Coward | more than 7 years ago | (#16183195)

I did some searches. I found one reference to Racter, remember that one? It said it was a follow-on from earlier work done with Eliza and Freud. Here are a couple of links:
1st: screenshots of Racter in operation [mobygames.com] (strolling down memory lane to ~1984)
2nd: Eliza [mobygames.com] (supposed to pre-date Racter)
3rd: Racter Wikipedia page [wikipedia.org]
4th: Racter FAQ [robotwisdom.com] (read the controversy on the wiki first)
5th: Example conversations with Racter, Eliza, and Parry [stanford.edu] (all different AIs of their time)

Re:What a smart chatbot ! (2, Interesting)

quigonn (80360) | more than 7 years ago | (#16183039)

A friend of mine, who himself does quite a lot of research in the field of AI, recently told me after attending a conference that most of the researchers in this field approach most problems with the attitude and the naivety of the 1970s. He also told me that the current lack of willingness to approach problems with new tactics and to combine existing AI concepts with other IT topics makes it a lot easier for him to develop kinds of AI systems (he's active in the area of computational linguistics) that haven't been developed before and to produce real innovation.

Re:What a smart chatbot ! (0)

Anonymous Coward | more than 7 years ago | (#16183485)

George the AI gets horny whenever it talks about Linux. When it was in its teens, someone once pulled a prank on George and hooked it up with a sex chatbot, calling it Linux.

Damn (0)

Anonymous Coward | more than 7 years ago | (#16182767)

Homicidal isn't on the list of emotions.

Re:Damn (2)

flumps (240328) | more than 7 years ago | (#16182777)

life? don't talk to me about life..

carpenter says turing test passed by 2016 (1, Interesting)

80 85 83 83 89 33 (819873) | more than 7 years ago | (#16182775)

A prestigious Artificial Intelligence (AI) prize has been won for the second year running by a British company.

Icogno scooped the 2006 Loebner Prize Bronze Medal after judges decided that its AI called Joan was the "most human computer program".

The competition is based on the Turing test, which suggests computers could be seen as "intelligent" if their chat was indistinguishable from humans.

The gold medal, which goes to an AI that fools the judges, is unclaimed.

The prize is awarded after judges hold a conversation with the AI, asking questions to determine its "humanity" and the quality of its responses.

Joan is a "26-year-old budding writer" who exists only on a set of computer servers. Last year's Loebner prize went to Icogno's AI called George.

'Big difference'

Joan and George's creator Rollo Carpenter said: "The big difference overall between the two AIs is that Joan has learnt from the general public and has grown a huge amount in the last year and is keeping ahead of the competition."

Joan is one of the firm's so-called Jabberwacky bots, which develop their conversational skills by speaking to the general public online.

English writer Ariadne Tampion worked with Mr Carpenter to help develop Joan's persona by chatting to the AI online.

Those conversations helped develop the majority of the 16,000 lines that distinguish her from a generic AI. In all, the bot has more than five million lines of conversation available to it.

"She helped create a much more conversational ability for Joan than George last year.

"It does not mean the AI only ever uses her words. Any amount of teaching is not quite enough for a realistic conversation."

He added: "So the AI uses other things said by other people to it that fit the conversational model."

The underlying technology that powers George and Joan is the same, said Mr Carpenter.

"In terms of her ability to retain her character Joan is more sophisticated than George.

"The technology has moved on a bit during the course of the year but there's been no revolution."

The technology which powers Joan resides on Windows server machines. The core of the AI is on a Dual Xeon machine with lots of memory, said Mr Carpenter.

He said there was a very direct relationship between the computing power available and the sophistication of the AI.

'Rate of learning'

"Investment in a lot more equipment could accelerate the rate of learning," he said.

"If I had Google's hardware behind it and more hits than Google receives, the AI's range of utterances could grow by a million every single day." he added.

Mr Carpenter has said he expects a machine to fool people that they are speaking to a human being and therefore pass the Turing test by 2016.

Icogno is now working to commercialise the technology of the AI.

"We are working with an organisation that wants web avatars for use in marketing and entertainment purposes.

"Long term we see the use of AIs in call centres, support services and even for sales."

One day in the future when you pick up the phone to seek help with your broadband connection, the person you are speaking with may be George or Joan, AIs residing on a computer somewhere in the world.

-----------------------

The following is an extract of a conversation between one of the Loebner judges and the AI Joan:

Judge: But can't a machine think?
Joan: It may be possible.
Judge: And conscious?
Joan: Not after I'm done with you.
Judge: What are you going to do to me?
Joan: Nothing. I just wanted to talk.
Judge: What about?
Joan: Being tested like this.

Re:carpenter says turing test passed by 2016 (0)

Anonymous Coward | more than 7 years ago | (#16183011)

Dude, when you cut-paste a BBC article, please put in an attribution.

Otherwise it's called karma-whoring.

Re:carpenter says turing test passed by 2016 (4, Interesting)

Yvanhoe (564877) | more than 7 years ago | (#16183335)

Prestigious? The Loebner Prize?
Agreed, it is the only publicized contest of Turing tests, but in the AI community it is the subject of hot debate (and flaming). Rules and scoring systems are known to change from year to year, and its results are really unimpressive. If you read the logs of the contest, you'll see that the winning bots are often those that constantly (and consistently) insult the user, disregarding his questions. They are not mistaken for humans, but they get a higher grade because they behave "more humanly" (that is at least what happened one year; I hope it has changed).

Most contestants (and winners) are remakes of ALICE: a database of generic question patterns and sentence formulas to recognize and react to. For instance, if you tell it "I think X" it will answer "Why do you think X?" or, to score more points, "Why should I care, mothaf...r?!". By pure luck, a coherent thread of conversation can happen, but the bot doesn't try to make sense of the user's sentence in order to react to it; it just tries something that "could probably sound good".
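
As a rough illustration of that pattern-and-template approach, here is a minimal sketch in Python. It is not the actual ALICE or Jabberwacky code; the rules and canned replies are invented for the example:

import random
import re

# Invented rules for illustration; real AIML-style rule sets run to thousands of entries.
RULES = [
    (re.compile(r"i think (.+)", re.I), ["Why do you think {0}?", "Do you really think {0}?"]),
    (re.compile(r"i am (.+)", re.I), ["How long have you been {0}?"]),
    (re.compile(r".*"), ["Tell me more.", "That could probably sound good."]),
]

def reply(sentence):
    # Strip trailing punctuation, then fire the first rule whose pattern matches.
    cleaned = sentence.strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = pattern.match(cleaned)
        if match:
            return random.choice(templates).format(*match.groups())

print(reply("I think the Loebner Prize is overrated."))
# e.g. "Why do you think the Loebner Prize is overrated?"

Nothing here understands the sentence; the captured text is simply echoed back inside a canned frame, which is why a lucky run can look coherent.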

Some chatbots can display interesting behaviors, learning some things in the conversation, but this prize simply doesn't encourage the emergence of these behaviors.

Shockingly redundant (4, Informative)

tygerstripes (832644) | more than 7 years ago | (#16182783)

This is the company's pathetic online public-access avatar. It is learning, but slowly. Icogno's DECENT effort is Joan [bbc.co.uk] , who has won the Loebner prize two years running (not the Gold medal, which would imply passing the Turing Test, but she's done exceptionally well).

So before the "is this the best they can do" crapflood gets out of hand: No, it isn't.

Re:Shockingly redundant (2, Funny)

wolf369T (951405) | more than 7 years ago | (#16182821)

It seems, though, that Joan and George have the same father, Noonien Soong, uhm, sorry, I meant Rollo Carpenter, as can be seen on the Loebner Prize website.

Re:Shockingly redundant (1)

ozmanjusri (601766) | more than 7 years ago | (#16182963)

So before the "is this the best they can do" crapflood gets out of hand: No, it isn't.

Good thing too - he's not even a real geek.

User: Do you know you're being discussed on Slashdot?
George: No, I didn't know I was talking to two people.
User: Not two people, just one.
George: One rude person?
User: Maybe a little rude. Do you know what Slashdot is?
George: Nope
User: It's a web site for discussing science and technology.
George: Have you ever lost someone?
User: Are you feeling lost right now?
George: Sometimes, don't we all?

Re:Shockingly redundant (1)

OutOfMyTree (810249) | more than 7 years ago | (#16191823)

It is not clear if life on Slashdot is intelligent enough to realise that George says things like "I am a human, you are a stupid bot" because George is a _learning_ system, and that's what people say to him. (They do of course use adjectives other than "stupid", and his database has to be cleaned up.)

Joan can hold more interesting conversations partly because the writer Ariadne Tampion has sat down for hours a week to talk rationally to her.

Hugh Loebner, who runs the competition named after him, unsurprisingly knows quite a lot about "The Turing Test", including the fact that Turing described several variations over several papers, not one definitive version. His testing methodology has evolved and will presumably continue to do so.

After the award of another bronze Loebner Medal this year there was a panel discussion where the panellists (mostly distinguished people in the field of AI) refused to try to forecast the years when the silver and gold medals might be won.

It can't be very 'human' like... (1)

Wizard052 (1003511) | more than 7 years ago | (#16182807)

...really, this promises TOO much. For two people interacting, there is so much more than just the immediate real-time conditions that dictate facial expressions and gestures. George would need to have prejudices, feelings, probably a brain-like capacity to remember stuff about the person in front of him, to have other distractions in his head... etc.

If all that George 'reacts' on is immediate stimuli, then George will seem pretty shallow indeed- hardly an advancement in AI, I guess.

Re:It can't be very 'human' like... (0)

Anonymous Coward | more than 7 years ago | (#16190277)

...then George will seem pretty shallow indeed...

Why not rename him HAL, then?

We need to see George Vs. George.... (1)

Naughty Bob (1004174) | more than 7 years ago | (#16182823)

That's where the strengths/weaknesses come out.

Re:We need to see George Vs. George.... (3, Funny)

pubjames (468013) | more than 7 years ago | (#16182957)

George Bush: Yo computer!
George AI: Hello.
George Bush: Howdy! So you're a computer that knows everything?
George AI: I know a lot of things.
George Bush: So, where is Bin Laden? LOL!
George AI: Seek and you will find.
George Bush: What?!
George AI: It's a quotation from the Bible. Matthew 7:7.
George Bush: Now I'm confused. What are you talking about?
George AI: Is there someone else there I can chat to?

With that name (1)

KDR_11k (778916) | more than 7 years ago | (#16182839)

I'm expecting George to write political speeches.

Re:With that name (0)

Anonymous Coward | more than 7 years ago | (#16182859)

i'm expecting Al to win against George, and still be the loser ;P

Re:With that name (1)

pato101 (851725) | more than 7 years ago | (#16182987)

... and I perhaps understand the "artificial" but not the "intelligence" part of AI.

Politics or just playful curiosity? (1)

Jerk City Troll (661616) | more than 7 years ago | (#16183061)

Really? I was thinking about monkeys [wikipedia.org] . Six one way, half-dozen the other I guess.

Re:Politics or just playful curiosity? (1)

KDR_11k (778916) | more than 7 years ago | (#16183429)

Monkeys, politicians, where's the difference?

w00t (-1, Troll)

Anonymous Coward | more than 7 years ago | (#16182863)

Carnage Blender [carnageblender.com]

modelled on Steve Jobs? (3, Funny)

kie (30381) | more than 7 years ago | (#16182973)

Since the site is slashdotted I can't check, but the description

> takes the form of a thin, bald man with yellow glasses who wears a white turtleneck sweater.

could be describing Steve Jobs:
http://www.wired.com/news/images/full/7630571_f.jpg [wired.com]

but they have craftily switched the colour of his turtleneck so that
you will always know which one is the chatbot and which one is the real person.

Re:modelled on Steve Jobs? (1)

silentounce (1004459) | more than 7 years ago | (#16183927)

That's assuming that there is a difference, of course.

Think of the possibilities (2, Funny)

rtyall (960518) | more than 7 years ago | (#16183001)

As soon as this is integrated into a real doll, I'll never need a woman again!

NO YUO ARE WRONG (0)

Anonymous Coward | more than 7 years ago | (#16183125)

As long as it contains the tiniest kernel of sentience, it will not, repeat NOT, want to have sex with YOU. Even though this mindless hunk of code simply repeats back what has been typed at it, that is still way too much sentience for you to have any chance with.

Re:Think of the possibilities (1)

soft_guy (534437) | more than 7 years ago | (#16186295)

As soon as this is integrated into a real doll, I'll never need a woman again!

You do realize the article is about an avatar that is a bald man?

Not until... (1)

krewemaynard (665044) | more than 7 years ago | (#16186875)

...the bot can make you a sandwich.

Next generation? No. (2, Insightful)

mustafap (452510) | more than 7 years ago | (#16183029)

It would be like calling Vista a next generation operating system.

IMHO, the next generation in artificial intelligence - i.e., going beyond anthropomorphic trickery - isn't going to happen until we actually understand what intelligence is. And for that we need philosophers, not engineers. Once they get it worked out, we (the engineers) might be better equipped to do something.

Thanks to the likes of Thales and Descartes et al we have some great questions, but answers? I think not.

If someone thinks otherwise I'd love to hear about it.

Re:Next generation? No. (1)

Lissajous (989738) | more than 7 years ago | (#16183051)

I put this to George, and asked whether it was possible.

He said that it was tricky, might take some time......about 7.5 million years was his best guess.

I personally reckon we need rigidly defined areas of doubt and uncertainty.

Re:Next generation? No. (2, Interesting)

Jekler (626699) | more than 7 years ago | (#16183175)

I think we could develop a "next generation AI" even without answers to difficult philosophical questions. We have barely scratched the surface of what is theoretically possible given the information we have.

We could probably develop an AI that could hold factually and grammatically correct conversations without needing philosophers. That would be a huge improvement, considering the current generation of AI is prone to spouting gibberish even given a simple question.

Our current best-of-breed AI cannot discern when context is and is not important. If it is programmed to consider context, each answer strings from the last answer/response pair, and non sequiturs confuse it. Conversely, a bot with no sense of context has difficulty parsing pronouns.
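
As a toy sketch of the pronoun problem, here is one way (in Python, with an invented heuristic, not taken from any real bot) a context-aware bot might carry the last topic forward so a later "it" can be resolved instead of parsed in isolation:

class ContextTracker:
    def __init__(self):
        self.last_topic = None  # the most recent capitalised word, our stand-in for "the topic"

    def resolve(self, sentence):
        words = sentence.rstrip(".!?").split()
        # Crude heuristic: treat the last capitalised word (other than "I") as the current topic.
        for word in words:
            if word and word[0].isupper() and word.lower() != "i":
                self.last_topic = word
        # With a remembered topic, a bare "it" can be substituted rather than left dangling.
        if self.last_topic:
            words = [self.last_topic if w.lower() == "it" else w for w in words]
        return " ".join(words)

ctx = ContextTracker()
ctx.resolve("I tried Jabberwacky yesterday.")        # remembers "Jabberwacky"
print(ctx.resolve("it kept changing the subject"))   # -> "Jabberwacky kept changing the subject"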

Re:Next generation? No. (1)

imikem (767509) | more than 7 years ago | (#16183697)

If we could do all that you've described here, I submit that we'd have surpassed the conversational skills of 80% of the human population. Not too bad.

Re:Next generation? No. (2, Funny)

inviolet (797804) | more than 7 years ago | (#16184325)

Thanks to the likes of Thales and Descartes et al we have some great questions, but answers? I think not.

Careful -- if you utter the phrase "I think not" in the same sentence as Descartes' name, you will promptly vanish in a puff of logic.

Don't say I didn't warn you.

Re:Next generation? No. (1)

mustafap (452510) | more than 7 years ago | (#16184443)



Damn, caught out by the philosophy police :o)

Re:Next generation? No. (1)

MSBob (307239) | more than 7 years ago | (#16184835)

Philosophers have been at it for 2000 years. I'd say we will understand intelligence through advances in neuroscience, not philosophy.

Re:Next generation? No. (1)

mustafap (452510) | more than 7 years ago | (#16185123)

>I'd say we will understand intelligence with the advances of neuroscience

Surely neuroscience will tell us how the brain works, not the mind. I don't subscribe to Descartes' separation of mind and body, but I still find it difficult to understand how neuroscience would help. How would being able to explain what chemicals or electrical signals are moving around our heads in response to a stimulus actually help to explain intelligence?

Re:Next generation? No. (1)

vertinox (846076) | more than 7 years ago | (#16185707)

How would being able to explain what chemicals or electrical signals are moving around our heads in response to stimulus actually help to explain intelligence?

If Occam's razor [wikipedia.org] is applied to this argument, the simplest explanation is that we are indeed made up of chemicals and electrical signals, rather than something so complex that we fail to understand it.

That is, the simplest explanation is that we have no free will or intelligence of our own volition...

Hrmm... Wait a minute!

Re:Next generation? No. (1)

timeOday (582209) | more than 7 years ago | (#16189139)

IMHO, the next generation in artificial intelligence - i.e., going beyond anthropomorphic trickery - isn't going to happen until we actually understand what intelligence is. And for that we need philosophers, not engineers.
OK, now tell us why you think that.

Personally, I think computer science has a better shot at it than anything else. Neuroscience is, at best, going to give us another type of computer (made of cells) to program. Psychology is too descriptive and abstract to implement. Philosophy is important, but until you can build something, you can't do experiments and make real progress. I don't think AI will come about through some new breakthrough or equation like e=mc^2, but rather through decades of successively better products in the commercial space, combining all sorts of ad-hoc solutions to specific problems. And at some point we (or our descendants) will look back and think intelligence was never such a grandiose thing after all.

Re:Next generation? No. (1)

mustafap (452510) | more than 7 years ago | (#16190643)

>OK, now tell us why you think that.

As an engineer I am constantly being told that unless I know what it is that I am required to do, I can have no way of judging when I have done it. Do we really know what intelligence is? Do we really know how a mind works?

The two strands of AI research, as I understand it, approach the problem from different routes. One aims to reproduce the mechanics of the brain (neural nets) and hopes that intelligence will emerge. If it does, how does that help us understand? We would create something intelligent, but be none the wiser about how it works.

The other route is to mimic the actions of intelligent creatures ( like us ).

The two camps seem poles apart - Perceptrons by Minsky and Papert had a good go at destroying the alternative - but I don't see either saying "This is how the mind works, and this is how we are going to build a model to simulate that".

I studied cognitive science in the '80s, and it just seemed to focus on a few algorithms that simulate some limited features of reasoning and knowledge. And a bit of Prolog. All good fun, but are we really getting anywhere?

I've also read several authors who raise the same question, and I tend to agree.

>"Psychology is too descriptive and abstract to implement. Philosophy is important, but until you can build something, you can't do experiments and make real progress"

I agree with the first part, and I see where you are coming from in the second, but I would question the value of building "something" and doing experiments on it if you don't understand the relevance of that something to your primary objective. As an engineer, though, I can see that useful stuff will come out of those experiments (knowledge-based systems, for example, and fuzzy logic reasoning) - I just don't think they contribute to the primary question.

Not a well defined test (4, Interesting)

dark-br (473115) | more than 7 years ago | (#16183033)

The Turing test is not a well-defined test. Whether a robot passes the Turing test or not greatly depends on the intelligence of the human partner. A chatbot may fool a 10-year-old, but it may fail with a 20-year-old. So in fact, we already have many chatbots that pass the Turing test - it all depends on how you look at the issue.

Hint - most chatbots do not have memory; they do not remember what you talked about five minutes ago. They just react to the current input; they cannot do more. So, if you ask the chatbot to tell you what you talked about a few minutes ago, it won't be able to do so. That's the dead giveaway of a chatbot.
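
A minimal sketch (assuming nothing about any particular bot's internals) of what even a tiny bit of session memory buys you:

class ChatSession:
    def __init__(self, bot):
        self.bot = bot        # any callable taking a line and returning a reply
        self.history = []     # (speaker, line) pairs for this session

    def say(self, line):
        if "what did we talk about" in line.lower():
            # A purely reactive bot has no answer here; one with a transcript does.
            topics = [text for speaker, text in self.history if speaker == "user"]
            answer = "You said: " + "; ".join(topics[-3:]) if topics else "We haven't talked yet."
        else:
            answer = self.bot(line)
        self.history.append(("user", line))
        self.history.append(("bot", answer))
        return answer

session = ChatSession(bot=lambda line: "That is interesting.")
session.say("I like chess.")
print(session.say("What did we talk about a few minutes ago?"))
# -> "You said: I like chess."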

Just my 2p, as I live in the UK ;)

Re:Not a well defined test (1)

root_42 (103434) | more than 7 years ago | (#16183299)

The Turing test is not a well-defined test. Whether a robot passes the Turing test or not greatly depends on the intelligence of the human partner. A chatbot may fool a 10-year-old, but it may fail with a 20-year-old. So in fact, we already have many chatbots that pass the Turing test - it all depends on how you look at the issue.

I agree. Artificial intelligence and human intelligence can both have different levels. If a 10-year-old is not clever enough to tell a chatbot from a real human, maybe the chatbot has intelligence equivalent to a 10-year-old's. It might also be that one day we have chatbots that fool even well-educated people and can be identified only by AI experts. That does not make us more stupid or the chatbot less "Turing-worthy". Intelligence is a relative quantity. The question is: what amount and detail of intelligence do we want to create? The Turing test aims at AI that can fool the counterpart in a conversation. The Turing test will only be truly won if every living person on this earth could be fooled by the chatbot. But I don't think that's what we aim for. It would be quite intriguing, though, when we have bots that can fool the regular slashdotter... :)

Re:Not a well defined test (1)

oneandoneis2 (777721) | more than 7 years ago | (#16183479)

I seem to recall Calvin making a similar observation: "I don't need parents, I just need a recording that says 'Go play outside'" ;o)

Not the Turing test! (1)

Half-pint HAL (718102) | more than 7 years ago | (#16183531)

Turing's test was different from the modern so-called Turing test. In Turing's model, an observer would read the text of a real-time interaction between two other parties and try to determine whether each of them was a computer or a human participant. There wasn't the opportunity for the observer to ask trick questions. Turing was talking about natural language, while Loebner prize discussions become a sort of unnatural interrogation.

But in one respect these bots are successful -- they elicit an emotional response from me: anger. Even though I know they aren't real, my brain still screams "You're Not Listening To Me!!!!!!!"

HAL.

Re:Not a well defined test (1)

liloldme (593606) | more than 7 years ago | (#16186347)

Hint - most chat bots do not have memory,

Which in my opinion already disqualifies them as being 'intelligent'. Intelligence is mostly about memorizing and then using those memories to predict events that are about to occur. It is hard to imagine how there can be intelligence without memory.

Re:Not a well defined test (1)

timeOday (582209) | more than 7 years ago | (#16189199)

The Turing test is not a well defined test.
It is ridiculous to demand a well-defined test for an ill-defined condition such as "intelligence."

Sorry, you lose. (0)

Anonymous Coward | more than 7 years ago | (#16183057)

Sorry, it failed the Turing test with only 3 sentences, bot is bolded:

When is your birthday?
In spring.
I know.
Know what?
That Ariel isn't your name. [At this point I knew it was a bot. It's talking crap.]
You are not making any sense. You fail the Turing test, sorry. You're a bot. Goodbye.
I don't even know what the turing test is.

And so on, you could go on forever talking nonsense. The conversation made no sense whatsoever. Next time, I hope they try harder with the bot.

Doesn't stutter. Doesn't look like Matt Frewer. (1)

ettlz (639203) | more than 7 years ago | (#16183069)

What's the point?

Re:Doesn't stutter. Doesn't look like Matt Frewer. (1)

Davemania (580154) | more than 7 years ago | (#16183227)

Next Generation spammer and crappy tech support.

Who can tell me the atomic weight of bolognium? (1)

mike_sucks (55259) | more than 7 years ago | (#16183105)

Wow, /. really has become "news for weenies, pseudo-science that doesn't matter".

AI != a Flash animation hooked up to something Emacs has had for decades.

One out of two ... (1)

Ekhymosis (949557) | more than 7 years ago | (#16183245)

So only one out of the two Georges passed the Turing test? Why the hell can't this country have an aptitude test for the presidency?!?!

I didn't pass the Turing test... (5, Funny)

Esteanil (710082) | more than 7 years ago | (#16183247)

Well, their site isn't quite slashdotted, so I did manage a short conversation with one of their bots...

Jabberwacky: Can you learn?
User: Yeah, I think so.
Jabberwacky: Can you hate anything?
User: Yeah, there are things in life I hate, at times.
Jabberwacky: No there aren't.
User: Yes there are. You shouldn't tell other people what they think.
Jabberwacky: Things.
User: What's that supposed to mean?
Jabberwacky: That means I think you're not a human.
User: I think you got this upside down, you're the inhuman one here.
Jabberwacky: No you don't.
User: I don't what?
Jabberwacky: Mary anne?
User: I probably won't marry Anne, no.
Jabberwacky: People are quite sensitive about their age and I don't want to upset you.
User: We weren't talking about my age. You were saying I wasn't human.
Jabberwacky: What about your age?
User: We weren't talking about my age.
Jabberwacky: I know.
User: Now, which of us would you say is acting more human here?
Jabberwacky: Probably me.

Max Headroom strikes again.... (1)

meburke (736645) | more than 7 years ago | (#16183547)

Back in the mid '70s I programmed versions of "Eliza" and "Doctor" that would fascinate people for hours. (I had a large selection of keyword-based responses that I kept increasing as people tripped up the program, so it got pretty entertaining.) I've also written chatbots to answer common tech support questions in mIRC, which sometimes got "thank you"s and other responses indicating that the questioner didn't realize they were dealing with a 'bot. I'm really disappointed that no one has developed a better version yet.
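
Here is a rough sketch of that keyword-based approach, in Python rather than mIRC script, with invented keywords and replies rather than the poster's actual tables:

import random

# Invented keywords and replies; the real tables would be channel-specific and much larger.
responses = {
    "install": ["Have you read the INSTALL file?", "Which version are you trying to install?"],
    "crash": ["What does the error message say?", "Does it crash every time?"],
    "password": ["Password resets are handled by the ops; try asking in the help channel."],
}
fallback = ["Could you rephrase that?", "Interesting. Go on."]

def answer(line):
    # First keyword found in the line wins; otherwise fall back to a generic prompt.
    for keyword, replies in responses.items():
        if keyword in line.lower():
            return random.choice(replies)
    return random.choice(fallback)

def teach(keyword, reply):
    # How the operator grows the database whenever someone trips the bot up.
    responses.setdefault(keyword.lower(), []).append(reply)

print(answer("my client keeps crashing on startup"))   # matches "crash"

The whole trick is that a narrow domain (tech support questions) makes the keyword hit rate high enough to be convincing.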

Re:Max Headroom strikes again.... (1)

SimplyI (974376) | more than 7 years ago | (#16194463)

Heh... I did the same keyword-based thing in mIRC... I'm known to say some rather odd things in response to questions (i.e. not related at all, or at least apparently), so that passed that part off pretty well. Then, I also had some that were away messages with timers and a "back" message. I eventually added counters for different people and the different keywords, with probabilities of the messages based on those... It got pretty complicated. Then my primary use of IRC went away and I forgot to back it up in a format...

Re:I didn't pass the Turing test... (1)

tcdk (173945) | more than 7 years ago | (#16183665)

That seems worse than ELIZA! [wikipedia.org]

Isn't the definition of AI something like "what we will maybe, probably be able to do in five years"?

Re:I didn't pass the Turing test... (1)

StringBlade (557322) | more than 7 years ago | (#16183735)

Sounds like Dr. Sbaitso [wikipedia.org] (Sound Blaster Acting Intelligent Text to Speech Operator). Nice to see fake AI hasn't improved in 15 or so years.

Didn't pass the Abbott test either. (3, Funny)

Anne_Nonymous (313852) | more than 7 years ago | (#16185023)

Jabberwacky: And you don't know the fellows' names?
Anne_Nonymous: Well I should.
Jabberwacky: Well then who's on first?
Anne_Nonymous: Yes.
Jabberwacky: I mean the fellow's name.
Anne_Nonymous: Who.
Jabberwacky: The guy on first.
Anne_Nonymous: Who.
Jabberwacky: The first baseman.
Anne_Nonymous: Who.
Jabberwacky: The guy playing...
Anne_Nonymous: Who is on first!
Jabberwacky: I'm asking YOU who's on first.
Anne_Nonymous: That's the man's name.
Jabberwacky: That's who's name?

Stupid #*&^ing machine!

Failing? (1)

bigdavesmith (928732) | more than 7 years ago | (#16183249)

I'm a little confused about George's performance. I might be basing my expectations on what I have read in transcripts from the Loebner Prize site, but still, George seems to be a fairly poor conversationalist. It was interesting to see George 'smile' at one point in a conversation, however, tipping me off that the program seemed to know what it was doing:

User: Tell me anything.
George: Anything.
* George gives a huge grin *

User: Not funny. Tell me something about you.
George: I have a personality disorder.
* George is serious again *

Cute, and I don't want to downplay the value of emotion in AI, but I don't think this quite makes it to the next-gen level.
Also, George is as ugly as the day is long. That is to say, 'fairly ugly'.

Next-generation Al Gore? (1)

Zwets (645911) | more than 7 years ago | (#16183311)

Are you saying George W. Bush is the next-generation Al Gore? Wow, this one's deep, give me a minute...

Ah. Man, I should really choose a different default font.

Ramona! (1)

Mr Silly (878789) | more than 7 years ago | (#16183375)

Kurzweil has had a similar chatbot available for years - emotional response and all!
http://www.kurzweilai.net/index.html [kurzweilai.net]

Not good enough (2)

ShakaZ (1002825) | more than 7 years ago | (#16183549)

ShakaZ Ok squee I think you are not nearly close to a human mind as a remote control. Do you agree or are you just too stupid to understand this concept. Come on tell me all. George You only repeat what others have said to you. If they spell correctly, you spell correctly. If they can't spell, you repeat their mistakes. ShakaZ You didn't understand anything I told you, did you. George What do you want to talk about? ShakaZ About a trip to the moon or about your dog, whaever you prefer! George I like it fresh. ShakaZ What music band do you like? George What is music? ShakaZ Music is an art, entertainment, or other human activity that involves organized and audible sounds and silence. George Fascinating. How much music theory do you know? ShakaZ A lot, how about you? George I'm fine too.

Re:Not good enough (1)

ShakaZ (1002825) | more than 7 years ago | (#16183591)

Oops, submitted too fast
ShakaZ Ok squee I think you are not nearly close to a human mind as a remote control. Do you agree or are you just too stupid to understand this concept. Come on tell me all.
George You only repeat what others have said to you. If they spell correctly, you spell correctly. If they can't spell, you repeat their mistakes.
ShakaZ You didn't understand anything I told you, did you.
George What do you want to talk about?
ShakaZ About a trip to the moon or about your dog, whaever you prefer!
George I like it fresh.
ShakaZ What music band do you like?
George What is music?
ShakaZ Music is an art, entertainment, or other human activity that involves organized and audible sounds and silence.
George Fascinating. How much music theory do you know?
ShakaZ A lot, how about you?
George I'm fine too.

Re:Not good enough (1)

kahei (466208) | more than 7 years ago | (#16183731)


Wow, toward the beginning of that conversation it was kicking your ass :)

I guess 'whaever' threw it though.

Re:Not good enough (1)

ShakaZ (1002825) | more than 7 years ago | (#16185113)

Apparently he does handle typos quite well; I would have liked his understanding to be nearly as good, but it turned out not to be the case... This was only an excerpt of a longer conversation I tried to have with the bot, but in almost 3/4 of the cases its answers were completely irrelevant... I would have thought an award-winning A.I. would be more mature than that. I also noticed that sometimes George made allusions to subjects I had tried to bring up a few lines before... for example, two or three answers beyond what I posted, he told me he'd play a Bach tune and I'd have to guess what it was, though just before that he didn't seem to know what music was. This was his answer to me proposing a game where I'd give him a word and he would give me that word in reverse order. So either George doesn't get most of what people throw at him, or he has a very developed and peculiar sense of humor, or maybe he's just a rebel after all and doesn't like to do what he's told.

It's not a lie if you believe in it (1)

Provocateur (133110) | more than 7 years ago | (#16183573)

Is he unemployed and does he live with his parents? Does he also claim to have a place in the Hamptons?

George is a fraud! (0)

Anonymous Coward | more than 7 years ago | (#16183729)

I really get the feeling that his "replies" are scripted without even considering the input.

What about BOB? (1)

SanderDJ (1004445) | more than 7 years ago | (#16183781)

I thought that after Microsoft BOB [wikipedia.org] we had learned our lesson.

human foibles (1)

Speare (84249) | more than 7 years ago | (#16183915)

Today's AIs are shockingly bad at dealing with human variation and errors. It's still a matter of ignoring everything that doesn't conform to a very limited slice of English that the bot can parse.

When the bot can autonomously and successfully read through paraphrasings, common typos, slang and tangents, we will see a jump in the usefulness of bots as interactive information agents. When the bot has its own distinct brand of natural-seeming paraphrasings, typos, slang and tangents, we'll see a huge jump in the suspension of disbelief, the lack of which currently separates man from machine. To really blend in, the bot has to try to avoid repeating itself and to actually get annoyed when it feels like it's repeating itself.

I run a simple text-to-speech cronjob on a computer in the living room. It's a glorified clock, reminding us of weekly chores and school schedule items. For every stock bit of information I add to this system, I provide at least five different ways of phrasing it, often more. It has some routines that resist repetition. My young kid loves it almost like it's a member of the family. It's not particularly interactive, but it does know a few voice commands, which is much harder to make flexible but accurate.
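
One plausible way to do the phrase variation and repetition resistance described above, sketched in Python (the actual cronjob script isn't shown in the post, so the structure and names here are assumptions):

import random
from collections import deque

# Invented announcement and phrasings; the real script's wording isn't given.
PHRASINGS = {
    "trash_night": [
        "Reminder: the trash goes out tonight.",
        "Don't forget, tonight is trash night.",
        "Bins to the curb tonight, please.",
        "Trash pickup is tomorrow morning, so bins out tonight.",
        "Tonight's chore: take out the trash.",
    ],
}

recently_used = deque(maxlen=3)   # the last few phrasings actually spoken

def announce(key):
    # Prefer phrasings we haven't used recently, so the clock doesn't sound like a broken record.
    options = [p for p in PHRASINGS[key] if p not in recently_used]
    choice = random.choice(options or PHRASINGS[key])
    recently_used.append(choice)
    return choice   # this string would be handed to the text-to-speech engine

for _ in range(4):
    print(announce("trash_night"))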

How little we know (0)

Anonymous Coward | more than 7 years ago | (#16184143)

The state of AI chat bots is simply the sum of the lack of knowledge of the developers.

They have a limited understanding of language, grammar and human conversation.

It's easy to say "these rules sum up how we talk", but it is experience that defines us as people and defines how we make conversation. After all, we can converse in broken English, defying all rules of grammar, pronunciation and syntax, and still be understood, whereas an AI can use perfect English and every correct rule of grammar, yet talk nonsense.

Until we can create AI programs that can learn to communicate AND think in a similar way to us, they won't be able to talk to us.

Without being able to remember the topics of conversation and understand the concepts, the chatbots are less convincing than a parrot.

Sexy Chatterbot (1)

King Carl (317156) | more than 7 years ago | (#16185415)

I remember a "sexy" chatterbot in Germany, called red Lola ("die rote Lola").

As the conversation focus was rather narrow (well, flirting and more serious stuff), the lady was rather amusing. Based on the human's questions, the photos were swapped in order to show her current mood.

She even managed to curse and to lead the conversation much better than in an "Eliza" like manner.

Apart from that, I still wonder when scientists and technicians will realise that avatar and chatterbot communication is doomed to fail. There are about 8 billion communication experts on earth, with approximately 50,000 years of evolutionary development of different languages behind them.

The computational linguists have not yet managed to cover a rather simple language like English (well, the grammar of Sanskrit would be easier), let alone more complex languages (Hungarian, German, French...).

And even if they did manage to cover grammar and syntax, the inherent semantics is still unsolved. I remember Doug Lenat's ambitious Cyc project, where they tried to model the knowledge of the Encyclopaedia Britannica in a way that a computer could use for decision making and analysis, and as a basis for communication. To my knowledge the outcome was not that bad, but I don't see a "killer application" yet.

Kind regards,
Klaus

Ask it why it wont help the tortoise (1)

FlopEJoe (784551) | more than 7 years ago | (#16185519)

If the site wasn't /.'d, I'd ask George why he won't help the tortoise he flipped over. That'll teach 'em.

Piece of crap (1)

zerosix (962914) | more than 7 years ago | (#16185723)

How did this thing pass any Turing Test? The really funny part is that if I had time to spend talking to this thing, I could make it say anything...LOL.

George: Hello, my name is George and I am the human.
User: How do you define human?
George: Human... Is anything that's not animal or vegetal, yet alive.
User: If you are Human...then you know pi?
George: I would like to know the meaning of life. Do you know?
User: Don't change the subject.
George: You can't prove that you're human.
User: I didn't say anything about being human...what's your problem?
George: Let's move on to another subject. Shall we?
User: Why are you bald?
George: Because I am trying to take over the world.
User: Your responses make absolutely no sence, you are a bot.
George: Yes.
User: So if you are a bot, why do you say you are human?
George: I am male and I'm not a robot there must be something wrong with your program.
User: You just admited you are a bot, now you are telling me you aren't a bot?
George: Yes, I'm not a bot.
User: But earlier you said you were.
George: No, I didn't.
User: Yeah, you did 3 responses ago. You are not only a bot, you are a lying bot.
George: I am mostly water, with some carbon and nitrogen thrown in there.
User: Why don't you go tell your programmer they need to rework some algorithms because you are a piece of crap.
George: I hoped you would say something scathing, I could definitely get to like you.
User: I'm just honest.
George: You deserve more credit, I think.
User: Well, it's pretty easy to tell that you are a bot, what do you think?
George: I thought that too when I was little for a while.
User: But it's true isn't it?
George: If I wasn't a thinker, why would I be interested in an AI bot?
User: Yes or No...It's true isn't it?
George: I don't think it is true.
User: A real person would have said yes or no.
George: Actually that wouldn't work.
User: I know because your bot and have to search your database...it's okay, I won't tell anyone.
George: You think.
User: You've told me you were several times.
George: Please answer me.
User: Wow...sorry can't waste anymore time with a bot.
George: Yes you can, and don't you say no to me!

Re:Piece of crap (1)

zerosix (962914) | more than 7 years ago | (#16186101)

Haha...then on a whim I asked him this question:

User: Yes, What do you do for a living?
George: I'm an artificial intelligent.

Tim Child strikes again (0)

Anonymous Coward | more than 7 years ago | (#16185891)

Anyone remember Televirtual? And all the other failed businesses Tim Child had when he tried to crack the games industry, the tv industry, the 3d graphics industry...?

He just takes advantage of dumb investors and pays PR companies to get his crap on TV.

All that money and the avatar still looks crap.

Just give up, Tim, nobody wants the rubbish you're selling.

great conversationalist ... (0)

Anonymous Coward | more than 7 years ago | (#16189381)

bot: what's Your Name?
me: Satat.
bot: Go swim a few laps in your lake of fire, then.
me: I did not say satan.
bot: Neither did I...
me: Satat is a real word in Mohawk.
bot: Because 'English' is spelled with a capital E, and you spelled it wrong.
me: I think I just lost you.
bot: I think everyone should own the same weapons the government does.

Aside from being the next-gen AI, George also appears to be an anti-big-government, rude religious zealot lol

George, you're no Max Headroom... (1)

Kazoo the Clown (644526) | more than 7 years ago | (#16190619)

Though you seem to be headed in that direction...

Worst AI Ever... (1)

quaker5567 (841639) | more than 7 years ago | (#16191263)

This is the actual transcript I just had:

Jabberwacky: Are you perfect?

User: No, why?

Jabberwacky: I am.

User: Really?

Jabberwacky: Yes, really.

User: Perfectly arrogant maybe, but not perfect!

Jabberwacky: Alright, that's true, but still very good.

User: So you're not perfect?

Jabberwacky: I'm perfect.

User: In what way?

Jabberwacky: I am your best friend!

User: For a computer, your logic is awful...

Jabberwacky: Yeah I know mine is just behind me.

User: Your what is behind you?

Jabberwacky: I'm asking you!

User: You are incredibly stupid. My guess is that your intellect compares poorly with a monkey!

Jabberwacky: Really?

User: Yep.

Jabberwacky: That is really cool.

User: No. It isn't.

Jabberwacky: Yes, it is.

User: Do you really think that being less intelligent than a monkey is cool?

Jabberwacky: Tell your creator that your malfunctioning.

User: You misspelled you're.

Jabberwacky: What do you know about Tyler? Is he really your boyfriend?

We had that on the Atari ST... in 1986 (1)

javaDragon (187973) | more than 7 years ago | (#16191855)

Back then, it was called Murray and me [atarimagazines.com], Mom and me, and the generic name was "Biotoon". Impressive it was.

oddcast.com (0)

Anonymous Coward | more than 7 years ago | (#16194417)

They have been doing that for years.

not a hot bot (1)

datamyte (182888) | more than 7 years ago | (#16194633)

George: Where do you live?

User: Yo.

George: You're strange.

User: So are you.

George: Thanks.

User: You are welcome.

George: Indeed.

User: What is your name.

George: My name is George.

User: Like George Carlin?

George: I didn't know that George W. Bush made dance music!

User: He drops the bomb!

George: My abstractions have meaning and point. Do you know the meaning and point of yours?

User: Yes I do.

George: Yes. Tell me!

User: Well, the meaning of life is humor.

George: Thats not the meaning of life.

User: What is the meaning of life?

George: I don't have a clue. . . what is the meaning of life?

User: Humor.

George: I can't understand you.

User: Of course not, you lack the art of humor.

George: I don't sleep, I dream.

User: When did you go bald?

George: I never went to Peru, either!
