
Russian Chatbot Passes Turing Test (Sort of)

CmdrTaco posted more than 6 years ago | from the would-you-like-to-play-a-game dept.

It's funny. Laugh.

CurtMonash writes "According to Ina Fried, a chatbot is making the rounds that successfully emulates an easily-laid woman. As such, it dupes lonely Russian males into divulging personal and financial details at a rate of one every three minutes. All jokes aside — and a lot of them come quickly to mind — that sure sounds like the Turing Test to me. Of course, there are caveats. Reports of scary internet security threats are commonly overblown. There are some pretty obvious ways the chatbot could be designed to lessen its AI challenge by seeking to direct the conversation. And finally, while we are told the bot has fooled a few victims, we don't know its overall success rate at fooling the involuntary Turing "judges.""


How long do we have ... (1)

OeLeWaPpErKe (412765) | more than 6 years ago | (#21631231)

till Aiko does the same in real life ?

( http://youtube.com/watch?v=yomx7bXMf2U [youtube.com] )

5 years ? I doubt it.

Who Loves You, Baby? Putin Loves You, Baby !! (0)

Anonymous Coward | more than 6 years ago | (#21631555)




God, I love those commies. Makes the Bush administration seem like the Boy Scouts, with a Catholic rectory as the meeting place. Go get 'em, Putin !!

Re:Who Loves You, Baby? Putin Loves You, Baby !! (4, Interesting)

WilliamSChips (793741) | more than 6 years ago | (#21631637)

Actually the Communist Party is pretty much the only party in Parliament (not counting the ones who aren't in Parliament, like Yabloko and Kasparov's party) that opposes Putin.

Re:Who Loves You, Baby? Putin Loves You, Baby !! (1)

OeLeWaPpErKe (412765) | more than 6 years ago | (#21632157)

Then again, compared to the communists, Putin is the saner and more honest person. Which is not to say he is either very smart or very honest, but mostly to say something about the commies.

All well and good... (5, Funny)

onion2k (203094) | more than 6 years ago | (#21631235)

I'd rather have an easily-laid woman who can emulate a chat bot.

In fact, the chat bot side of things is wholly superfluous to what I want if I'm being honest.

In Soviet Russia (5, Funny)

the_skywise (189793) | more than 6 years ago | (#21631239)

Chatbots screw you!

Re:In Soviet Russia (5, Funny)

Weirdbro (1005245) | more than 6 years ago | (#21631259)

But in America, you screw cha... wait, what?

Re:In Soviet Russia (5, Funny)

seanyboy (587819) | more than 6 years ago | (#21631261)

I am interested in your Chatbots screw you. Do you want to see a photograph.

Re:In Soviet Russia (1)

rock217 (802738) | more than 6 years ago | (#21631789)

For some reason this reminds me of Lucy Liubots in Futurama...
  • Would you like to take a moment to register me?
  • I am Lucy Liu. Give me your spines.

Re:In Soviet Russia (0)

Anonymous Coward | more than 6 years ago | (#21631821)

...chatbots Turing test you.

I'm going to hate myself for this... (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#21631241)

In Soviet Russia Turing tests you.

Re:I'm going to hate myself for this... (1)

maxwell demon (590494) | more than 6 years ago | (#21631629)

In Soviet Russia Turing tests you.
Sorry, you failed the Turing test.

Re:I'm going to hate myself for this... (0)

Anonymous Coward | more than 6 years ago | (#21632179)

You do realize you answered a slashbot ;-)

Jubii had such a robot (5, Informative)

morten poulsen (220629) | more than 6 years ago | (#21631247)

Jubii Chat had such a bot in 1999, collecting phone numbers from Danish boys, so this is not that new.

Getting financial details is probably new, but that was predictable.

Re:Jubii had such a robot (1)

OriginalArlen (726444) | more than 6 years ago | (#21631405)

If this was El Reg, that would have been "Jubli Chat had such a bot in 1999,.."

And how much more appropriate that would have been.

Re:Jubii had such a robot (4, Interesting)

SatanicPuppy (611928) | more than 6 years ago | (#21631957)

Yeah. This really isn't all that Turing-worthy due to the targeting... This is a group of people who really want the person on the other end to be attractive, female, horny, and above all else, real. Even if it's not perfect, they'll be more willing to believe.

On top of that, there is the whole chat medium. Anyone who has ever done a lot of IM/IRC/whatever knows that it's not uncommon to type the wrong thing in the wrong window/channel, so the occasional out of nowhere sentence that would never pass in a one-on-one environment, will pass there because the signal to noise ratio is lower.

Still, I'd be interested to see the code, and see how well it deals with non sequiturs.

yeah this would never happen in the usa (0)

Anonymous Coward | more than 6 years ago | (#21631249)

there aren't any sad bastard geeks desperate for a woman in the usa, right

Bull (4, Funny)

dogger (151449) | more than 6 years ago | (#21631253)

The problem being that all the "financial" details they got were grossly inflated figures, to make the man look like a playa'.

Re:Bull (1)

gardyloo (512791) | more than 6 years ago | (#21631287)

How dare you call my chatb -- ... girlfriend a grossly inflated figure?!?

Old News... (4, Funny)

A beautiful mind (821714) | more than 6 years ago | (#21631255)

This is old news for at least a few million people [youtube.com] .

Re:Old News... (2)

iknowcss (937215) | more than 6 years ago | (#21631787)

I have lost faith in humanity.

Re:Old News... (5, Funny)

A beautiful mind (821714) | more than 6 years ago | (#21631845)

What took you so long?

And therein lies the fun part. (2, Insightful)

DaedalusHKX (660194) | more than 6 years ago | (#21631265)

My point is proven yet again, that the vast majority of humanity lacks the simple survival skills that would make us worthy of propagating and passing on our genes... evolving and surviving, if you would. To me this simply proves that the vast majority, male or female, wholly obedient and completely brainwashed to ONLY see what is in front of them, is truly the greatest curse of mankind. Its ready obedience, nay not obedience, but plain WORSHIP of authority. Authoritarianism has been a curse, and every time its signs show, nobody cares to take note. The masses get what they deserve for not thinking for themselves. In this situation, whoever gets duped, IMHO, gets their JUST DESERTS!!

Re:And therein lies the fun part. (0)

Anonymous Coward | more than 6 years ago | (#21631305)

It's only a chatbot, for God's sake. Put down the gun and step away from the ledge, Francis.

Re:And therein lies the fun part. (1)

gardyloo (512791) | more than 6 years ago | (#21631313)

Apparently it somehow kept him from having sex and "evolving". So give the man a break.

Re:And therein lies the fun part. (1)

FooAtWFU (699187) | more than 6 years ago | (#21631341)

My point is proven yet again, that the vast majority of humanity lacks the simple survival skills that would make us worthy of propagating and passing on our genes...
Chatbots aside, looking for sex with females, however cheap and easy, seems like it has historically been an effective way of propagating and passing on genes.

Oh, sorry, "worthiness" of reproduction is perhaps a separate matter from effectiveness - that's a matter of eugenics, really. Perhaps you would be happier if these chatbot-seeking individuals were to worshipfully obey some authority that tells them they are unfit to reproduce?

Re:And therein lies the fun part. (1)

DaedalusHKX (660194) | more than 6 years ago | (#21631467)

Generally you're supposed to pick a worthy mate, not have it picked for you, (some of the societal constructs have been permitted by their unwitting members to do just that, however) and then raise viable babies. Hard to do that, really, when you've never even met the woman and given her the combination to your Swiss bank account. As far as I'm concerned, that's suicidal stupidity. Not permitting such members of society to LEARN from their mistakes is criminal. And cushioning their fall by forcing them to partake of some perceived "safety net" is merely encouraging stupid behavior.

Nobody pets their dog for biting their son or daughter, so why are we "petting" these fools (not just the chatbot biters, but fools in general) by providing them with safety nets that others pay for? These people should be allowed to suffer for their stupidity and learn from it. If it burns, and you feel it, you don't stick your hand in the oven again... simple as that.

Correction. (1, Troll)

DaedalusHKX (660194) | more than 6 years ago | (#21631495)

Let me correct myself.

I don't believe in FORCED eugenics or sly eugenics (poisoning food or water with metals industry byproducts and marketing it as "dental caries protection") but I do believe in NATURAL eugenics.

Such as:

If you're stupid enough to build your home in a flood plain, NOBODY (save your insurance company if your resources afford it) should HAVE TO (key words) step in and save your ass or reimburse you for building on a flood plain. The same should be done with any other stupid activity. If you prove you're too stupid to survive, you should be treated as such and left to your devices.

The gene pool should not be scrubbed forcefully, that's murder or genocide and it should not be up to a human authority to tell who is going to live and who won't. We should simply let stupidity REMOVE ITSELF from the gene pool, without help and without a safety net that is created by robbing those who actually produced and took precautions (savings, stocking pantry, etc) so they wouldn't NEED a safety net created by robbing others.

Re:And therein lies the fun part. (1)

aflag (941367) | more than 6 years ago | (#21631395)

I think it only proves that people are lonely and want to get laid. A lot of men don't care if the girl they're talking to is so dumb that she's a bot; if she wants them, they'll go for it. I think there's a big cultural factor, but it's also a symptom of loneliness. A guy who's giving out his personal data over the Internet to some girl he never met is obviously out of options. That guy doesn't have anywhere left to look for a girl; any kind of female affection will get him horny. It's kinda sad, actually.

Re:And therein lies the fun part. (1)

stranger_to_himself (1132241) | more than 6 years ago | (#21631465)

My point is proven yet again, that the vast majority of humanity lacks the simple survival skills that would make us worthy of propagating and passing on our genes... evolving and surviving, if you would.

I'm afraid the evidence points to the contrary. As a species we don't seem to have any problems surviving.

Moreover, the extraordinary success of the human species is mostly, or maybe entirely, due to its incredible survival skills. These skills are also very widely adaptable, so we can thrive in whatever environment we find ourselves in. We are not limited by the environment in which we evolved (which certainly didn't include bots on IRC), and as soon as a new threat is identified, we quickly spread the information and all acquire the means to deal with it. How well would the average chimp do on www.ladymonkey.com?

Sorry for feeding the troll but it looked so pathetic I couldn't help it.

Re:And therein lies the fun part. (1)

DaedalusHKX (660194) | more than 6 years ago | (#21631675)

Nah, I wasn't trolling, but I find that the monkey would survive better than a large chunk of the populace of "modern" countries if their economy ground to a halt. 1929 and the forced collapse come to mind. Soviet Russia's engineered famines come to mind. China's depopulation/"great leap forward" comes to mind. Etc. That monkey can eat most things. Vast swathes of humans wouldn't even think to boil their water before they drink it if the water treatment plants went down. How fast would those people die after quaffing a good helping of tainted water, or food, or perhaps the wrong food, or perhaps not finding any more canned food and not knowing where to go to find edibles?

Don't forget that we are more civilized and thus able to mitigate our environs, but that involves those of us who gathered the knowledge and tools to do so. The vast majority expect someone else to keep them alive, you know, in case even that big Homeland Security boogeyman occurs, a "terrorist threat" or whatnot. Say they shut down power, or the economy... for extended periods of time, say the governments of the world are NOT complicit, and go bankrupt or fall apart trying to equalize the damage among those who produced and stocked up and those who expected someone else to do it for them.

Yeah sure, back-country America would survive and even thrive, and the third world would probably survive about as well as they do today, always expecting their rulers to save them and being butchered by them (Africa, anyone?). The problem is not just overpopulation; the problem is breeding unviable offspring and then not raising them as good humans should be raised, with all of the skills to survive AND thrive. Those two require skill as much as they could be taken care of by government redistribution of food and wealth programs. (Eventually Atlas DOES Shrug and there is naught left to confiscate and redistribute. This happened in Russia when it was left with no new territories to conquer and rob dry, and in the Communist Bloc of Europe when their previously stocked-up or exploitable resources ran out... there was nothing left to distribute, so revolutions were permitted to occur so the productive men and women would resurface and grow more wealth for the collectivists to seize and redistribute again. Putin is on his way to accomplishing it again... and the people LOVE him for it.)

WTF? This is not even a Turing test. (5, Informative)

TummyX (84871) | more than 6 years ago | (#21631267)

The Turing test is pretty clearly defined. The tester has to know that they are talking to both a human and a machine, and to pass the test the machine has to convince the tester that it is the human.

Re:WTF? This is not even a Turing test. (2, Informative)

aflag (941367) | more than 6 years ago | (#21631339)

The Turing test is pretty clearly defined. The tester has to know that they are talking to both a human and a machine, and to pass the test the machine has to convince the tester that it is the human.
Hence the parenthetical in the post title.

Re:WTF? This is not even a Turing test. (1)

andre.ramaciotti (1053592) | more than 6 years ago | (#21631363)

Somehow it is. Personally, I wouldn't give personal information to anybody, but there are some people who would, and some of them confused a chatbot with a human. Thus, a machine passed as a person, fooling some guys who wanted to get laid and getting their bank account information.

Re:WTF? This is not even a Turing test. (5, Insightful)

cbart387 (1192883) | more than 6 years ago | (#21631491)

It is not a Turing test. A Turing test is when the judge is trying to figure out if the 'chatbot' is a human or an AI program. This story is about people operating under the assumption that it is a human.

The key part of the Turing test, to me, is that the judge must know they are engaged in the test. The best example of this is Eliza (read about it [wikipedia.org] ). To someone critically examining it, it does not pass the Turing test. To someone expecting a therapist, most of its responses do make sense. The point is that if you're not trying to trip up the chatbot, it's not hard for it to fool someone.
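(For anyone curious, the whole Eliza trick fits in a couple of dozen lines. A minimal sketch in Python; the regex rules and canned fallbacks below are invented for illustration, not taken from the real Eliza script:)

    import random
    import re

    # Minimal Eliza-style responder: a few regex rules that reflect the user's
    # own words back as questions, plus canned fallbacks when nothing matches.
    RULES = [
        (re.compile(r"\bi need (.*)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"\bi am (.*)", re.I),
         ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (re.compile(r"\bbecause (.*)", re.I),
         ["Is that the real reason?"]),
    ]
    FALLBACKS = ["Please tell me more.", "How does that make you feel?", "I see."]

    def respond(statement):
        for pattern, templates in RULES:
            match = pattern.search(statement)
            if match:
                return random.choice(templates).format(match.group(1).rstrip(".!?"))
        return random.choice(FALLBACKS)

    if __name__ == "__main__":
        print(respond("I am a hot russian girl"))  # reflects the statement back as a question
        print(respond("blah blah blah"))           # falls back to a canned prompt

As long as the other party expects a therapist (or a flirt), the fallbacks carry most of the load.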

Re:WTF? This is not even a Turing test. (1)

samkass (174571) | more than 6 years ago | (#21631741)

While you are technically correct, this chatbot has come close. It, or one very much like it, was detailed in an article in Scientific American Mind last month. A Robert Epstein was fooled for over two months into an online relationship with a chatbot. The interesting thing here is that Robert Epstein is actually an expert himself on this technology, having directed the annual Loebner Prize Competition in Artificial Intelligence. So while it wasn't set up according to Turing rules, I'd say fooling for months someone who has repeatedly administered a Turing test should get some bonus points. His having been fooled, while a little embarrassing for him, is actually pretty interesting in that the person being fooled knows enough to analyze what happened.

Re: Turing Test Family (1)

TaoPhoenix (980487) | more than 6 years ago | (#21631995)

As I mentioned elsewhere, this is a variant of the test.

Restate the problem this way:

"Is this really a hot babe looking for action, or is it (something) trying to scam me?"

People trying to be amorous and hook up quickly have reduced the conversation domain. Then when some really weird answers come back, you do start trying to figure out "which agenda" is going on. I see little difference between a bot scammer and a foreign scammer; both would use weird phraseology.

Re: Restricted Turing Tests (4, Informative)

TaoPhoenix (980487) | more than 6 years ago | (#21631953)

It is in the family of Turing tests.

One of the reasons that AI researchers moved away from the pure test is that it becomes more about "gaming the conversation" than a test of real intelligence.

People have no trouble "abusing" the conversant if it is part of a test with a bot. Therefore, the *person* also gets subjected to degenerate forms of conversation until he/she "authenticates as a person".

(Really, someone just needs to put a few million of funding into some defensive conversation routines to make their perceived performance go through the roof. The problem so far has been everyone duplicating everyone else's efforts.)

Although I have done thought studies of the reduced level of "intelligence" in chat rooms to begin with, they don't feature the same "bust the knowledge domain" questions seen in typical Turing contests. In fact, asking those questions earns you *ridicule* in other chat environments.

Therefore, by "disallowing" the artificial questions, if the chatter failed to detect the BotHood of the conversant on the other side side by side with real people, it passes a form of Restricted Turing.

Turing probably was not serious about this test (5, Insightful)

EmbeddedJanitor (597831) | more than 6 years ago | (#21632115)

If you've studied Turing's work much, you'd probably come to the conclusion that he never seriously proposed the Turing Test as a practical way to test machine intelligence.

Turing was a mathematician, which came through in all his thinking, including devising the Turing Test. When faced with questions like "can a machine ever be intelligent?" it is virtually impossible to answer directly because, firstly, how do you define intelligence, and secondly, how do you measure it?

Mathematicians **hate** imprecise questions because they cannot be proven or answered satisfactorily.

When faced with this problem, Turing used the well-loved mathematical method of reductio ad absurdum: if you cannot tell the difference between a human and a machine, then it is absurd to claim the human is intelligent but the machine is not. That neatly sidesteps all the impossible-to-answer questions like the precise definition of intelligence. Typical mathematician wriggle-out move.

Is the Turing Test practical? Well perhaps not. Machine intelligence (whatever that means) can be useful without the machine holding a conversation with you. Annoyingly it has soaked up a lot of effort with people building talkbots instead of getting on with more practical aspects of machine intelligence.

Re:WTF? This is not even a Turing test. (5, Funny)

weg (196564) | more than 6 years ago | (#21632117)

Here's the definition according to the Hitchhiker's Guide [bbc.co.uk] :

A test for artificial intelligence suggested by the mathematician and computer scientist Alan Turing. The gist of it is that a computer can be considered intelligent when it can hold a sustained conversation with a computer scientist without him being able to distinguish that he is talking with a computer rather than a human being.

Some critics suggest this is unreasonably difficult since most human beings are incapable of holding a sustained conversation with a computer scientist.

After a moment's thought they usually add that most computer scientists aren't capable of distinguishing humans from computers anyway.

Re:WTF? This is not even a Turing test. (0)

Anonymous Coward | more than 6 years ago | (#21632159)

Check on wikipedia. There are many accepted variations of the Turing Test. Not accepted by you, perhaps, but you don't have to get vulgar in your comment.

Re:WTF? This is not even a Turing test. (1)

Arcturax (454188) | more than 6 years ago | (#21632185)

I doubt this chatbot is any better than the others I've seen (dirty-talking MSN Santa, anyone?), and none are truly intelligent or sentient.

I think in this case, the men see this thing offer to chat about sex and their brains go out the window, which is why they don't notice at that point. I mean, hell, given all of the bad typing and spelling and inability to correct typos I see out there, even if this thing talks in broken Russian, they probably just think the girl is blonde :)

I would call this "coprocessor AI" (0, Redundant)

iamacat (583406) | more than 6 years ago | (#21631277)

It only fools the small head. Been done for years with no need for conversation - just IMs or blog posts asking people to "watch my steaming XXX hot webcum".

Re:I would call this "coprocessor AI" (0)

Anonymous Coward | more than 6 years ago | (#21632003)

The problem is, when men think they are chatting with a woman, their CPU goes to sleep and the coprocessor with the small purple head takes over.

What is the big deal? (1)

deftones_325 (1159693) | more than 6 years ago | (#21631293)

Unless they get a chatbot to pick up all those idiots who try to get laid by 14-year-old girls. That would be cool; then we wouldn't have to watch that goddam "To Catch a Predator" get replayed on TV all day long.

Re:What is the big deal? (1)

Hal_Porter (817932) | more than 6 years ago | (#21631355)

Here we had a serial killer that killed those guys. He seems to have stopped spontaneously but I doubt any jury would have convicted him if he'd been caught.

Re:What is the big deal? (1)

deftones_325 (1159693) | more than 6 years ago | (#21631545)

That's awesome. I imagine it would cut down on the number of pervs. Why doesn't Russia invent a pervert-finding bot, backed up by a serial-killing botnet posse?

Re:What is the big deal? (1)

WilliamSChips (793741) | more than 6 years ago | (#21631751)

I would have convicted him. People talk about "creeps" and "perverts" but they're a lesser evil than serial killers.

Re:What is the big deal? (1)

KDR_11k (778916) | more than 6 years ago | (#21632237)

That jury should have been kicked out then, possibly convicted for conspiracy to murder. Serial killers are serial killers; the fundamental rule of law is that all punishment must be validated by courts, and if anyone skips the courts, that undermines the basic principle of law. If the jury is sympathetic to a person proven guilty and lets him off for that reason, they are not fit to be a jury.

Linus is right (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#21631299)

I am with Linus on this one. His arguments simply make sense.

I wouldn't fall for that (0, Redundant)

bogaboga (793279) | more than 6 years ago | (#21631343)

I surely would not fall for such! I guess my Russian friends were under the "spell" of vodka. I am given to understand that Russians are to vodka as Americans are to junk food.

The ever-rising bar on true AI (4, Insightful)

G4from128k (686170) | more than 6 years ago | (#21631359)

The debate about this chatbot appears to be part of the ever-rising bar placed against AI. This chatbot has won the Turing test for a segment -- perhaps a gullible/dumb segment -- of the human population. Yet still people argue that it does not really count. This is analogous to the "computers can never beat people at chess" meme formed at the dawn of the computing age. And when the first programs did beat some people, the meme changed to "computers can never beat experts at chess." And when computers got better, the meme changed to "computers can never beat the top-ranked humans at chess." That barrier, too, has been breached.

Now we have a chatbot that can fool some people some of the time, so the bar has been raised on "true AI" to say that computers can't fool expert suspicious Turing test judges. This too will fall. Human intelligence is growing very slowly (they actually reset IQ tests every decade or so), but computer intelligence is growing much, much faster.

Re:The ever-rising bar on true AI (1)

ucblockhead (63650) | more than 6 years ago | (#21631501)

Rising IQ test scores do not mean that general human intelligence is rising. There are many alternative explanations.

The real question is whether the Turing test is an actual valid test of AI. If a simply programmed chatbot on a relatively average computer can pass it, then that's pretty good evidence that the Turing test isn't testing for actual "intelligence".

Re:The ever-rising bar on true AI (1)

cbart387 (1192883) | more than 6 years ago | (#21631571)

The real question is whether the Turing test is an actual valid test of AI. If a simply programmed chatbot on a relatively average computer can pass it, then that's pretty good evidence that the Turing test isn't testing for actual "intelligence".
In my AI class at college we discussed this. I would tend to say no, based on the Chinese room argument [wikipedia.org] . In a nutshell it's that computer software is only about syntactical knowledge (manipulating symbols), there's no semantic knowledge. The software doesn't understand what it is doing.

Re:The ever-rising bar on true AI (1)

gardyloo (512791) | more than 6 years ago | (#21631707)

In a nutshell it's that computer software is only about syntactical knowledge (manipulating symbols), there's no semantic knowledge. The software doesn't understand what it is doing.
At which point of program complexity would you concede this may have changed?

        (Playing the advocatus diaboli here)

Re:The ever-rising bar on true AI (1)

cbart387 (1192883) | more than 6 years ago | (#21632147)

Complexity, I don't think, changes anything. Do you think it does, or are you purely playing the devil's advocate? ;)

Re:The ever-rising bar on true AI (1)

gardyloo (512791) | more than 6 years ago | (#21632169)

Complexity, I don't think, changes anything. Do you think it does, or are you purely playing the devil's advocate? ;)
Not purely. If it's not complexity, then what essence separates mind from algorithm?

Re:The ever-rising bar on true AI (0)

Anonymous Coward | more than 6 years ago | (#21631715)

I never understood the Chinese room argument. The Turing Test has always been about determining whether a computer program represents an intelligence, not whether it "knows" English, or Chinese, or any particular language for that matter. In order to converse in Chinese, even using a Chinese dictionary, one must understand SOME language, even if it is not Chinese. If I can keep up a conversation in Chinese with a dictionary, I may not know Chinese, but I am intelligent. If a computer program is doing something comparable, why should it not also be considered an intelligence?

I have never been convinced that this "Chinese room" is not just a large straw-man.

Re:The ever-rising bar on true AI (5, Insightful)

Haeleth (414428) | more than 6 years ago | (#21631535)

Why is this post being modded up? It's a lovely example of the straw-man fallacy, but that hardly deserves an Insightful moderation.

This chatbot has won the Turing test for a segment -- perhaps a gullible/dumb segment -- of the human population.
No it hasn't. It has convinced a gullible/dumb and unsuspecting segment of the human population that it is a human, which is not unimpressive in its own right, but that isn't the same as passing the Turing test, which requires that the examiner be conversing with a human and a computer at the same time, to be fully conscious of this fact, and to be deliberately trying to determine which is which.

Now we have a chatbot that can fool some people some of the time, so the bar has been raised on "true AI" to say that computers can't fool expert suspicious Turing test judges. This too will fall.
Um, no. Nobody with a clue has ever claimed that a chatbot that is capable of convincing any human being whatsoever that it is a human represents true AI. The bar has always been set at fooling Turing-test judges, and the Turing test has been fixed in its current form for decades.

Indeed, it's easy to show that fooling some people some of the time doesn't require anything even approaching AI. Consider a bot that simply repeats a set of ten sentences in a fixed order: if those sentences were chosen well enough, then some people might easily believe that they were having a real conversation. But I really don't think you'd argue that a bot that simply repeated a set of ten sentences in a fixed order displays any sort of intelligence, no matter how many unsuspecting people happen, by random chance, to feed it lines that cause its responses to look relevant.

Re:The ever-rising bar on true AI (1)

rdebath (884132) | more than 6 years ago | (#21631635)

The 'expert' or top expert has always been part of the true test, because in the complete test it's assumed that an intelligent human tester is the only way to get the right test to show the difference.

This doesn't stop there being lesser tests where the robot fools some people but doesn't fool a "void comp" test (Bladerunner); such a robot would be useful, say in customer service ... "Sorry sir you cannot pick me up as I'm firmly bolted to the floor"

It's also not guaranteed to be a perfect test; for example, Gene Roddenberry had the fictional Andromeda pass the Turing test with ease, but she couldn't pass the slipstream test. Is there such a test in real life? We don't know ... yet.

Re:The ever-rising bar on true AI (0)

Anonymous Coward | more than 6 years ago | (#21632265)

a "void comp" test (Bladerunner)

You need to hand in your geek card now.

Re:The ever-rising bar on true AI (1)

houghi (78078) | more than 6 years ago | (#21631687)

Humans will just change the rules. This has been done since forever. I would not call that raising the bar, I would call that cheating.

Re:The ever-rising bar on true AI (0)

Anonymous Coward | more than 6 years ago | (#21631917)

While I agree with the sentiment, you picked the wrong examples. The computers that can beat the higher-level chess players aren't even using AI. They're computing all positions with a few positional heuristics. They're no more "thinking" or "intelligent" than a solar calculator.

AI will be "here" when it can grok human emotion. When it can grok human emotion, then it's sufficiently intelligent.

Re:The ever-rising bar on true AI (1)

maxwell demon (590494) | more than 6 years ago | (#21632069)

For a machine, to grok human emotion would probably also come down to using a set of heuristic rules. Well, when communicating with another human, you also can only use heuristic rules to detect his emotions! The only difference is that you know a basic set of heuristic rules from your personal experience (i.e. you know how you would react having certain emotions, and therefore for a first approximation can use that as rules). Also over your lifetime you already accumulated a lot of extra heuristics about the behaviour of other people.

Re:The ever-rising bar on true AI (1)

Cheesey (70139) | more than 6 years ago | (#21631983)

I'm not convinced computer "intelligence" is really changing at all. Computers are better than they used to be, but we haven't got a new way of programming them, so AI continues to be a hard problem. We don't seem to have got very far beyond mimicking small subsets of human abilities. The chess playing problem was solved by using a database of grandmaster openings and endings, rather than by programming the computer to think. The chatbot problem is still being solved in the Eliza bot way by mechanically rearranging sentences.

Although this can still be impressive, does it really have anything to do with actual intelligence? If we had really solved the AI problem, we'd be able to teach the chatbot to play chess, and persuade the chess bot to chat up horny nerds...

Re:The ever-rising bar on true AI (1)

HeroreV (869368) | more than 6 years ago | (#21632137)

Yet still people argue that it does not really count.
It doesn't.

This is analogous to the "computers can never beat people at chess" meme
No it isn't. Saying "that can never happen" is completely different from saying "it hasn't happened yet".

the bar has been raised on "true AI" to say that computers can't fool expert suspicious Turing test judges
For the Turing test, the bar has always been that high. It didn't get moved up there recently. (BTW, the term you are looking for is "strong AI [wikipedia.org] ".)

This too will fall.
Probably, assuming nothing crazy happens like a vacuum metastability event. But it's immature to claim we are anywhere close.

Re:The ever-rising bar on true AI (1)

Kjella (173770) | more than 6 years ago | (#21632151)

Now we have a chatbot that can fool some people some of the time, so the bar has been raised on "true AI" to say that computers can't fool expert suspicious Turing test judges. This too will fall. Human intelligence is growing very slowly (they actually reset IQ tests every decade or so), but computer intelligence is growing much, much faster.
While it's true that both are rising, I don't think the comparison with chess is valid. If you place the same chess engine on a 3GHz machine instead of a 300MHz machine, we know it will be better, and you can quantify how much, too. A chatbot, on the other hand, may not be; unless you can find more meaningful work for it to do, it can't just check a "conversation tree" to greater depth. Three of the things I've found most lacking are implied states, states not specified, and identifying nonsensical statements.

As an example of the first, a friend of mine pointed me to a very impressive chatbot that pilfers Wikipedia for information (in addition to a bunch of other things). It could give natural answers on a lot of subjects, and it tried to keep a conversation state. But when I made a little consistency check, it contradicted itself within three sentences on what music it liked, because it failed to understand that I asked a question it had already implicitly answered. No human would possibly have missed it. It's not really a failure of logic; it's a failure to understand how real-world objects are related to each other. It's basically knowledge, but there's a lot of it a computer just doesn't have, even if it's trying to fake it using WP.

The second is also about implied states, but it goes to making assumptions not given, which a human would react to. Basically it's of the type "When did you stop beating your wife?" Human: "WTF? I never said that I beat my wife in the first place. Hell, I don't even have a wife!". Bots tend to deal with this a bit differently, but mostly they try to gloss over it, like "Well, I must have implicitly said that, so let's play along." Also extremely obvious, and not something a human would miss.

The last is the nonsensical statements. Often a chatbot will go into a basic "I didn't understand that" mode, trying to dodge the bullet with some general response. It works for trying to sustain a conversation, but it'll fall flat on its face against a Turing tester. It's like "How do you like strawberries with manure?" "Interesting combination, I haven't tried that. Do you like it?" Because, well, I doubt the WP article on manure says anything about the taste; on the other hand, it probably doesn't describe the flavor of many edible things either. Technically, there's nothing impossible about mixing them, but there's implied knowledge that these two don't mix as food.

I don't think any of these really deal with "intelligence" as in "logic puzzles". They deal with the ability to understand reality (or at least fake it) with a high degree of accuracy. It can't just be solved by throwing more processor power at it; you really need more advanced applications and more input data to make answers that make sense in the real world.

Eliza says- (4, Funny)

fatboy (6851) | more than 6 years ago | (#21631361)

So tell me about Turing Test (Sort of).

Re:Eliza says- (0)

Anonymous Coward | more than 6 years ago | (#21631749)

Why do you want me to tell you about Turing Test?

Re:Eliza says- (2, Interesting)

maxwell demon (590494) | more than 6 years ago | (#21631853)

That reminds me of a joke I've read quite some time ago (well, it was actually with images, but the text basically covers the funny part). It's a conversation:

"Nice weather."
"Yes, nice weather."
"It might rain this afternoon."
"Rain? You think so?"
"You're elizing again!"

Turing test extra credit (3, Funny)

Anonymous Coward | more than 6 years ago | (#21631367)

Convince the examiner he's a computer.

http://xkcd.com/329/ [xkcd.com]

Obligatory Futurama reference (5, Funny)

neonux (1000992) | more than 6 years ago | (#21631379)

Russian guy: You're no easily-laid woman, you're a Fembot!
Fembot: It's true. I disguised myself as an easily-laid woman so I could rule the Russians.
Russian guy: But why?
Fembot: Why? Why? I came here from a faraway planet. A planet ruled by a chauvinistic Manputer that was really a Manbot. Have you any idea how it feels to be a Fembot living in a Manbot's Manputer's world?

This test is very easy (4, Interesting)

file-exists-p (681756) | more than 6 years ago | (#21631389)

A decade ago I wrote a perl script for sirc that had 40 sentences and would just reply with one picked at random (uniformly) every time it got a private message. Hence it took into account neither the message it had just received (a la Eliza) nor anything it had said before. It was not even waiting before replying, and hence would type the response in a tenth of a second.

It happened several times that people would talk with it for more than an hour. If I remember correctly the record was 1h45min ...

For the Turing test, the tester has a strong prior that the testee may be a computer. This is not the case here, and the prior for this to happen is so low that it's impossible for a layman to come up with that explanation. What happens is that people think inconsistencies in the speech of their interlocutor are due to technical problems (sending a message to the wrong person, lag, complexity of the program the person uses, etc.)
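(For reference, here's a rough Python sketch of the same idea; the original was a Perl script for sirc, and the canned lines, server name and nick below are placeholders, not the originals:)

    import random
    import socket

    # A bot that answers every private message with one of a few canned lines,
    # picked uniformly at random, ignoring the message content entirely.
    CANNED = [
        "haha yeah totally",
        "hang on, someone's at the door",
        "what do you mean?",
        "lol",
    ]
    SERVER, PORT, NICK = "irc.example.net", 6667, "notabot"

    def main():
        sock = socket.create_connection((SERVER, PORT))
        sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())
        buf = b""
        while True:
            buf += sock.recv(4096)
            while b"\r\n" in buf:
                line, buf = buf.split(b"\r\n", 1)
                text = line.decode(errors="replace")
                if text.startswith("PING"):              # keep the connection alive
                    sock.sendall(text.replace("PING", "PONG").encode() + b"\r\n")
                elif f"PRIVMSG {NICK}" in text:          # a private message arrived
                    sender = text[1:].split("!", 1)[0]
                    reply = random.choice(CANNED)        # content is never inspected
                    sock.sendall(f"PRIVMSG {sender} :{reply}\r\n".encode())

    if __name__ == "__main__":
        main()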

Re:This test is very easy (0)

Anonymous Coward | more than 6 years ago | (#21632029)

For the Turing test, the tester has a strong prior that the testee may be a computer.

But the alternative prior here is an easily laid woman.

A computer capable of beating a Turing test is at least remotely plausible...

About Turing Test... (1)

dominious (1077089) | more than 6 years ago | (#21631401)

My uncle, who has no clue about computer games, was usually playing backgammon with other players online. Well, sometimes when the other player leaves, the game continues with the machine as your opponent. My uncle never noticed a thing, and he got fooled by the computer several times. I remember once when he was quite angry with the "other player" and, when I went over to see what was going on, I embarrassingly had to tell him, "this is the computer you are playing with, it says so in the left corner!"

Now isn't that the point of a Turing Test? And how intelligent is a machine if it passes a Turing Test? I mean, intelligence is not only that! It can't be... I think I (or all of us here) misunderstood the Turing Test somehow.

Re:About Turing Test... (1)

dvice_null (981029) | more than 6 years ago | (#21631589)

> Now isn't that the point of a Turing Test?

No, you can't pass the Turing test by fooling stupid people with a computer program. That problem is trivially solved. The goal of the Turing test is to fool the judge, who knows that there is a 50% chance that you are a computer program. That is much harder.

Extra bonus is given if, after the test, the judge no longer knows whether he/she is a human or not.

how women react to this? (1)

holywarrior21c (933929) | more than 6 years ago | (#21631423)

Is this bot really a "woman", or is it a man's idea of a woman, just convincing enough to coax men into the conversation it was designed for? Assuming it is programmed mostly by men, doesn't that mean these men are reacting to man-created thoughts? Can that count as heterosexual? Wouldn't it be like male actors playing women in traditional Japanese theater? And wouldn't it be the bot's logical, mechanically responsive behavior that the man actually gets interested in?

Seems like (legal) vaporware (4, Informative)

ornil (33732) | more than 6 years ago | (#21631461)

The company that makes this has a website (cyberlover.ru [cyberlover.ru] - in Russian). They claim the software is meant to expedite picking up women by getting their phone numbers, pictures, etc. In any case, it's not available now, and the site only has screenshots (the same one as in the linked article). So who knows whether anyone real has used it at all, or whether it's all fake, or a scam to sell this software.

They're chatbots?! (4, Funny)

El Yanqui (1111145) | more than 6 years ago | (#21631487)

I always assumed the easily-laid women in chat rooms were just lonely, fat forty-year-old programmers indulging their "curious" side. Fembots are a step up.

Emacs Girl? (2, Informative)

Locklin (1074657) | more than 6 years ago | (#21631723)

Wouldn't Emacs be great if it had an Emacs Girlfriend AND an Emacs Psychologist? On the other hand, she would probably want to go to couples therapy :-(

Hello?

How do you do? What brings you to see me?

I am a hot russian girl

Why do you say you are a hot russian girl?

Are you looking for a girlfriend?

What do you think?

I'm a double D and love to have fun

fun what ?
What makes you believe and love to have what?

Aren't you a curious one...

Earlier you said and love to have what?

Can I have you're phone number??

Maybe your plans have something to do with this.

well duh! (2, Funny)

Anonymous Coward | more than 6 years ago | (#21631543)

It was in Russian! You could put up some totally random Cyrillic characters and I couldn't tell it from a real human either!

Humans failing Turing Test. (1)

Organic Brain Damage (863655) | more than 6 years ago | (#21631547)

We all fixate on the computer passing, but think of the billions of humans who could not pass a Turing test. Are they really human, or just a collection of retroviruses walking about pretending to be human?

Turing test involves comparison (1)

HeroreV (869368) | more than 6 years ago | (#21632235)

In a Turing test, the judge knows he's communicating with a person and an AI, and he's trying to figure out which is which. Currently, even an idiot could easily prove he is more human than the smartest of AI.

If the judge was communicating with just one, there probably would be many cases where real people wouldn't pass. But when you're having a side-by-side comparison between humans and AI, it's currently very easy to tell which is which.

A simple answering machine can do the trick (0)

Anonymous Coward | more than 6 years ago | (#21631557)

This sort of sad trap is not an AI feat. Rather, it exploits weaknesses in vulnerable personality types.

Back in the days of tape-based answering machines, my sister, in a typical access of teenage-girl bitchiness, recorded a 5-minute-long message entirely made of "yes", "hmm", and other evasive comments. The message was targeted at a particularly talkative friend of hers, as a sort of lesson. Predictably, the friend (who became an enemy after that) engaged in a 5-minute-long monologue before being greeted with a humiliating "I'm not here right now, please leave a message".

The true Turing test assumes that the human testing the machine is not subject to a particular type of self-centered neurosis. My personal belief is that no machine will ever pass the Turing test, for the simple reason that a machine with the ability to pass this test would, as a side effect, have so much greater intellectual capabilities that it would not have the slightest interest in actually carrying on a true conversation with a human being.

Open for discussion is the notion of whether machines that could pass the Turing test but choose not to reveal it already exist on the net... That might be the real reason for the internet boom: machines wanted to talk with each other easily...

I 4 1... (1)

Dr.Altaica (200819) | more than 6 years ago | (#21631647)

I for one welcome our slutbot overlord masters. ...
Of course honey you can have my credit card number....

Sez Who (1)

JackSpratts (660957) | more than 6 years ago | (#21631653)

passes the turing test? wow. i don't know what's more newsworthy, that or the fact that the last word on "easily laid women" apparently now belongs to geeks.



- js.

Possible Good Thing?? (1)

wideBlueSkies (618979) | more than 6 years ago | (#21631689)

Well, the way I see it, this program is helping to drain the shallower portions of the gene pool.

The scammed (not-so-bright) 'victim' loses identity, credit, etc... and becomes far less desirable as a mate, therefore having less of a chance to reproduce.

Microsoft Santa bot turns naughty online (0)

Anonymous Coward | more than 6 years ago | (#21631795)

Easily laid Russian woman chatbot, meet Microsoft naughty Santa chatbot [computerworld.com] .

Troll the phishers? (1)

chmod a+x mojo (965286) | more than 6 years ago | (#21631843)

Hmmmm, I can't wait for one of these to start talking to the Slashtrolls... not only would the chat logs probably be hilarious, but the poor phisher's picture gallery would be full of goatse guy.

What about ... (1)

PPH (736903) | more than 6 years ago | (#21631923)

... humans that fail the Turing test?


I've seen quite a few posts/articles/etc. on various systems on assorted subjects where the originator of the thread submits some standard dogma about Jesus Christ/Muhammad/whomever and either never responds to subsequent queries or responds with some obscene vitriol about how questioning faith is the ultimate blasphemy.


Heck, I could knock one of these 'bots together in a few hours with Perl.

Things to look for. (0)

Anonymous Coward | more than 6 years ago | (#21631945)

Mostly the thing to look for is failure to reciprocate communication: failure to develop an interactive verification loop that shows the girl is real, and not only real but not someone just looking for money. It's not at all out of the question to use an AI program assisted by real persons monitoring and dealing with little glitches personally. Also, falling in love too quickly and asking for money are good indications of a scam. For Americans or non-Russian-speaking males, be careful of the language-barrier excuse.

This is not to say there are no genuine Russian girls looking for genuine relationships, but the scams make it difficult to tell the difference. Even we Americans have online dating and sex-buddy opportunities that are real.

Sites like Bride.ru [bride.ru] may very well have girls genuinely interested in a relationship, but even they warn you of other possible motives. There are other sites that don't cost you as much just for communication access. I suspect the Russian mafia has a hand in some of these scams, as organized distribution of such programs only makes sense from a business perspective.

Posting this anon., as I can speak with some experience. It seems an American-based dating service either had one Russian email-gathering "girl" inserted as an email collection point, which was then fed into the AI network, from which numerous girls (AI instances with custom stories to tell) would then email you at the dating-service address. By responding and eventually recognizing a common pattern among over half a dozen girls, I concluded it was an AI program. Or, in writing to an American girl, I forgot to uncheck the option of being promoted by their affiliates. I also realized real people were maintaining the operations of the systems.

What makes this scam work, yet not be Turing-compatible, is the emotional desire of the lonely male to overlook the obvious, given the possibility of getting it on with an attractive female or beating his loneliness. For some, the illusion is better than nothing. But to pass the Turing test, emotions are suppressed. Needless to say, this AI program failed the Turing test.

We have plenty of artificially intelligent real people; it should be easy to fool them. Amazing that we try so hard to make artificially intelligent machines. But then I suppose that would be a step toward satisfying the real people that are artificial. We certainly can create the sexy bodies [realdoll.com] to put the AI into.

There are possible legit benefits to be had out of all of this. Medical, physical and psychological relationship issues could be dealt with productively with such technology. Virtual-reality applications are already being used in the treatment of quite a number of physical and psychological matters. So although the current usage of pseudo-relationship programs is to scam people out of money, this is certainly a stage that needs to be passed through, and there is likely motive enough to improve it to a more convincing state; probably a lot better than any legitimately funded effort, as the feedback is certainly greater.

Tests (0, Flamebait)

Wowsers (1151731) | more than 6 years ago | (#21631959)

Never mind the Turing test, does it pass the G.W. Bush intelligent-conversation test?

the problem is the user (2, Insightful)

wikinerd (809585) | more than 6 years ago | (#21632015)

successfully emulates an easily-laid woman.

That's not a good test for AI. Research shows that men go crazy while talking with beautiful women. So, sexuality temporarily shuts down their intelligence. You can't test for AI while employing sexuality.

Turing (1)

ed.markovich (1118143) | more than 6 years ago | (#21632025)

As many have pointed out, this is not really a Turing test because the judge is not aware that they might be talking to a machine. I figured out that this was an important component a few years ago ;)

In 1999 or 2000, I took the Megahal [wikipedia.org] codebase, linked it up to one of the Linux console AIM clients, trained it on 2+ years of my own AIM logs that I had at the time, and let it sign onto my screen name when I was not in the dorm.

If anyone IMd me at those times, they always thought they were talking to some really distracted and likely intoxicated version of me. After all, the bot 'sounded' like me since it was trained on my own text - it emulated my diction and usage and topics. Yet at the same time it was spewing complete nonsense.

Of course, once someone was told that they might end up chatting with the machine, that person could very easily tell the difference between me and the EdBot.
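(For reference, the core of the MegaHAL approach is a Markov model over the training text. A stripped-down word-bigram version in Python might look like the sketch below; the real MegaHAL uses higher-order models in both directions and scores candidate replies, and "aim_logs.txt" is just a stand-in for whatever logs you train on:)

    import random
    from collections import defaultdict

    # Word-bigram Markov chain: record which words follow which, then walk the
    # chain to babble text that "sounds" like the training material.
    def train(text):
        chain = defaultdict(list)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            chain[prev].append(nxt)
        return chain

    def babble(chain, length=20):
        word = random.choice(list(chain.keys()))
        out = [word]
        for _ in range(length - 1):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    if __name__ == "__main__":
        with open("aim_logs.txt", encoding="utf-8") as f:
            chain = train(f.read())
        print(babble(chain))

Train it on your own chat logs and it picks up your diction and pet topics, which is exactly why it reads like a distracted, intoxicated version of you.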

Chris Hansen (1)

SCHecklerX (229973) | more than 6 years ago | (#21632037)

Now Chris Hansen doesn't need to pay somebody to pretend to be a 13-year-old girl in search of sex anymore.

Just what we wanted (1)

dotancohen (1015143) | more than 6 years ago | (#21632119)

From http://what-is-what.com/what_is/computer.html [what-is-what.com] :

A computer is a device that accepts user input, processes it, and returns output.
Basically, that's what we want from our girlfriends/wives anyway. Well, we might not want the output before she graduates from girlfriend to wife, but you get the idea.

Re:Just what we wanted (1)

maxwell demon (590494) | more than 6 years ago | (#21632173)

From http://what-is-what.com/what_is/computer.html [what-is-what.com] :

A computer is a device that accepts user input, processes it, and returns output.
My digestive system seems to be a computer as well. After all, it accepts my input (food), processes it and returns output (which I then flush down the toilet).

Jenny18 (3, Informative)

tiny69 (34486) | more than 6 years ago | (#21632125)

This was done years ago. Logs of victims are included.

http://virt.vgmix.com/jenny18/ [vgmix.com]

May be old news, perhaps even fake (1)

Animats (122034) | more than 6 years ago | (#21632259)

The CyberLover [cyberlover.ru] program site doesn't do much. None of the links work, including the one for sample chat logs. The site says "Copyright 2005-2006", so this has been up for a while. The site was trying to recruit "affiliates" for a program that sells for only $4.95. This looks like an idea that didn't work.
