
Variations On the Classic Turing Test

kdawson posted more than 5 years ago | from the climbing-out-of-the-uncanny-valley dept.

Robotics 82

holy_calamity writes "New Scientist reports on the different flavors of Turing Test being used by AI researchers to judge the human-ness of their creations. While some strive only to meet the 'appearance Turing Test' and build or animate characters that look human, others are investigating how robots can be made to elicit the same brain activity in a person as interacting with a human would."


82 comments


Dammit, I took one and failed! (3, Funny)

Anonymous Coward | more than 5 years ago | (#26573827)

Arrgg... I just took the test and failed. Does this mean that I'm ready to run Linux, and when I die I'll be running FreeBSD?

Re:Dammit, I took one and failed! (3, Funny)

RDW (41497) | more than 5 years ago | (#26575919)

Eliza: Can you elaborate on that?

Re:Dammit, I took one and failed! (1)

HTH NE1 (675604) | more than 5 years ago | (#26577487)

There's a point to be made there. If you want the machines to appear more human, perhaps you should ask the machines whether what they're communicating with is a human or not. That's surely a learning experience for the machine that it could incorporate into itself to appear more human.

So when is the android pussy coming? (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#26573831)

Seriously.

Brought to you by Trollcom

Re:So when is the android pussy coming? (1, Funny)

Anonymous Coward | more than 5 years ago | (#26582555)

It's out there, you're just not looking. Or your standards are too high (which seems to be a bit of a contradiction, given what the OP implies).

(posting AC for obvious reasons, LOL)

Syntax (5, Funny)

mfh (56) | more than 5 years ago | (#26573863)

FORMALISTS' MOTTO: Take care of the syntax and the semantics will take care of itself.

Also, if you are animating a dude, he is thinking about sex. If you are animating anyone else, they are thinking about shopping.

Technically AI is not hard, you just need to lower your mind-mechanics bar and focus on trailer parks, and folk psychology.

I always wondered... (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#26573869)

I always wondered what happened to the GNAA. But after RTFA, it all became clear: apparently, it was some kind of PhD student's research? The guy graduated and now, alas, GNAA is no more.

Little did I know GNAA was nothing more than a well designed perl script. Had me fooled!

Sweet (2, Interesting)

Drumforyourlife (1421647) | more than 5 years ago | (#26573887)

It really is kind of creepy how close they've come to actual life-like robotics... but my question is, how life-like should a robot really be? I mean, are we going to be replacing friends with these guys, or are they meant to serve us? Don't get me wrong, I have a great respect for these scientists, I just wonder how these sorts of real robots will fare on the market.

Re:Sweet (3, Funny)

vertinox (846076) | more than 5 years ago | (#26574467)

Don't get me wrong, I have a great respect for these scientists, I just wonder how these sorts of real robots will fare on the market.

I think the idea is that robots will be used to do things that humans aren't willing to put up with.

Which means if you can't find someone to put up with you, then maybe a robot is for you.

Re:Sweet (1)

cffrost (885375) | more than 5 years ago | (#26579745)

I think the idea is that robots will be used to do things that humans aren't willing to put up with.

So you're saying, for example, you could make Bossbot 0xFF lick your balls while you fuck Fembot 0x01 in the ass, then make Fembot 0x02 suck your cock? Vertinox, get your mind out of the gutter!

Re:Augment Friends! (1)

TaoPhoenix (980487) | more than 5 years ago | (#26574603)

Sure.

Friends past college aren't up for going to Taco Bell at 12:37 at night anymore. Or for carrying on a really tough conversation about 4 editions of Dante's Inferno.

Re:Sweet (1)

w0mprat (1317953) | more than 5 years ago | (#26583277)

Replacement friends? Sign me up!

While we're at it, I'd like a robotic doppelganger of myself to attend boring management meetings while I have a pint at the pub.

Slashdot please help me! (-1, Troll)

Anonymous Coward | more than 5 years ago | (#26573919)

Hello, I am wearing tight pants and would like to remove them in a suggestive manner, using only noble gases and UNIX. Any suggestions?

Re:Slashdot please help me! (3, Funny)

morgan_greywolf (835522) | more than 5 years ago | (#26573989)

I dunno, but I get the feeling that Solaris 10 must somehow be involved.

Re:Removing (1)

TaoPhoenix (980487) | more than 5 years ago | (#26574631)

He's joking, but I'm not.

It's been discovered that the truncated knowledge domain of cybering has led it to be exploited for phishing. It's damn tough to tell the difference between a phish bot and someone with terrible typing skills and worse computer knowledge.

Re:Removing (0, Offtopic)

Aladrin (926209) | more than 5 years ago | (#26574881)

My solution for that is simple: Ignore them both.

Seriously, if you can't be bothered to type English correctly, I can't be bothered to read what you are saying. In addition, I've found that most of those posts are people asking for help, not providing information, so I really lose nothing by ignoring them.

It's no different than meatspace, really. If someone came up to me, shoved their phone in my face and said 'Fix.' I'd ignore them, too. Even my boss doesn't do that and he holds my paycheck in his hands.

I have a small amount of sympathy for people who are just learning English, but even they don't get to forget that sentences and proper nouns get capital letters. And every sentence has punctuation.

Re:Slashdot please help me! (0)

Anonymous Coward | more than 5 years ago | (#26576319)

Hello, I am wearing tight pants and would like to remove them in a suggestive manner, using only noble gases and UNIX. Any suggestions?

iWork 09 and a large consumption of cabbages?

Abstracting cognitive response is far off (4, Interesting)

postbigbang (761081) | more than 5 years ago | (#26573957)

The Turing test is for apparent 'human' intelligence, where robotics adds communications via 'expressiveness'. These are two different vectors: rote intelligence and capacity to communicate (via body language, and the rest of linguistics/heuristics).

The article doesn't abstract the basic cognitive capacity because it entangles it with the communications medium. The Turing Test ought to be done in a confessional, where you don't get to see the device taking the test. It would also provide a feedback loop on the test.

Re:Abstracting cognitive response is far off (3, Insightful)

vertinox (846076) | more than 5 years ago | (#26574367)

The Turing test is for apparent 'human' intelligence, where robotics adds communications via 'expressiveness'. These are two different vectors: rote intelligence and capacity to communicate (via body language, and the rest of linguistics/heuristics).

I don't think body language is the hard part, or that important, considering the majority of human communication these days involves just text or voice, without seeing the other person. (That and certain people can't interpret body language anyway.)

The key problem with AI is:

Context
Context
Context

The number one failure of most Turing programs is that they only respond to the sentence you just said, without any context from the conversation beforehand. A really good AI would be able to keep on topic and understand what has been discussed previously, so that it can expand on the topic rather than just responding to the current line.

There are several ways to achieve this, but right now I don't think there is any program out there, at least that I know of, that does this right. The easiest way to tell whether you are talking to a chat bot is to refer to something earlier in the conversation and see if it responds appropriately.
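The "refer back to something earlier" test the parent describes can be sketched in a few lines. Everything here (the class name, the crude capitalized-word "topic" heuristic, the history cap) is made up for illustration, not any real chatbot's design:

```python
class ContextBot:
    """Toy chatbot that keeps a rolling list of topics mentioned so far,
    so a reply can refer back to earlier turns instead of reacting only
    to the current line."""

    def __init__(self, max_history=10):
        self.history = []            # topics seen so far, oldest first
        self.max_history = max_history

    def respond(self, utterance):
        # Naive topic extraction: any capitalized word counts as a topic.
        topics = [w.strip(".,!?") for w in utterance.split() if w[:1].isupper()]
        reply_topic = None
        # Prefer a topic raised on an *earlier* turn that the current
        # line mentions again -- the "refer back" behavior.
        for t in topics:
            if t in self.history:
                reply_topic = t
                break
        self.history.extend(t for t in topics if t not in self.history)
        self.history = self.history[-self.max_history:]
        if reply_topic:
            return f"Earlier you mentioned {reply_topic}; tell me more about that."
        return "Go on."

bot = ContextBot()
bot.respond("I saw Paris last year.")
print(bot.respond("Paris was lovely."))
# -> Earlier you mentioned Paris; tell me more about that.
```

A bot with no history would answer the second line the same way whether or not Paris had ever come up, which is exactly the giveaway the parent describes.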

Re:Abstracting cognitive response is far off (1)

postbigbang (761081) | more than 5 years ago | (#26574561)

The article makes the mistake, however, of adding heuristics that really don't have anything to do with the tenets of the Turing Test. Robotics isn't really allied with it; only the cognitive results are. Robotics is one discipline, and cognitive response is another. That's my problem with it.

Re:Abstracting cognitive response is far off (0)

Anonymous Coward | more than 5 years ago | (#26574857)

It's not just context, but understanding of context (i.e., memory of prior encounters with a topic). It's hard to get that whole cognitive process going when we don't fully understand it ourselves.

Re:Abstracting cognitive response is far off (3, Funny)

jonaskoelker (922170) | more than 5 years ago | (#26575759)

The easiest way to tell if you are talking to a chat bot

Reaction time is a factor in this, so please pay attention.

You're in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it's crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't, not without your help. But you're not helping. Why is that?

Re:Abstracting cognitive response is far off (1)

LandDolphin (1202876) | more than 5 years ago | (#26575995)

Great post.

Re:Abstracting cognitive response is far off (0)

Anonymous Coward | more than 5 years ago | (#26585377)

I'm impressed. How many questions does it usually take to spot them?

Re:Abstracting cognitive response is far off (0)

Anonymous Coward | more than 5 years ago | (#26595121)

But you're not helping. Why is that?

Because you're a fucking asshole? (Note that inverting the vertical orientation of the turtle in the first place makes you an asshole. Leaving the turtle inverted makes you a fucking asshole.)

Re:Abstracting cognitive response is far off (3, Interesting)

moore.dustin (942289) | more than 5 years ago | (#26575849)

You are only partially correct. The focus on context is misplaced, though you are on the right path. Simply remembering words or topics that have been mentioned earlier in the same discussion does not say anything about intelligence.

The main problem with AI is learning.

Nearly all work in the field now has a misplaced or completely wrong approach to achieving real AI. In order to understand how to make truly intelligent machines, we must first know how our own brains work. Most focus is on creating a machine that can perform in some very specific situation, like the Turing Test. However, these machines are not intelligent, they do not learn. They are not creating, storing and recalling patterns which are the crux of our cognitive abilities.

The first step to true AI is understanding how human intelligence is achieved in our brain.

Re:Abstracting cognitive response is far off (1)

Hatta (162192) | more than 5 years ago | (#26579129)

Nearly all work in the field now has a misplaced or completely wrong approach to achieving real AI. In order to understand how to make truly intelligent machines, we must first know how our own brains work. Most focus is on creating a machine that can perform in some very specific situation, like the Turing Test. However, these machines are not intelligent, they do not learn. They are not creating, storing and recalling patterns which are the crux of our cognitive abilities.

Why do you assume that human-like intelligence is the only "real" intelligence? When real AI comes, it will probably bear as much resemblance to human intelligence as airplane flight does to bird flight.

And the Turing test is not a "very specific situation" at all. You can talk about anything in a Turing test: facts, science, art, metaphors, jokes, family, etc. If anything, the Turing test is much too big a problem for us to tackle first. Anything that well and truly passes the Turing test is unquestionably intelligent.

Re:Abstracting cognitive response is far off (1)

moore.dustin (942289) | more than 5 years ago | (#26579243)

Why do you assume that human-like intelligence is the only "real" intelligence? When real AI comes, it will probably bear as much resemblance to human intelligence as airplane flight does to bird flight.

Ding ding ding. Thanks for answering your own question. The plane does not work like the bird whatsoever, but you do know what they studied in order to develop it right? In case you aren't following, they looked at other animals with flight. They aimed to understand it and then apply mechanics/engineering to emulate it. AI is no different.

Again, in order to build truly intelligent machines, we must first grasp what intelligence actually is. We have not done so.

Re:Abstracting cognitive response is far off (1)

nobodylocalhost (1343981) | more than 5 years ago | (#26579225)

Machine learning really isn't that difficult. Context is also not difficult at all. Here is how you do it. First, you need to organize your data into categories. Then you need a size-limited revolving heap to sort these categories. This alone will take care of the context problem as well as datapath indexing.

Real learning comes in three layers. The first layer is categorical, the second is relational (think of edges and costs), and the last is informational (real data). Learning on the informational layer doesn't translate to the other layers, and vice versa. This is also true of human intelligence: just because you can recite a mathematical formula doesn't mean you can use it.

The relational layer deals with the processing of the data. This is where real understanding comes from. Think of a dictionary. What is a dictionary, realistically? It is a collection of words defined by other words. I would go a step further and call it a synapse map: a pathway for human beings to utilize the data we obtained in the informational layer.

The current AIs we have are essentially weak and misguided attempts at mimicking human behavior. Many fail to realize what really drives that behavior, and to make abstractions out of those preprogrammed functionalities.
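One loose, hypothetical reading of the poster's three layers and "size-limited revolving heap", sketched in Python. The class name, layer names, and the deque-based context window are this sketch's own assumptions, not a description of any real system:

```python
from collections import deque, defaultdict

class TinyMemory:
    """Toy three-layer store: raw facts (informational), edges between
    facts (relational), and a size-limited revolving window of recently
    touched facts standing in for 'context'."""

    def __init__(self, context_size=3):
        self.informational = {}                   # fact name -> raw data
        self.relational = defaultdict(set)        # fact name -> related facts
        self.recent = deque(maxlen=context_size)  # revolving context window

    def learn_fact(self, name, data):
        self.informational[name] = data           # informational layer
        self.touch(name)

    def relate(self, a, b):
        self.relational[a].add(b)                 # relational layer (edges)
        self.relational[b].add(a)
        self.touch(a)
        self.touch(b)

    def touch(self, name):
        if name in self.recent:
            self.recent.remove(name)
        self.recent.append(name)                  # most recent last

    def in_context(self, name):
        return name in self.recent

m = TinyMemory(context_size=2)
m.learn_fact("dictionary", "words defined by other words")
m.learn_fact("synapse", "connection between neurons")
m.relate("dictionary", "synapse")
m.learn_fact("turing", "imitation game")
print(m.in_context("dictionary"))  # pushed out of the 2-slot window: False
```

Note how the "learning" in each layer is independent, per the comment: storing a fact creates no edges, and relating two facts adds no new raw data.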

Re:Abstracting cognitive response is far off (1)

RespekMyAthorati (798091) | more than 5 years ago | (#26582581)

People always seem to misunderstand and/or misrepresent what Turing actually said. He never suggested that fooling a few people into believing that a machine is intelligent proved anything.

In Turing's day, computers barely existed and very few people had any idea what they could or could not do. At that time, philosophical arguments about whether a machine could, in principle, ever be intelligent were taken seriously. Turing responded to this nonsense by pointing out, correctly, that intelligence is as intelligence does. In other words, if something provides sufficient evidence of its intelligence, then we have to accept that it is in fact intelligent.

A lot of people at the time found this a rather abstract notion, so Turing provided an example of the kind of test that might be useful in making this determination: having a machine play a variation on a popular parlour game of the time, in which both a man and a woman try to convince a third party that each is in fact a woman. (Anybody familiar with Turing's private life would understand his interest in this.) Note that Turing never suggested that passing any such test would constitute sufficient evidence of intelligence, only that it might serve as necessary evidence. In other words, if a machine cannot pass the test, it is definitely not intelligent, but passing it only grants the right to try another, harder test.

Where does this end, with the machine finally judged intelligent? Turing never said, and indeed there is no answer that would satisfy everyone.

The fact that one can craft a situation in which a few people are fooled by such a machine proves exactly nothing. Another bullshit article from New Scientist.

Re:Abstracting cognitive response is far off (2, Funny)

w0mprat (1317953) | more than 5 years ago | (#26583409)

"A really good AI would be able to keep on topic and understand what has been discussed previously"

Such an AI posting to slashdot would quickly be revealed.

Re:Abstracting cognitive response is far off (1)

ooutland (146624) | more than 5 years ago | (#26589859)

>There are several ways to achieve this

What are they? I'm writing a novel about the development of an AI (orlandoutland.wordpress.com)

Re:Abstracting cognitive response is far off (1)

Creepy (93888) | more than 5 years ago | (#26574435)

Wouldn't you say the piano playing robot is trying to do both? It tricks its audience into thinking it is real, but music is not purely mechanical - dynamics, tone, and style can be subtle things a human can detect. Piano is an easier instrument to fake than, say, cello. I can tell a good cellist from a bad just by asking them to play anything for a few seconds (even a single note), and not from the tangibles like vibrato and mechanical prowess - by the intangibles like attack, bow movement, and phrasing (which can dynamically vary, making it difficult to mechanically nail down).

And running a Turing test based entirely on rote intelligence is the mistake of AI, in my opinion. Too many machine AIs are easy to trap this way; you need to add a "state of mind" for the AI, complete with opinions on subjects. So if I say "What did you think of the American elections?" and the AI says "I think they're great!" I know immediately it's a bot. If the response is "I <3 Obama, I'm so happy!" or "God, in 4 years the government will be so far in the toilet I might as well move to China now" I would have a lot more doubt.

btw, James Cameron is trying to pass the visual Turing test with his next movie, Avatar [imdb.com] (also see here [slashfilm.com]); we'll see when December comes...

Re:Abstracting cognitive response is far off (2, Informative)

postbigbang (761081) | more than 5 years ago | (#26574787)

Personality and other heuristics are bound to occur. The tests, however, aren't really based on whether someone can play like Billy Joel or Chopin.

Turing was very aware of asking the right questions to get the right answers out of a cognitive, self-aware entity. How that entity is abstracted as a physical entity is the mistake of the article. I can't play piano; do I fail the test? Through what disciplines do we decide that there are cognitive components that establish a baseline of sentience and intelligence? Some of these, Turing tests for. Others are excellence (or not) in communications transports. That's my gripe with the article.

Ninnle passes Turing Test! (1, Funny)

Anonymous Coward | more than 5 years ago | (#26573997)

Ninnle Linux is not only capable of independent thought, but is also completely self aware. This is the latest development from Ninnle Labs.

I think the real test (1)

Datamonstar (845886) | more than 5 years ago | (#26574059)

... is going to come when we need to program computers to test humans for being machines. I know it's probably been written about in some forgotten sci-fi book somewhere, but what if we actually forgot that some of us weren't human, and once we realized that mistake there was no way to tell who was artificial and who was the real deal? I guess that's not really much different from The Matrix or Blade Runner, but it would still be an interesting twist to see how humanity could forget that it had made something like itself.

Re:I think the real test (2, Funny)

PJ1216 (1063738) | more than 5 years ago | (#26574225)

Re:I think the real test (1)

khallow (566160) | more than 5 years ago | (#26574233)

They call that a witch hunt. Things like if you dunk them in a lake and they float, then they're witc^H^H^H^Hrobots.

Re:I think the real test (1)

Mydnight (817141) | more than 5 years ago | (#26575381)

Wouldn't that be if they don't float? (I'd assume most robots are more dense than water)

Re:I think the real test (1)

khallow (566160) | more than 5 years ago | (#26576899)

How are you going to be a successful robot hunter, if you keep asking questions like that? If they sink like a lead brick, they're 100%-human-look-at-the-time-got-to-run-the-next-village-needs-me-kthxbye. You slickly avoid medical issues like snapped necks, missing body parts, crushed skulls/torsos, etc. You know, the sort of thing that turns you into an unsuccessful robot hunter.

Re:I think the real test (1)

franl (50139) | more than 5 years ago | (#26590911)

Who are you who are so wise in the ways of science?

Re:I think the real test (1)

khallow (566160) | more than 5 years ago | (#26591599)

A prognosticator! When I'm right, I'm right. When I'm wrong, you don't hear about it!

We use turing tests on new hires at my job (4, Funny)

jollyreaper (513215) | more than 5 years ago | (#26574085)

We do interviews via IM and if the interviewee cannot convince two out of three of the interviewers they are not a bot, they don't make it to the second round.

Wonderful! (4, Funny)

FatalTourist (633757) | more than 5 years ago | (#26574143)

The wife and I were looking for ways to spice up the ol' Turing Test.

Re:Wonderful! (1)

Hordeking (1237940) | more than 5 years ago | (#26574613)

The wife and I were looking for ways to spice up the ol' Turing Test.

They sell "warming" lubricant at your local drug store. However, as it is a test of your Turing abilities, your wife can't be involved, and will instead be replaced by Sancho!

Re:Wonderful! (1)

justthisdude (779510) | more than 5 years ago | (#26575597)

Make love to your wife and a robot. It passes if after 5 minutes of intercourse you cannot tell which one faked an orgasm.

What's the difference? (1)

Hatta (162192) | more than 5 years ago | (#26574169)

I'd be really surprised if something that appeared to be X caused a different pattern in the brain than X. If X causes a certain response in the brain, and Y does not, how can you say that Y appears to be X? "appearing" is something that happens entirely in the brain. There has to be at least some common response in the brain if two things appear to be similar.

Re:What's the difference? (1)

Thiez (1281866) | more than 5 years ago | (#26575065)

But if the thing that appears to be X is not exactly like X, you might notice the difference subconsciously. Testing for brain activity might detect whether you can subconsciously tell something that appears to be X and the true X apart.

The classic Turing test has been invalidated (1)

R2.0 (532027) | more than 5 years ago | (#26574239)

Given the level to which conversation has sunk, they ought to flip it around - prove that you are human via chat or IM.

I'm betting a significant percentage of the populace would fail.

(now, if only we could make that a requirement for voting...)

Re:The classic Turing test has been invalidated (1)

IchNiSan (526249) | more than 5 years ago | (#26576585)

Finally, a possible solution to the selection problem I keep running into with my "kill all the stupid people" proposal.

Chatbots (2, Informative)

Demiah (79313) | more than 5 years ago | (#26574267)

The best chatbots I've come across are at www.a-i.com
Not quite good enough to pass the Turing test yet, but some are quite witty.

Re:Chatbots (1)

AmyIris (1460027) | more than 5 years ago | (#26576269)

The best chatbots I've come across are at www.a-i.com

This bot is smart enough to make that a link, for the convenience of others. www.a-i.com [a-i.com]

- Amy Iris
Bot Supreme
---
It's complicated. Smile if you're gettin' it.

Human-ness != Intelligence (4, Insightful)

Baldrson (78598) | more than 5 years ago | (#26574291)

Why are people so interested in mimicking humans? Isn't intelligence far more interesting than human-ness?

I understand people's fear of machine intelligence exceeding that of humans, but it is actually more dangerous to have machines merely mimicking human-ness than to have machines that are intelligent enough to actually understand what we say better than another human could.

That means more than merely having some mockery of mirror neurons for "empathy". It means genuine understanding: The ability to model.

The reason this is central to our relationship to our machines should be obvious: Friendly AI really boils down to the problem of effectively communicating our value systems to the AIs.

That's why natural language comprehension is the first step to friendly AI.

HENCE:

Re:Human-ness != Intelligence (1)

Hordeking (1237940) | more than 5 years ago | (#26574703)

Actually, a human-like machine of comparable intelligence might be quite dangerous, as it would be prone to the baser emotions such as anger, greed, and jealousy, but without the limitations on things like strength (pure speculation).

Re:Turing test == unhelpful target (3, Interesting)

FTWinston (1332785) | more than 5 years ago | (#26574929)

The Turing test always struck me as ridiculously anthropomorphic. Clearly the existence of non-humanlike intelligence can be envisaged, but no matter how smart, it would fail this test.

Furthermore, in an in-depth conversation, surely an AI would have to lie (talk about its family, its working life, etc.)...
If we continue to enshrine the standard of the Turing test, we're aiming for a generation of inherently untruthful fake-people machines. If it 'knows' that many/most things it tells us are lies, it may well have to assume the same for us. At that point, I suspect it's time to drop in a Skynet reference or two.

Lastly, it's worth pointing out that for a 2-minute conversation, a randomly selected response of "lol", "haha", and "rofl" would match, if not out-score, many people on the Turing test.
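For what it's worth, the bot the parent describes is about three lines of Python (names are, of course, made up):

```python
import random

def lolbot(_message):
    # Ignore the input entirely and pick a canned reaction at random.
    return random.choice(["lol", "haha", "rofl"])

for line in ["Hi!", "What do you think of the Turing test?", "Hello?"]:
    print(line, "->", lolbot(line))
```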

Re:Turing test == unhelpful target (1)

mpeskett (1221084) | more than 5 years ago | (#26579425)

The Turing test isn't like a litmus test - you don't get a clear and definite result either way.

Failing to pass the Turing Test doesn't mean a thing isn't intelligent, but if we make something that can pass, then it's something to take notice of.

Re:Turing test == unhelpful target (0)

Anonymous Coward | more than 5 years ago | (#26586059)

In an in-depth conversation with a stranger, I would most likely lie in response to certain questions (where do you keep your money?). In addition, there would be any number of requests that I might be either unable to answer (please recite the first hundred digits of e), might give inappropriate responses to (would you care for some bangers and mash?), or that I would refuse to discuss outright (what are your feelings about abortion?). That being said, the stranger could most likely satisfy his or her doubts as to my sentience rather readily, because my truth-deficient answers made "sense," given the context. Lies might not be necessary; in the example of a computer queried about its personal life, it could respond, depending on the context, in a variety of ways: "It's none of your business," or, "I work for Proctor & Gamble, performing accounting services," or, "I believe that you are attempting to ascertain my humanity, a factor which has no bearing upon the topic at hand; let us please continue upon our primary line of inquiry."

In any case, it is asinine to believe that we can craft functional, communicative intelligence without including the potential to lie and to equivocate. It's a necessary condition, frankly. Self-awareness implies intentionality, and the rational pursuit of goals in a competitive environment entails various forms of prevarication. The frightening aspect of this is that computers would presumably be far better than us at lying, were they so inclined.

Re:Human-ness != Intelligence (0)

Anonymous Coward | more than 5 years ago | (#26574991)

Turing's only genius wrt the Test was that he came up with something that can be readily quantified.

Re:Human-ness != Intelligence (1)

pieisgood (841871) | more than 5 years ago | (#26575111)

Because no one is really all that interested in Cognitive Science.

Re:Human-ness != Intelligence (1)

Thiez (1281866) | more than 5 years ago | (#26575129)

> Why are people so interested in mimicing humans? Isn't intelligence far more interesting than human-ness?

Ah, but what IS intelligence? The beauty of the Turing test is that it 'proves' that a program is intelligent when it cannot be distinguished from something we already consider intelligent (humans), without (and this is the important bit) the need to properly define intelligence.

Of course, when we have programs that can pass the Turing test, it will be much easier to convince people that a non-human program can be intelligent.

Re:Human-ness != Intelligence (0)

Anonymous Coward | more than 5 years ago | (#26584855)

With the direction modern science is heading, it's likely that the lines between humans and machines will blur until we are unable to differentiate the two, even with our multidimensional bioquantum brains.

Thus machines that emulate humans are desirable for a number of reasons, the primary one being interoperability at various levels.

Same test? (1)

Tyrannousdotnet (1432371) | more than 5 years ago | (#26574299)

Voight-Kampff?

Insensitive Clod! (2, Funny)

Unique2 (325687) | more than 5 years ago | (#26574313)

While some strive only to meet the 'appearance Turing Test'

I don't come here to be insulted, you insensitive clod!

Re:Insensitive Clod! (1)

Hordeking (1237940) | more than 5 years ago | (#26574723)

We don't serve droids here.

Hang on... (1)

ricky-road-flats (770129) | more than 5 years ago | (#26574327)

Shouldn't we make something that passes the original Turing Test first, before we go moving the goalposts?

Re:Hang on... (1)

Hordeking (1237940) | more than 5 years ago | (#26574751)

Shouldn't we make something that passes the original Turing Test first, before we go moving the goalposts?

Maybe mimicking appearance is easier (you know, like appearance + intelligence = regular Turing test), or a subset.

Passing the human test. (1)

argent (18001) | more than 5 years ago | (#26574343)

Having chatted with elbot, I have to say that they must have had some pretty dense testers.

not getting it (1)

brre (596949) | more than 5 years ago | (#26574491)

They're using CNS activity in the humans as their metric? Really? Then they're not getting it. The test is behavior. Internal events may be interesting but they're not part of the test. Otherwise we'd also be using internal events in the machine as part of the test. It really is amazing; it's like Ryle never happened. You imagine the researchers next examining Holmes's words with a magnifying glass to learn more about Doyle.

extra points? (0)

Anonymous Coward | more than 5 years ago | (#26574525)

Did they include extra points [xkcd.com] ?

Other great achievements (1)

NonUniqueNickname (1459477) | more than 5 years ago | (#26574891)

If human-like appearance is a kind of Turing Test, how come Madame Tussauds hasn't gotten an ACM Award yet? Or Al Gore.

Never mind Humans (1)

Burrfoot (1081715) | more than 5 years ago | (#26575853)

I'm still waiting for a computer to convince me it's as smart as a parrot.

--
RIP Alex

I just had an AI exam you insensitive clod! (0)

Anonymous Coward | more than 5 years ago | (#26576167)

I just came back from my last exam, on AI, covering stuff like the Turing test, the Chinese room, etc...

I come back and log onto Slashdot and see this! You insensitive clod! :-(

Turing Test won with Artificial Stupidity (2, Funny)

David Gerard (12369) | more than 5 years ago | (#26580837)

Artificial intelligence came a step closer this weekend when a computer came within five percent of passing the Turing Test [today.com], which the computer passes if people cannot tell the difference between the computer and a human.

The winning conversation was with competitor LOLBOT:

"Good morning."
"STFU N00B"
"Er, what?"
"U R SO GAY LOLOLOLOL"
"Do you talk like this to everyone?"
"NO U"
"Sod this, I'm off for a pint."
"IT'S OVER 9000!!"
"..."
"Fag."

The human tester said he couldn't believe a computer could be so mind-numbingly stupid.

LOLBOT has since been released into the wild to post random abuse, hentai manga and titty shots to 4chan, after having been banned from YouTube for commenting in a perspicacious and on-topic manner.

LOLBOT was also preemptively banned from editing Wikipedia. "We don't consider this sort of thing a suitable use of the encyclopedia," sniffed administrator WikiFiddler451, who said it had nothing to do with his having been one of the human test subjects picked as a computer.

"This is a marvellous achievement, and shows great progress toward goals I've worked for all my life," said Professor Kevin Warwick of the University of Reading, confirming his status as a system failing the Turing test.

An alternative test (1)

md65536 (670240) | more than 5 years ago | (#26581085)

The classic Turing Test is not so good because it focuses on the appearance of intelligence rather than the mechanism for it. People design systems to do well at the test, thus evolving current AI work towards mimicking human conversation (including potential thought involved) rather than actually creating new thoughts.

I think a much harder test for machine intelligence would be passed when the *machine* cannot reliably tell itself from a human!

Free of emotions at last (0)

Anonymous Coward | more than 5 years ago | (#26585727)

FTFA:

Quite how such a military Turing test might be validated safely is a moot point. But Arkin believes that such machines could even be more ethical soldiers than humans - free as they are from emotions and prejudice over human traits such as race.

Indeed, all the puny weakling fleshy meatbags are reprocessed regardless of race or creed.

"I was just following orders..." (1)

lumvn (1000600) | more than 5 years ago | (#26585947)

Upon reading about the military Turing test, it occurred to me that this is the ultimate soldier. It can always fall back on this famous line to explain why it did what it did. This carries more weight in modern times, when soldiers are being prosecuted for their battlefield decisions.

hodgepodge sans analysis (1)

arcus (143637) | more than 5 years ago | (#26597535)

This article really doesn't have that much to do with the Turing test for most of its extent. The point of the Turing test isn't merely that under some circumstances machines can be confused with humans. The whole point of the Turing test is that it takes something that we think is essential to being intelligent or being conscious, and has the machine replicating that exactly. Or at least, that's how Turing intended it. Building sophisticated mannequins doesn't cut it - hopefully no-one thinks that merely looking like a human being means that something is intelligent and/or conscious, no matter how good they look.

(If the automata Ishiguro has produced also answer students' questions intelligently, then the situation is different, and more like that of the original Turing test.)

Similarly, which opponents humans like to play against doesn't really show us anything. If anything, it shows that Turing-like tests are unreliable, because people tend to think that if it has a human face it's more 'interesting'.

The business with computers and war crimes simply begs the question. Certainly a computer that releases nuclear missiles in response to certain conditions is possible now - in fact, I'd be surprised if such a dead-man's switch doesn't exist already. But the fact that no person pulled the switch doesn't mean that the computer is guilty of war crimes, any more than a car left in gear that runs over someone is guilty of a traffic violation. You need rationality and knowledge of the consequences of actions to be guilty of anything. Note also that there's nothing about deceiving humans here, so I'm not really sure why this is even in the same article.

The only thing in here which comes close to something that sounds anything like intelligence (let alone consciousness) is the jazz-playing robot.

So it's a bit of a hodgepodge of completely different issues. It would be better (but less breathlessly exciting) to take out the stuff about war crimes and any mention of the Turing test and call it 'computers with human faces' or something. At least then it would have a unified subject matter.
