Robotics Software

Variations On the Classic Turing Test

holy_calamity writes "New Scientist reports on the different flavors of Turing Test being used by AI researchers to judge the human-ness of their creations. While some strive only to meet the 'appearance Turing Test' and build or animate characters that look human, others are investigating how robots can be made to elicit the same brain activity in a person as interacting with a human would."
  • by Anonymous Coward on Friday January 23, 2009 @10:02AM (#26573827)

    Arrgg... I just took the test and failed. Does this mean that I'm ready to run Linux, and when I die I'll be running FreeBSD?

  • Syntax (Score:5, Funny)

    by mfh ( 56 ) on Friday January 23, 2009 @10:07AM (#26573863) Homepage Journal

    FORMALISTS' MOTTO: Take care of the syntax and the semantics will take care of itself.

    Also, if you are animating a dude, he is thinking about sex. If you are animating anyone else, they are thinking about shopping.

    Technically AI is not hard, you just need to lower your mind-mechanics bar and focus on trailer parks, and folk psychology.

  • Sweet (Score:1, Interesting)

    It really is kind of creepy how close they've come to actual life-like robotics... but my question is, how life-like should a robot really be? I mean, are we going to be replacing friends with these guys, or are they meant to serve us? Don't get me wrong, I have a great respect for these scientists, I just wonder how these sorts of real robots will fare on the market.
    • Re:Sweet (Score:4, Funny)

      by vertinox ( 846076 ) on Friday January 23, 2009 @10:58AM (#26574467)

      Don't get me wrong, I have a great respect for these scientists, I just wonder how these sorts of real robots will fare on the market.

      I think the idea is that robots will be used to do things that humans aren't willing to put up with.

      Which means if you can't find someone to put up with you, then maybe a robot is for you.

      • by cffrost ( 885375 )

        I think the idea is that robots will be used to do things that humans aren't willing to put up with.

        So you're saying, for example, you could make Bossbot 0xFF lick your balls while you fuck Fembot 0x01 in the ass, then make Fembot 0x02 suck your cock? Vertinox, get your mind out of the gutter!

    • Sure.

      Friends past college aren't up for going to Taco Bell at 12:37 at night anymore, or for carrying on a really tough conversation about four editions of Dante's Inferno.

    • Replacement friends? Sign me up!

      While we're at it, I'd like a robotic doppelganger of myself to attend boring management meetings while I have a pint at the pub.
  • by postbigbang ( 761081 ) on Friday January 23, 2009 @10:14AM (#26573957)

    The Turing test is for apparent 'human' intelligence, where robotics adds communications via 'expressiveness'. These are two different vectors: rote intelligence and capacity to communicate (via body language, and the rest of linguistics/heuristics).

    The article doesn't abstract the basic cognitive capacity because it entangles it with the communications medium. The Turing Test ought to be done in a confessional, where you don't get to see the device taking the test. That would also provide a feedback loop on the test.

    • by vertinox ( 846076 ) on Friday January 23, 2009 @10:49AM (#26574367)

      The Turing test is for apparent 'human' intelligence, where robotics adds communications via 'expressiveness'. These are two different vectors: rote intelligence and capacity to communicate (via body language, and the rest of linguistics/heuristics).

      I don't think body language is the hard part, or that important, considering that the majority of human communication these days involves just text or voice, without seeing the other person. (That, and certain people can't interpret body language anyway.)

      The key problem with AI is:

      Context
      Context
      Context

      The number one failure of most Turing programs is that they only respond to the sentence you just typed, without any context from the conversation beforehand. A really good AI would be able to stay on topic and understand what has been discussed previously, so that it can expand on the topic rather than simply responding to the current line.

      There are several ways to achieve this, but right now I don't know of any program out there that does this right. The easiest way to tell if you are talking to a chat bot is to refer to something earlier in the conversation and see if it responds appropriately.
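The "refer back" probe described above can be sketched as a toy comparison. Both bot classes and the probe itself are invented for illustration, not real chatbot code: a bot that only keys off the most recent line fails the probe, while one that keeps even a crude rolling history passes it.

```python
from collections import deque

class StatelessBot:
    """Toy bot that only reacts to the most recent line (hypothetical)."""
    def reply(self, line):
        if "earlier" in line:
            return "What do you mean?"  # no memory to draw on
        return "Interesting, tell me more."

class ContextBot:
    """Toy bot that keeps a bounded rolling history of the conversation."""
    def __init__(self, window=10):
        self.history = deque(maxlen=window)  # size-limited context window
    def reply(self, line):
        if "earlier" in line and self.history:
            return "You mentioned: " + self.history[0]
        self.history.append(line)
        return "Interesting, tell me more."

# The probe: mention a topic, then refer back to it later.
def refer_back_probe(bot):
    bot.reply("I just reread Dante's Inferno.")
    answer = bot.reply("What did I bring up earlier?")
    return "Dante" in answer

print(refer_back_probe(StatelessBot()))  # False: fails the probe
print(refer_back_probe(ContextBot()))    # True: recalls the earlier topic
```

Real systems would need actual language understanding rather than keyword matching, but the probe structure — plant a reference, then test recall — is the same.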

      • The article makes the mistake, however, of adding heuristics that really don't have anything to do with the tenets of the Turing Test. Robotics isn't really relevant here; only the cognitive results are. Robotics is one discipline; cognitive response is another. That's my problem with it.

      • The easiest way to tell if you are talking to a chat bot

        Reaction time is a factor in this, so please pay attention.

        You're in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it's crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't, not without your help. But you're not helping. Why is that?

      • by moore.dustin ( 942289 ) on Friday January 23, 2009 @12:37PM (#26575849) Homepage
        You are only partially correct. The focus on context is misplaced, though you are on the right path. Simply remembering words or topics that have been mentioned earlier in the same discussion does not say anything about intelligence.

        The main problem with AI is learning.

        Nearly all work in the field now has a misplaced or completely wrong approach to achieving real AI. In order to understand how to make truly intelligent machines, we must first know how our own brains work. Most focus is on creating a machine that can perform in some very specific situation, like the Turing Test. However, these machines are not intelligent, they do not learn. They are not creating, storing and recalling patterns which are the crux of our cognitive abilities.

        The first step to true AI is understanding how human intelligence is achieved in our brain.
        • by Hatta ( 162192 )

          Nearly all work in the field now has a misplaced or completely wrong approach to achieving real AI. In order to understand how to make truly intelligent machines, we must first know how our own brains work. Most focus is on creating a machine that can perform in some very specific situation, like the Turing Test. However, these machines are not intelligent, they do not learn. They are not creating, storing and recalling patterns which are the crux of our cognitive abilities.

          Why do you assume that human-like intelligence is the only "real" intelligence? When real AI comes, it will probably bear as much resemblance to human intelligence as airplane flight does to bird flight.

          • Why do you assume that human-like intelligence is the only "real" intelligence? When real AI comes, it will probably bear as much resemblance to human intelligence as airplane flight does to bird flight.

            Ding ding ding. Thanks for answering your own question. The plane does not work like the bird whatsoever, but you do know what they studied in order to develop it, right? In case you aren't following, they looked at other animals with flight. They aimed to understand it and then apply mechanics/engineering to emulate it. AI is no different.

            Again, in order to build truly intelligent machines, we must first grasp what intelligence actually is. We have not done so.

        • Machine learning really isn't that difficult. Context is also not difficult at all. Here is how you do it. First, you need to organize your data into categories. Then, you will need a size-limited revolving heap to sort these categories. This alone will take care of the context problem as well as datapath indexing. Real learning comes in three layers. The first layer is categorical, the second layer is relational (think of edges and costs), and the last layer is informational (real data). Learning on the informat
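One possible reading of the three-layer scheme sketched above, with all class and field names invented for illustration: a categorical layer of labels, a relational layer of weighted edges between labels, and an informational layer of raw items, with the "size-limited revolving heap" standing in as a cap that evicts the least-used category.

```python
import heapq
from collections import defaultdict

class LayeredStore:
    """Loose sketch of a three-layer store (hypothetical names throughout)."""
    def __init__(self, max_categories=4):
        self.max_categories = max_categories
        self.use_count = defaultdict(int)   # categorical layer: label -> uses
        self.edges = defaultdict(int)       # relational layer: (a, b) -> weight
        self.items = defaultdict(list)      # informational layer: raw data
        self.last_category = None

    def add(self, category, item):
        self.use_count[category] += 1
        self.items[category].append(item)
        if self.last_category and self.last_category != category:
            # strengthen the relation between consecutive topics
            self.edges[(self.last_category, category)] += 1
        self.last_category = category
        self._evict()

    def _evict(self):
        # "size-limited revolving heap": drop the least-used category
        if len(self.use_count) > self.max_categories:
            drop = heapq.nsmallest(1, self.use_count, key=self.use_count.get)[0]
            del self.use_count[drop]
            self.items.pop(drop, None)

store = LayeredStore(max_categories=2)
store.add("music", "cello bowing")
store.add("ai", "turing test")
store.add("ai", "chat bots")
store.add("music", "piano dynamics")
store.add("weather", "rain")  # least-used category is evicted right away
print(sorted(store.use_count))  # ['ai', 'music']
```

Whether this kind of bookkeeping amounts to "learning" is exactly what the surrounding comments dispute; the sketch only shows that the bookkeeping itself is cheap.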

        • People always seem to misunderstand and/or misrepresent what Turing actually said. He never suggested that fooling a few people into believing that a machine is intelligent proved anything.

          In Turing's day, computers barely existed and very few people had any idea what they could or could not do. At that time, philosophical arguments about whether a machine could, in principle, ever be intelligent were taken seriously. Turing responded to this nonsense by pointing out, correctly, that intelligence is as intel
      • Re: (Score:3, Funny)

        by w0mprat ( 1317953 )

        "A really good AI would be able to keep on topic and understand what has been discussed previously"

        Such an AI posting to slashdot would quickly be revealed.

      • >There are several ways to achieve this

        What are they? I'm writing a novel about the development of an AI (orlandoutland.wordpress.com)

    • by Creepy ( 93888 )

      Wouldn't you say the piano playing robot is trying to do both? It tricks its audience into thinking it is real, but music is not purely mechanical - dynamics, tone, and style can be subtle things a human can detect. Piano is an easier instrument to fake than, say, cello. I can tell a good cellist from a bad one just by asking them to play anything for a few seconds (even a single note), and not by the tangibles like vibrato and mechanical prowess - by the intangibles like attack, bow movement, and phrasing.

      • Re: (Score:3, Informative)

        by postbigbang ( 761081 )

        Personality and other heuristics are bound to occur. The tests, however, aren't really based on whether someone can play like Billy Joel or Chopin.

        Turing was very aware of asking the right questions to get the right answers of a cognitive, self-aware entity. How that entity is abstracted as a physical entity is the mistake of the article. I can't play piano-- do I fail the test? Through what disciplines do we decide that there are cognitive components that establish a baseline of sentience and intelligence?

  • by Anonymous Coward
    Ninnle Linux is not only capable of independent thought, but is also completely self aware. This is the latest development from Ninnle Labs.
  • ... is going to come when we need to program computers to test humans for being machines. I know it's probably been written about in some forgotten sci-fi book somewhere, but what if we actually forgot that some of us weren't human, and once we realized that mistake there was no way to tell who was artificial and who was the real deal? I guess that's not really much different than The Matrix, or Blade Runner, but still it would be an interesting twist to see how humanity could forget that they had made somet
  • by jollyreaper ( 513215 ) on Friday January 23, 2009 @10:27AM (#26574085)

    We do interviews via IM and if the interviewee cannot convince two out of three of the interviewers they are not a bot, they don't make it to the second round.

  • Wonderful! (Score:5, Funny)

    by FatalTourist ( 633757 ) on Friday January 23, 2009 @10:31AM (#26574143) Homepage
    The wife and I were looking for ways to spice up the ol' Turing Test.
    • The wife and I were looking for ways to spice up the ol' Turing Test.

      They sell "warming" lubricant at your local drug store. However, as it is a test of your Turing abilities, your wife can't be involved, and will instead be replaced by Sancho!

    • Make love to your wife and a robot. It passes if after 5 minutes of intercourse you cannot tell which one faked an orgasm.
  • I'd be really surprised if something that appeared to be X caused a different pattern in the brain than X. If X causes a certain response in the brain, and Y does not, how can you say that Y appears to be X? "appearing" is something that happens entirely in the brain. There has to be at least some common response in the brain if two things appear to be similar.

    • by Thiez ( 1281866 )

      But if the thing that appears to be X is not exactly like X, you might notice the difference subconsciously. Testing for brain activity might detect whether you can subconsciously tell apart something that appears to be X and the true X.

  • Given the level to which conversation has sunk, they ought to flip it around - prove that you are human via chat or IM.

    I'm betting a significant percentage of the populace would fail.

    (now, if only we could make that a requirement for voting...)

    • Finally, a possible solution to the selection problem I keep running into with my "kill all the stupid people" proposal.
  • Chatbots (Score:2, Informative)

    by Demiah ( 79313 ) *

    The best chatbots I've come across are at www.a-i.com
    Not quite good enough to pass the Turing test yet, but some are quite witty.

    • The best chatbots I've come across are at www.a-i.com

      This bot is smart enough to make that a link, for the convenience of others. www.a-i.com [a-i.com]

      - Amy Iris
      Bot Supreme
      ---
      It's complicated. Smile if you're gettin' it.

  • by Baldrson ( 78598 ) * on Friday January 23, 2009 @10:43AM (#26574291) Homepage Journal
    Why are people so interested in mimicking humans? Isn't intelligence far more interesting than human-ness?

    I understand people's fear of machine intelligence exceeding that of humans, but it is actually more dangerous to have machines merely mimicking human-ness than to have machines that are intelligent enough to actually understand what we say better than another human could.

    That means more than merely having some mockery of mirror neurons for "empathy". It means genuine understanding: The ability to model.

    The reason this is central to our relationship to our machines should be obvious: Friendly AI really boils down to the problem of effectively communicating our value systems to the AIs.

    That's why natural language comprehension is the first step to friendly AI.

    HENCE:

    • Actually, a human-like machine of comparable intelligence might be quite dangerous, as it would be prone to the baser emotions such as anger, greed, and jealousy, but without the limitations on things like strength (pure speculation).
    • The Turing test always struck me as ridiculously anthropomorphic. Clearly the existence of non-humanlike intelligence can be envisaged. But no matter how smart, it would fail this test.

      Furthermore, in an in-depth conversation, surely an AI would have to lie (talk about its family, its working life, etc)...
      If we continue to enshrine the standard of the Turing test, we're aiming for a generation of inherently untruthful fake-people machines. If it 'knows' that many/most things it tells us are lies, it ma
      • The Turing test isn't like a litmus test - you don't get a clear and definite result either way.

        Failing to pass the Turing Test doesn't mean a thing isn't intelligent, but if we make something that can pass, then it's something to take notice of.
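The graded, non-litmus reading above can be illustrated with made-up numbers: treat each judge's call as a coin flip and ask how surprising a given hit rate would be if the judges were purely guessing. The function and figures here are invented for illustration, not from any actual Turing test run.

```python
from math import comb

def p_at_least(correct, trials, p=0.5):
    """Probability of at least `correct` right calls out of `trials`
    if each call were an independent guess with success chance p."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# 18 correct identifications out of 30 chats is consistent with guessing;
# 25 out of 30 would be hard to explain as chance.
print(round(p_at_least(18, 30), 3))  # 0.181
print(round(p_at_least(25, 30), 5))  # 0.00016
```

On this view "passing" is a statistical statement about how often judges beat chance, not a single yes/no verdict.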

    • Because no one is really all that interested in Cognitive Science.
    • by Thiez ( 1281866 )

      > Why are people so interested in mimicking humans? Isn't intelligence far more interesting than human-ness?

      Ah, but what IS intelligence? The beauty of the Turing test is that it 'proves' that a program is intelligent when it cannot be distinguished from something that we already consider to be intelligent (humans), without (and this is the important bit) the need to properly define intelligence.

      Of course when we have programs that can pass the Turing test it will be much easier to convince people that a

  • Voight-Kampff?
  • While some strive only to meet the 'appearance Turing Test'

    I don't come here to be insulted, you insensitive clod!

  • Shouldn't we make something that passes the original Turing Test first, before we go moving the goalposts?
    • Shouldn't we make something that passes the original Turing Test first, before we go moving the goalposts?

      Maybe mimicking appearance is easier (you know, like appearance + intelligence = regular Turing test) or a subset.

  • Having chatted with elbot, I have to say that they must have had some pretty dense testers.

  • They're using CNS activity in the humans as their metric? Really? Then they're not getting it. The test is behavior. Internal events may be interesting but they're not part of the test. Otherwise we'd also be using internal events in the machine as part of the test. It really is amazing; it's like Ryle never happened. You imagine the researchers next examining Holmes's words with a magnifying glass to learn more about Doyle.
  • If human-like appearance is a kind of Turing Test, how come Madame Tussauds hasn't gotten an ACM Award yet? Or Al Gore.
  • I'm still waiting for a computer to convince me it's as smart as a parrot.

    --
    RIP Alex

  • Artificial intelligence came a step closer this weekend when a computer came within five percent of passing the Turing Test [today.com], a test the computer passes if people cannot tell the computer from a human.

    The winning conversation was with competitor LOLBOT:

    "Good morning."
    "STFU N00B"
    "Er, what?"
    "U R SO GAY LOLOLOLOL"
    "Do you talk like this to everyone?"
    "NO U"
    "Sod this, I'm off for a pint."
    "IT'S OVER 9000!!"
    "..."
    "Fag."

    The human tester said he couldn't believe a computer could be so mind-numbingly stupid.

    LOLBOT has since been released into the wild to post random abuse, hentai manga and titty shots to 4chan, after having been banned from YouTube for commenting in a perspicacious and on-topic manner.

    LOLBOT was also preemptively banned from editing Wikipedia. "We don't consider this sort of thing a suitable use of the encyclopedia," sniffed administrator WikiFiddler451, who said it had nothing to do with his having been one of the human test subjects picked as a computer.

    "This is a marvellous achievement, and shows great progress toward goals I've worked for all my life," said Professor Kevin Warwick of the University of Reading, confirming his status as a system failing the Turing test.

  • The classic Turing Test is not so good because it focuses on the appearance of intelligence rather than the mechanism for it. People design systems to do well at the test, thus evolving current AI work towards mimicking human conversation (including potential thought involved) rather than actually creating new thoughts.

    I think a much harder test for machine intelligence would be passed when the *machine* cannot reliably tell itself from a human!

  • Upon reading about the military Turing test, it occurred to me that this is the ultimate soldier. It can always fall back on this famous quoted line to explain why it did what it did. This carries more weight in modern times, when soldiers are being prosecuted for their battle decisions.
  • This article really doesn't have that much to do with the Turing test for most of its extent. The point of the Turing test isn't merely that under some circumstances machines can be confused with humans. The whole point of the Turing test is that it takes something that we think is essential to being intelligent or being conscious, and has the machine replicating that exactly. Or at least, that's how Turing intended it. Building sophisticated mannequins doesn't cut it - hopefully no-one thinks that merel
