
Human vs Computer Intelligence 421

DrLudicrous writes "The NYTimes is running an article regarding tests devised to differentiate between human and computer intelligence. One example is captchas, which can consist of a picture of words, angled and superimposed. A human will be able to read past the superposition, while a computer will not, and thus fails the test. It also goes a bit into some of Turing's predictions of what computers would be like by the year 2000."
This discussion has been archived. No new comments can be posted.

  • Non-issue. (Score:3, Funny)

    by grub ( 11606 ) <slashdot@grub.net> on Tuesday December 10, 2002 @03:36PM (#4856484) Homepage Journal

    Anyone who has seen Star Trek: TNG knows that Data is a pretty smart fella.

  • I failed! (Score:4, Interesting)

    by Trusty Penfold ( 615679 ) <jon_edwards@spanners4us.com> on Tuesday December 10, 2002 @03:39PM (#4856529) Journal

    I did the gimpy [captcha.net] test.

    Results
    Result of the Test: FAIL

    You entered the following words:

    school
    tall
    warm

    The words possibly displayed in the image were:

    able
    tongue
    tongue
    full
    train
    picture
    shelf
    It switched pictures on me! Honest!!

  • can it get drunk?
  • by Fastball ( 91927 ) on Tuesday December 10, 2002 @03:40PM (#4856538) Journal
    The difference between computer and human intelligence is the human ability to revel in it. That is, taunt others. Until a computer can get in my grill and explain to me in colorful fashion that I am nothing more than a grab-ass-tic piece of *human* sh!t, I won't think much of computers.
    • for all you Red Dwarf Fans....two words...

      Smug Mode

  • by stratjakt ( 596332 ) on Tuesday December 10, 2002 @03:43PM (#4856573) Journal
    Welcome to The New York Times on the Web!
    For full access to our site, please complete this simple registration form.
    As a member, you'll enjoy:

    In-depth coverage and analysis of news events from The New York Times FREE
    Up-to-the-minute breaking news and developing stories FREE
    Exclusive Web-only features, classifieds, tools, multimedia and much, much more FREE
  • Is this a joke? (Score:4, Informative)

    by .sig ( 180877 ) on Tuesday December 10, 2002 @03:43PM (#4856579)
    Sure, computers aren't as smart as people. Wow.
    Computers are not good at complex pattern recognition. Wow.

    For the record, computers can recognize words like this, just not very easily. With a big enough dictionary and a lot of patience, you'd be surprised at what they can do. While still an undergrad I was able to write a rather simple program that would recognize images of the cardinal numerals, even if they were highly mangled, and I worked with a grad student on building something that could pick out certain features of a rotated image and, by comparing them with some sample features, rotate the image correctly.
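
    For the curious, plain template matching gets you surprisingly far down that road. A minimal sketch in Python, where tiny hand-made binary bitmaps stand in for real scans of each numeral (the 3x3 "templates" below are invented for illustration):

      import numpy as np

      # Hypothetical 3x3 binary bitmaps standing in for real digit scans;
      # a big enough dictionary of these is what tolerates the mangling.
      TEMPLATES = {
          "1": np.array([[0,1,0],[0,1,0],[0,1,0]]),
          "7": np.array([[1,1,1],[0,0,1],[0,0,1]]),
      }

      def classify(image):
          # Pick whichever template shares the most pixels with the input.
          return max(TEMPLATES, key=lambda d: int((TEMPLATES[d] == image).sum()))

      mangled = np.array([[0,1,0],[0,1,0],[1,1,0]])  # a "1" with a damaged corner
      print(classify(mangled))  # -> "1"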
    • Don't forget, though. These articles are aimed at the kinds of people who don't understand that computers are nothing more than machines that follow instructions, and are no smarter than the instructions given to them.

      The semicoherent and coherent articles that talk about the capabilities of algorithms rather than the machines they run on are all in the research journals.
      • "computers... are no smarter than the instructions given to them"

        Neither are you or anyone else.

    • You should sell your program to Project Gutenberg.
  • Human intelligence (Score:5, Interesting)

    by Reality Master 101 ( 179095 ) <<moc.liamg> <ta> <101retsaMytilaeR>> on Tuesday December 10, 2002 @03:44PM (#4856592) Homepage Journal

    We are never going to have a machine that is truly "human". Let me explain.

    That doesn't mean we won't have intelligent machines that can do just about anything intellectually that a human can do. A human being is more than just a smart computer. Our behavior is governed not only by the higher logic of our brain, but also by millions of years of bizarre -- often obsolete -- instincts. If you yanked a brain out of a body and hooked it to a computer, it would no longer be truly human because of the lack of hormonal responses that come from every part of the body.

    It's simply going to be too hard/impractical and, frankly, useless to make an intelligent machine that mimics every hormonal reaction and instinctual mechanism.

    We will have intelligent machines, but we will never have human machines.

    • And why bother (Score:4, Insightful)

      by DunbarTheInept ( 764 ) on Tuesday December 10, 2002 @03:54PM (#4856701) Homepage
      And furthermore, if what you need for the task at hand is a machine that behaves and thinks in every way just like a human being, then just hire a human being to do it.

      It's the differences between computers and humans that make computers so damn useful. Tell a human to add up a list of 200 numbers and he'll likely take a long time, and get the wrong answer, because humans suck at repetitive boring tasks beyond the limit of their attention spans.
    • by guidobot ( 526671 ) on Tuesday December 10, 2002 @03:56PM (#4856729)
      Hey, that's a truly well thought out and insightful post. EXCEPT that the article is about how to prevent computers from automatically signing up for Yahoo accounts (or pretending to be human online). Frankly, I don't think Yahoo is interested in the "lack of hormonal responses coming from every part of the body" -- unless they can find a hormone-testing software package they can use as part of the registration process.

      RTFA... that applies to moderators too.

    • by djembe2k ( 604598 ) on Tuesday December 10, 2002 @04:02PM (#4856793)
      You're mixing up levels here.

      No computer will have hormones, or millions of years of evolution, or bad hair days, or dendrites, or lots of things we have. But that's all beneath the surface, as it were. Turing's point is that whatever intelligence is beneath the surface, ultimately all we see is the phenomena of intelligence, its outward manifestations. If I decide whether or not you are an intelligent human (as opposed to a computer or a coffee table or a CD playing your voice), I don't see the gears turning inside your head, or really care if you've got actual gears or not. I just interact with you, and get an impression.

      The idea here is that to pass Turing's test, you create a machine with the outward appearance of all of those things, by abstracting the phenomena from the underlying causes.

      What your argument gets closer to is a slightly different point. Why would we want to create a computer that is indistinguishable from people? People make mistakes in their addition. People lie. People get depression and schizophrenia. People can be bastards. People don't want you to turn them off, and will fight like hell to stop you from doing it. If we really create an accurate simulation of human intelligence, one that acts like a person with neurons and hormones and everything else, you get all this baggage with it.

      I'd really like intelligent agents to search the web for me, to remind me about things I didn't tell them to remind me about, whatever. But I don't see the practical need to create a Turing-testable machine, unless it is really an interim step by the AI gurus to get to the programs I want. Now, there may be a theoretical need, a human drive to create Turing's definition of AI because the gauntlet has been thrown down, but that's a different animal, ironically enough.

      • by Anonymous Coward
        Actually, the biggest difference between computers and humans is their ability to interact with the world. A robot is a hybrid of the two, but even robots do not have the ability to sense everything that humans can, especially not at the same time. So yes, it is impractical and probably even undesirable to mimic a human completely, because all of those stimuli that we experience every day make us very unpredictable. Our algorithms (emotions) are so complex that we can't even fully describe them, much less replicate them in a robot/computer. To top it off, humans have an enormous concept of state (memory), such that a single experience in early childhood can affect us decades later. Sure, we could mimic these kinds of resources and sensing in a computer/robot, but would it make the robot/computer more useful to us?
    • by Esteban ( 54212 ) on Tuesday December 10, 2002 @04:15PM (#4856908) Journal
      That humans are too complicated for us to reproduce artificially is an empirical claim, and it's one that I think is likely true.

      Even if it turned out that we were able to produce what we'd now count as a "human machine," I think that we would then deny that it was human. That is, I suspect that it's a conceptual claim that there will never be any such thing as a human machine.

      No matter how human or intelligent a machine is, it'll never count as human (or even fully possessed of human intelligence, whatever that is), since the bar will be raised. (Consider that at one point, people thought the hallmark of being human was being rational and that the characteristic activity of rational beings was doing math...)

      When we've got a machine that passes all of the existing tests, someone'll ask "but why doesn't it cry during 'Sleepless in Seattle'?" or "why doesn't it hate Jar Jar?" or "does it get easily embarrassed?"
    • Sort of (Score:3, Interesting)

      by 0x0d0a ( 568518 )
      You can say "there are physical differences between person A and computer B", but the problem is that person A and person C also differ quite a bit.

      Saying "foo cannot be done" frequently results in someone being utterly wrong. Just a few decades ago, the idea of atomic power would have been laughable -- the ability to wipe an *entire city* away? How about having a person walk around on the moon? Unthinkable.

      So, at the moment it seems to be an insurmountably difficult problem. But, a few years ago, the same thing would have been said about problems that we're now starting to think of as doable via quantum computers -- the face of computer science literally changed.
    • It's simply going to be too hard/impractical and, frankly, useless to make an intelligent machine that mimicked every hormonal reaction and instinctual mechanism.

      Funny. Similar things were said about creating GUIs.

      -Bill
    • by efflux ( 587195 )
      Our behavior is governed not only by the higher logic of our brain, but also by millions of years of bizarre -- often obsolete -- instincts.

      Please elaborate on what you mean by instinct. How does this differ from any other algorithm? Certainly it was created by evolutionary processes, but we can also conceive of an algorithm where the algorithm itself is compartmentalized and acted upon by a genetic algorithm, thus simulating evolution. We may not expect the resulting algorithm to be very useful due to the complexity/nuances of selection, yet it should certainly do something.

      If you yanked a brain out of a body and hooked it to a computer, it would no longer be truly human because of the lack of hormonal responses that come from every part of the body.
      A couple of points:
      1) What is human? You have not defined what it is to be human; therefore, it becomes impossible to say unequivocally what it is NOT to be human.
      2) Hormonal responses can be looked at in two ways: a) such responses are, in fact, simply another stimulus, and we would expect any intelligent machine to react differently under a different set of stimuli; b) the endocrine system also comprises part of the machine that is "human intelligence", and by removing a part of the machine we, in effect, cripple it.

      As a final point, we are not interested in human machines per se. Simply machines that are human-like: intelligent primarily in a manner that lets us communicate with them and share a semblance of understanding.
  • I was looking through the Times and saw this article, did a search through Google on the term "captchas", and based on the speed of the page's return, I immediately knew that there was a Slashdot article.

    I'd like to see AI figure THAT one out! I call it Automatic Slashdot Slowdown Effect Detection, or ASSED for short.
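
    A tongue-in-cheek sketch of such an ASSED probe (the target URL and the 5-second threshold are both made up):

      import time
      import urllib.request

      # Time a page fetch; if it crawls, assume the Slashdot effect.
      def assed(url, threshold=5.0):
          start = time.time()
          urllib.request.urlopen(url).read()
          return time.time() - start > threshold  # True => probably slashdotted

      print(assed("http://www.captcha.net/"))  # hypothetical target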
  • by Tsar ( 536185 ) on Tuesday December 10, 2002 @03:46PM (#4856610) Homepage Journal
    Meanwhile, Yahoo will have to install a new Captcha that is resistant to Dr. Mori's program. This task will fall to Dr. Manber's successor, since Dr. Manber moved to a new position last month as chief algorithms officer for Amazon.com. There, he said, he plans to continue his collaborations with academic researchers.

    Chief Algorithms Officer!!! I don't know about the rest of you nerds, but I'd sell my last Keuffel & Esser [att.com] to get a crack at a job like that.
    • Even better title (Score:3, Informative)

      by devphil ( 51341 )


      Visit the homepage and scroll down or search for the entry for Eric Jacobsen. Proof that not everybody at Intel is a soulless corporate whore.

  • by Flamesplash ( 469287 ) on Tuesday December 10, 2002 @03:46PM (#4856612) Homepage Journal
    As I mentioned at the bottom of this journal [slashdot.org] entry, I think a new version of the Turing test should be whether a computer can tell the difference between a human and a computer.
  • by sielwolf ( 246764 ) on Tuesday December 10, 2002 @03:46PM (#4856618) Homepage Journal
    [Kicks first man in balls]
    First Man [falls over]: "AAAAAHH!"
    Me: "Human."
    [Kicks second man in balls]
    Second Man [falls over]: "Gffffff-!"
    Me: "Human."
    [Kicks third man in balls]
    Third Man [falls over]: "..."
    Me: "He's the robot! Get 'im!!!"
  • of the Yahoo sign-up, and leave the word guessing to a middle school kid who gets paid in PS2 games?

    Computers are good for repetitive tasks, middle school kids are easily bribed.

  • by Ryu2 ( 89645 ) on Tuesday December 10, 2002 @03:47PM (#4856631) Homepage Journal
    These tests seem to be all visual in nature. Could this be a point of contention on the part of blind/visually impaired users of web sites?

    Or alternatively, are they perhaps working on, say, an audio version? Wonder how that would work.
    • by bytesmythe ( 58644 ) <bytesmythe&gmail,com> on Tuesday December 10, 2002 @03:52PM (#4856665)
      Apparently, there would have to be alt tags that read "Type the word FOO to signify you are a human, not a register bot."
      I suppose it could generate a spoken list of words in a sound file that is linked to from the image. The alt tag could then read "Please click to listen to a series of words. Enter the words to signify you are a human, not a register bot."
      • Braille terminals (Score:3, Interesting)

        by yerricde ( 125198 )

        I suppose it could generate a spoken list of words in a sound file that is linked to from the image.

        The CAPTCHA web site has such a test, but of the sites that use image-based bot tests, only PayPal offers an audio alternative.

        Another problem is that sites often present the tests in proprietary formats with expensive implementation royalties, such as .gif and .mp3.

        But even providing both the image in a free image format (.png) and the audio in a free audio format (.ogg) won't help blind users behind a Braille terminal without a speaker, such as blind-deaf users.

        • I guess blind-deaf users need day-to-day help anyway, so there should be another person there all the time.
          • by yerricde ( 125198 )

            I guess blind-deaf users need day-to-day-help anyway

            So what about Braille terminal users who aren't also deaf? Should Section 508 compliance (required for USA government web sites) allow a web site to require all blind users to have sound cards?

    • Yes, if you look at the captcha site [captcha.net], it lists "Sounds" under Captchas. Here's the text:

      Sounds can be thought of as a sound version of Gimpy. The program picks a word or a sequence of numbers at random, renders the word or the numbers into a sound clip and distorts the clip. It then presents the distorted sound clip to its user and asks the user to type in the contents of the sound clip.

      This would probably be similar to the visual techniques, most likely employing some audio filters so it's hard for a computer to decipher. (Our ears are pretty good at separating noise from actual voices/useful sounds, so it shouldn't be a problem for us.)
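
      As a rough sketch of just the distortion step (rendering the word to speech in the first place would need a TTS engine, which is assumed away here; a sine tone stands in for the spoken word):

        import numpy as np
        import wave

        # Bury a clean clip in uniform noise, then write it out as a WAV.
        def distort(samples, noise_level=0.3):
            noise = np.random.uniform(-1, 1, len(samples)) * noise_level
            return np.clip(samples + noise, -1, 1)

        rate = 8000
        t = np.linspace(0, 1, rate)
        clean = 0.5 * np.sin(2 * np.pi * 440 * t)      # stand-in for a spoken word
        noisy = (distort(clean) * 32767).astype(np.int16)

        with wave.open("sound_captcha.wav", "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(rate)
            f.writeframes(noisy.tobytes())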
  • Philosophy 101 (Score:4, Informative)

    by The Jonas ( 623192 ) on Tuesday December 10, 2002 @03:48PM (#4856634)
    Searle's Chinese Room theory. Strong AI vs. Weak AI and human interaction/interpretation. Fun Stuff. http://www.utm.edu/research/iep/c/chineser.htm
    • Re:Philosophy 101 (Score:3, Insightful)

      by majcher ( 26219 )
      The Chinese Room argument is a load of pants. I don't believe it is taken seriously by anyone in the field these days - it has had a large number of holes poked in it, and Searle's reply to each of these flaws is basically, "nuh-uh!"

      Too lazy. Find the links yourself.
  • A good step towards AI would be to emulate natural stupidity.
  • by Elladan ( 17598 ) on Tuesday December 10, 2002 @03:49PM (#4856642)

    You entered: noses

    Possible responses: nose

    Result: FAIL.

    Wohoo! I'm a robot! This test proves it! Vegas here I come!

    Why does this test make me feel like I just had a run-in with John Ashcroft?

    • I missed almost all of the picture ones...

      You entered: televisions
      Possible responses: television tv
      Result: FAIL.

      So next time...

      You entered: bike
      Possible responses: bicycle bicycles
      Result: FAIL.

      And again...

      You entered: toothbrushes
      Possible responses: toothbrush
      Result: FAIL.

      AAAAAAAAAARGH!!! I hate stupid word guessing programs that don't consistently account for common abbreviations and plurals!

      • You entered: toothbrushes
        Possible responses: toothbrush
        Result: FAIL.

        AAAAAAAAAARGH!!! I hate stupid word guessing programs that don't
        consistently account for common abbreviations and plurals!


        Ahh, delightful irony. That would be the point, then, wouldn't it?

        In other words, you have to be smarter than the tools you use, so it's pretty stupid to put a computer that is *not* intelligent in charge of deciding the intelligence of others.

  • by syntap ( 242090 ) on Tuesday December 10, 2002 @03:50PM (#4856653)
    Another area not discussed in the article is vicarious experience, that is, experience and knowledge you have because some cause and effect relationship existed with someone or something else.

    For example, the computer's tactile interface has to touch the oven and say 110 deg C, as opposed to taking as fact "I heard a human mention that Unit 5 already did that and it was 110 deg C, so I accept it as fact that it is 110 deg C".

    I know I'll get modded down for this, but I wonder what the limits of questioning the computer / human participants were? (The article said they quizzed participants to see if they could tell who was human and who was a machine.) Like, could they ask "What number am I thinking of?" The machine would blank out and the human would stupidly blurt out "69 dude!"
  • Wanna bet? (Score:5, Informative)

    by dubl-u ( 51156 ) <2523987012&pota,to> on Tuesday December 10, 2002 @03:52PM (#4856664)
    Mitch Kapor and Ray Kurzweil have bet $20,000 on whether a computer will pass the Turing Test by 2029 [longbets.org].
  • Some time ago, I was pondering a similar idea which I called "Think Cash" (a play on "Hash Cash"), where basically someone had to "pay" by thinking about something. The idea was to discourage automated spamming of anonymous services.

    While I mention some ways to achieve this, I thought more about the problem and the qualities a solution would need than about the solution itself.

    If interested, more can be found here [half-empty.org].
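
    For contrast, Hash Cash makes the sender pay in CPU time rather than in thought. A minimal hashcash-style stamp minter as a sketch (the 20-bit difficulty is arbitrary):

      import hashlib
      from itertools import count

      # Grind nonces until the SHA-1 digest starts with `bits` zero bits;
      # verifying the stamp is one hash, minting it costs ~2**bits hashes.
      def mint(resource, bits=20):
          for nonce in count():
              stamp = f"{resource}:{nonce}"
              digest = hashlib.sha1(stamp.encode()).hexdigest()
              if int(digest, 16) >> (160 - bits) == 0:
                  return stamp

      print(mint("anonymous-post@example"))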

  • by AmishSlayer ( 324267 ) on Tuesday December 10, 2002 @03:52PM (#4856671) Homepage
    no joke, I got this...

    acid/head
    acid/head
    acid/great
    acid/angry
    box/box ... my question is how'd they know?
  • We'll obviously be able to travel to the future... to get new technology...
  • African or European? (Score:3, Informative)

    by Bastian ( 66383 ) on Tuesday December 10, 2002 @03:53PM (#4856693)
    Whoever said that computers can't handle superposition has never heard of convolutional neural networks. [lecun.com]

    Really, comparing human intelligence to computer intelligence doesn't seem like a good idea unless we're going to define what kind of computer intelligence it is.
    Neural computing really screws the comparison up - the kinds of computing that normal computers are good for are quite different from the kinds of computing that neural nets are well suited to. Furthermore, different neural net architectures make for different capabilities - the tasks a feedforward network is best suited to are very different from the tasks a Bayesian network is best suited to.

    Take a look at this page [aist.go.jp] for a good run-through of the different kinds of nets.
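
    To make the difference concrete, here is a toy feedforward net with one hidden layer learning XOR by backpropagation -- exactly the kind of task a single-layer net can't do. A sketch only; with this setup it usually converges, though that depends on the random seed:

      import numpy as np

      rng = np.random.default_rng(0)
      X = np.array([[0,0],[0,1],[1,0],[1,1]], dtype=float)
      y = np.array([[0],[1],[1],[0]], dtype=float)

      W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
      W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      for _ in range(10000):
          h = sigmoid(X @ W1 + b1)             # hidden layer
          out = sigmoid(h @ W2 + b2)           # output layer
          g_out = (out - y) * out * (1 - out)  # gradient at the output...
          g_h = g_out @ W2.T * h * (1 - h)     # ...backpropagated to the hidden layer
          W2 -= h.T @ g_out; b2 -= g_out.sum(0)
          W1 -= X.T @ g_h;   b1 -= g_h.sum(0)

      print(out.round(2))  # should approach [[0],[1],[1],[0]]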
    • I think the idea is that you are probably not going to be using your expensive, experimental neural network to create spam accounts and troll chat rooms for personal info.

      Which, by the way, gives me a great idea. I'm going to adapt that annoying psychoanalyst algorithm to create Slashdot accounts and randomly respond to posts in high volume. Not only will it be fun for all ages, but it will actually increase the infamous Signal to Noise ratio for Slashdot!
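
      Something like this bare-bones ELIZA-style responder, say (the patterns and canned replies are invented for illustration):

        import random
        import re

        # First matching rule wins; {0} is filled with the captured group.
        RULES = [
            (r"\bI think (.+)", ["Why do you think {0}?", "Does anyone else think {0}?"]),
            (r"\b(linux|windows)\b", ["But does it run on {0}?"]),
            (r".*", ["Interesting. Tell me more.", "In Soviet Russia, post replies to YOU."]),
        ]

        def reply(post):
            for pattern, responses in RULES:
                m = re.search(pattern, post, re.IGNORECASE)
                if m:
                    return random.choice(responses).format(*m.groups())

        print(reply("I think computers will never be intelligent"))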

      • Convolutional neural nets are really quite simple and easy to implement and train. Granted, it probably isn't a problem with regular spammers, but I bet students from MIT and CMU would have the gumption to get around to doing it again.
  • by craenor ( 623901 ) on Tuesday December 10, 2002 @03:55PM (#4856716) Homepage
    Involves measuring pupil dilation when asked a series of personal questions...very good method.
  • by TheWhaleShark ( 414271 ) on Tuesday December 10, 2002 @03:56PM (#4856719) Journal
    While it would be nifty to have a very human AI, I somehow doubt that we could truly ever replicate human intelligence.

    The major roadblock is that a computer can only respond in ways that it has been programmed to. While you can code incredibly complex AI algorithms and simulate an incredibly complex level of intelligence, the fact remains that a computer invariably operates along rigid pathways.

    It can be argued that human thought is nothing more than a complex series of chemical reactions, but there is far less rigid logic involved in human thinking. Indeed, we're still not entirely sure just HOW we think.

    Never say never, but I don't think we'll be seeing a truly human AI before any of us is dead.
  • by binaryDigit ( 557647 ) on Tuesday December 10, 2002 @03:56PM (#4856731)
    To be considered truly "intelligent" a computer must:

    1. Make a "first post" posting 15 minutes after the article goes up.

    2. Be the fourth person to enter a "In Soviet Russia ..." post.

    3. Be labeled a karma whore.

    4. Whine about the masiv tipe ohs in artaculs.

    5. Hate M$, Sony, MPAA because thats one of the three laws right?
  • by Tsar ( 536185 ) on Tuesday December 10, 2002 @03:57PM (#4856732) Homepage Journal
    The home page of the CAPTCHA Website [captcha.net] refers to an event in Slashdot history!

    CAPTCHAs have several applications for practical security, including (but not limited to):
    Online Polls. In November 1999, http://www.slashdot.com released an online poll asking which was the best graduate school in computer science (a dangerous question to ask over the web!). As is the case with most online polls, IP addresses of voters were recorded in order to prevent single users from voting more than once. However, students at Carnegie Mellon found a way to stuff the ballots using programs that voted for CMU thousands of times. CMU's score started growing rapidly. The next day, students at MIT wrote their own program and the poll became a contest between voting "bots". MIT finished with 21,156 votes, Carnegie Mellon with 21,032 and every other school with less than 1,000. Can the result of any online poll be trusted? Not unless the poll requires that only humans can vote.
    Cool, eh?
    • by guidobot ( 526671 ) on Tuesday December 10, 2002 @04:23PM (#4856990)
      I remember that -- I was a student at CMU at the time. Someone posted to a widely read messageboard (misc.market) about the poll, and it was off to the races after that. Pretty funny.

      A related story was the time I saw on Boston.com that one of their editors was getting a haircut and they had posted an online poll for users to choose a style. Remembering CMU's adventures in slashdot polling, I posted to that same messageboard a plea for students to help the poor guy out.

      4000 robo-votes later, he had a mohawk. Then they showed pictures of him going home for Mother's Day, and his dad's embarrassed look. The best part was the quote from the editor at the end of the story -- "I had fun with this and I hope all those hackers out there did too."

      So, see, geeks? You too can make a difference.

  • Maybe.. (Score:4, Interesting)

    by jedie ( 546466 ) on Tuesday December 10, 2002 @03:58PM (#4856741) Homepage
    Just maybe, if WE were smarter, we could make machines that are smarter. But then again, if WE were smarter, the level of intelligence the machine reached would still be lower compared to our own.

    What I mean is, I don't think an intelligent being is capable of creating something that is more intelligent than itself.
    The machines need to be programmed by humans, who are limited by their own intelligence.

    Can God make a rock so big that he can't carry it himself?

    • Re:Maybe.. (Score:5, Funny)

      by anthony_dipierro ( 543308 ) on Tuesday December 10, 2002 @04:17PM (#4856928) Journal

      What I mean is, I don't think an intelligent being would be capable of creating something that is more intelligent than himself.

      My dad was :).

    • Re:Maybe.. (Score:4, Insightful)

      by guidobot ( 526671 ) on Tuesday December 10, 2002 @04:29PM (#4857033)
      I think you need to take into account that any intelligent machine would make use of learning algorithms. So if man can teach a computer how to LEARN, and it's got the time and resources to learn more than a human (say, hundreds of processor-years and a connection to all the world's media), then the end result could be something "more intelligent" than the programmer.

      Or how about the example of the AI chess players, who can play vastly better than the people who programmed them?

    • Re:Maybe.. (Score:3, Interesting)

      by davew2040 ( 300953 )
      I could see two primary reasons for considering this an oversimplification.

      The first of these is the nature of hardware. Obviously, electronic hardware is much different than human hardware. Human hardware has a tendency to gradually improve from age 0 to, say, 30, or, if you subscribe to a different theory of learning, from 0 to on average 80 (death). Factoring in evolution, there's some further gradual enhancement over the course of a million years. Computer hardware, on the other hand, has a tendency to improve itself at an impressive rate that depends on how much effort humans put into it. The end result being that for certain tasks, computers can vastly exceed humans. The reverse, that humans can vastly exceed computers, is also true, but as time goes on, this will probably end up being less the case. And, as anyone who's worked in teams on a technical project knows, it's difficult to make cumulative human effort scale upwards. This task can also be difficult with computers, but generally less so. So the point of all this is that there are simply fantastic computational levels that computers as a whole are able to achieve, to be applied to tasks of "intelligence" for better or worse in a way that humans can't compete with.

      The second point has already been touched upon: humans die, computers don't. I mean, you can make the claim that computer parts fail, but the fact remains that data and algorithms are passed from one generation to the next (hopefully) unchanged. The base of innovation built for computers really just expands. Humans, on the other hand, build their own innovation, but must then spend time teaching their successive generations how to do things, and for exceptionally bright individuals, the successors may not even reach their amassed abilities. No need to launch into arguments like "but software needs to be recompiled for different platforms!"; that kind of talk is counterproductive.

      I suspect that anyone familiar with Linux has a certain appreciation for having complete control over what's on their system, but the fact is that increasing complexity will result in ever more layers of abstraction, to the point where everything is built upon layers that are further built upon layers. The advantages (and problems) associated with this are (painfully?) evident now, and computers are still relatively new; imagine things 50 years down the road! Once methods of software engineering are designed that lower the occurrence of bugs and make things more fault-tolerant, it's just going to be commonplace, if it isn't already.

      So what I'm saying is, it's an interesting academic question, but in a lot of ways the potential clearly exists for computers to outpace anything that humans can do. Not unlike the way a teacher can instruct a brilliant and eager student to the point where the teacher actually becomes the student.
  • Taco Test (Score:5, Funny)

    by plip ( 630579 ) on Tuesday December 10, 2002 @03:59PM (#4856753)
    I simply use my "Taco Test" (Inspired by the Invader Zim cartoon) to thwart chat bots and telemarketers. It's an amazing, powerful test that no computer or automated script can withstand.

    I ask the "suspected bot" if they like tacos. If they give me an intelligent answer, they are not a bot. If they give me an answer like "Wanna see my hot pics go to http://192.168.1.112/hotbabezzzz.pl?2345" Then they are a bot.

    This test also works on telemarketers in a slightly different fashion. I tell them to "STOP... I'll only buy your product if you send me a taco with it. If not, no deal." Since there are big logistical problems with sending me a taco, they are thwarted every time. I'm sure this test would work equally well with any obscure food item.
  • Article -1 redundant (Score:2, Interesting)

    by bcwalrus ( 514670 )
    Why don't you devise a test which asks for the sum of 10000 numbers?
  • It won't work... (Score:5, Insightful)

    by Quaoar ( 614366 ) on Tuesday December 10, 2002 @04:01PM (#4856776)
    Computers can be specifically programmed to solve puzzles such as this... if a test arises that supposedly tests for "human" intelligence, humans can simply modify the code so that it can solve that sort of puzzle.

    That's what Garry Kasparov was complaining about when he played against Deep Blue the first time... there was a whole team of IBM programmers modifying the code during the game to specifically counter Kasparov's playing style. It wasn't a reflection of machine intelligence; it was an example of human adaptation imposed upon Deep Blue.
  • tech econ boost? (Score:2, Insightful)

    by Tablizer ( 95088 )
    Amazing all this horsepower and research just to combat spam. Just might be the boost we need to get tech spending going again. A never-ending cat-and-mouse game where the cats and mouses get bigger and bigger. This racket is almost as good as the dot-com racket. I don't like spam either, but I miss real paychecks.

    The first true AI machine might be a spam catcher. Spamminator 2000!
  • by photon317 ( 208409 ) on Tuesday December 10, 2002 @04:02PM (#4856794)

    Once you devise a test system, someone can write non-AI software that can fake it and pretend to be human by knowing what it needs to for the test. Only a real human can tell human and machine intelligence apart, not a systematic test. That's why Bladerunners had to manually test the androids, instead of just letting a machine do it. Real-time human insight is key to testing machine intelligence.
  • by Zordak ( 123132 ) on Tuesday December 10, 2002 @04:07PM (#4856830) Homepage Journal
    The CAPTCHA website [captcha.net] (how do you pronounce that, anyway) has a list of possible applications of CAPTCHA. The first mention is online polls, and recalls an event in 1999, when Slashdot (they use http://www.slashdot.com [slashdot.com] for some reason) had a poll for the best graduate CS curriculum. Carnegie-Mellon and MIT wrote competing poll-bots that stuffed the poll boxes. The point was supposed to be that a CAPTCHA would have prevented this. In my opinion, however, this was probably the most accurate Slashdot poll ever. Obviously, MIT wrote the better poll bot, since it stuffed more votes, and they didn't even start until somebody noticed that CMU was stuffing. Hence, the winner of the stuffing contest turned out to be the true winner of the poll.
  • by Obfuscant ( 592200 ) on Tuesday December 10, 2002 @04:10PM (#4856851)
    This demonstration is not one of computer versus human intelligence. It is one of computer versus human cognition.

    In other words, can the computer detect the information in the same form that the human can? Can a human read a grocery store bar-code as easily as a computer? No. Can a human read one of those bit-boxes on the FedEx shipping label as easily as a computer? No. Can a human read the Tivo-data sent on the Discovery channel as easily as the computer? No. But none of those failures means the computer is more intelligent, just more capable of recognizing the information that is there.

    Both the computer and the human can recognize "moon/parma", but intelligence comes into play when the human starts thinking of Drew Carey and humming the theme music. Intelligence is not just collecting information, it is doing something useful with that information.

  • by ip_vjl ( 410654 ) on Tuesday December 10, 2002 @04:15PM (#4856899) Homepage
    I took the 'Stumpy' test - where it shows you six pictures and asks you to choose a word that describes them.

    Looks like their system is hosed right now because it showed me 4 pictures of horses, 1 of a cowboy, and one of a turtle.

    When it asked:
    What are these pictures of?

    I answered "things"

    Apparently it didn't like my answer.

    Funny thing though, the images are being pulled by image number from the Getty Images database. You could write a piece of software to look up the images at Getty, pull the keyword list (that Getty assigns to all photos) and cross-reference the lists to get the answer.
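
    A sketch of that lookup-and-intersect attack, with an entirely made-up endpoint and page format, since the real Getty interface isn't specified here:

      import re
      import urllib.request

      # Hypothetical: fetch a photo page by ID and scrape its keyword tags.
      def keywords_for(image_id):
          html = urllib.request.urlopen(
              f"http://images.example.com/photo/{image_id}").read().decode()
          return set(re.findall(r'keyword">([^<]+)<', html))

      def common_answer(image_ids):
          # Intersect each image's keyword set; whatever survives is the
          # shared label the test is asking for.
          return set.intersection(*(keywords_for(i) for i in image_ids))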

    --

    Then this got me thinking about the whole thing in general. My answer WAS correct. Reminds me of the Cheers episode where Cliff is on Jeopardy and answers the final Jeopardy question:
    "Who are three people who have never been in my kitchen."

    Not the answer they were looking for, but is it wrong?

    I was being a smartass the other day while watching Sesame Street with my daughter. They had pictures of 4 animals and asked which one didn't belong.
    kangaroo
    rabbit
    grasshopper
    fish

    They, of course, were looking for 'fish' - because the other three live on land or travel by hopping.

    I piped up that the answer could be the kangaroo - because the other three are native to North America. Or it could be the grasshopper, as the only one with an exoskeleton.

    My wife reminded me that it was a kid's show. :)

    It comes down to the fact that if a strict mechanism (like a computer) is used to judge the answers, it may not be able to handle legitimate answers from humans.

    --

    Seems both the questioner and questionee need to be intelligent to participate.

  • by CommandNotFound ( 571326 ) on Tuesday December 10, 2002 @04:15PM (#4856901)
    "Asking if a computer can think is like asking if a submarine can swim"
  • by phorm ( 591458 ) on Tuesday December 10, 2002 @04:17PM (#4856933) Journal
    Did anyone notice that a lot of these "human" tests are also the same ones used for things like hearing/eye tests, color-vision impairment, etc.?

    This knocks out computers, which (so far) lack the intelligence/programming to differentiate between conflicting objects to make out letters/numbers.

    It also may knock out humans with vision problems, though, especially those with colour-vision issues. For those with hearing problems, the sound test isn't good either.

    It seems that right now, computers trying to solve these puzzles probably perform about on par with old folks. This also might mean that quite a few seniors may have issues getting a Yahoo account, though.
  • by dirvish ( 574948 )
    Am I just stupid, or is the Stumpy test [captcha.net] not working quite right?
  • Trying the test that was on the NY times article at the original test at
    http://www.captcha.net/cgi-bin/pix

    I saw turtles. Turtles, some of which were swimming. So I typed turtles.
    And I FAILED.

    "Result of the Test: FAIL
    You entered the following word:
    turtles
    The possible words were:
    seashell shell shells seashells"

    So, I notice this test does not take into consideration the limits of a second (or, generally, non-native) language. English is not my first language, and I had seen nowhere that turtles and shells are different?? I saw turtles, and some turtles that were in the sea. Turtles.

    Uh yea. I proudly accept failing this computer-or-human test!!! wohoo!! :D
  • an odd tangent... (Score:3, Interesting)

    by Exantrius ( 43176 ) on Tuesday December 10, 2002 @04:41PM (#4857136)
    *note, I'm not sure now that this has anything at all to do with the topic, but it's something that bounces off my head sporadically*

    Recently, I've been working with developmentally disabled people as a job coach -- making sure they have the ability to do the job they're supposed to, and helping them to understand anything that needs to happen.

    Part of this is working at a local fast food restaurant. The girl I'm working with can do math fairly well, but she has problems with logic, and pattern matching.

    And a few times I started thinking about her as a computer-- She can do math fine, and if I specifically tell her how to match a pattern, she can do it for a short time, but she can't do it in situations like when people order a combo. Let's say they order a #1 with onion rings and a small drink, a #6 large combo, and a kids meal, she won't be able to recognize them as "combos" (she'll read the whole thing back to them item by item.) This brings me to a whole other tangent about user interface design (why the normal methods suck, mostly), but that'll be saved until a proper time.

    This has been a difficulty with her position as a cashier, but I find it interesting that I'm more or less programming her as I talk to her and re-affirm her patterns to match.

    I wonder if certain disabled humans would fail any "Turing" test that was given to them, because they don't have normal pattern matching ability. Furthermore, isn't it possible that instead of trying for fully developed Artificial Intelligence, we should look at perhaps emulating those with disabilities? After all, wouldn't this creation process be easier than a "fully aware", fully pattern-realizing person?

    AI has always interested me, but I don't know nearly enough about it. The thing that made me notice this is I keep talking to her like I would program a computer ("If this, then that, otherwise this other thing" and "While there is someone in line, take their order").

    Maybe I'm off base, or this is already an accepted practice. Can anyone correct me? /ex
  • Screw-ups are good (Score:4, Interesting)

    by A non moose cow ( 610391 ) <slashdot@rilo.org> on Tuesday December 10, 2002 @05:47PM (#4857748) Journal
    I think that AI will fully be realized when machines have the ability to make unguided decisions and learn from their mistakes.

    For example, a common human mistake is to send friends 20 meg bitmap photos through email. There is intelligence displayed in this act, in that they actively choose to digitize photographs and send them to someone without express request from the friend. They later learn that the size and type of file makes a difference, and future pictures are sent as jpegs. This mistake is made because the person does not understand how these files are represented in a computer, they only see the picture on the monitor and are satisfied that all is well.

    I would expect an intelligent system to make loads of mistakes like this, simply because it is not familiar with how things are handled with respect to humans. I would expect a computer that has intelligence to recognize that in most cases a jpeg file and a bmp file are interchangeable as far as humans are concerned. Based on this, I would expect it to prefer jpegs because of storage issues. I would expect it to infer that perhaps other datatypes can be compressed in this way. I would expect it to make the mistake that it is ok to compress a precise data file using some kind of lossy compression.

    This would show intelligence, because it was drawing conclusions from what it already knew. Since computers do not get meaning from the contents of, say, PDF files, they might be likely to screw them up using, say, jpeg compression, because there are no apparent consequences for reducing the file size.
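
    To make the 20-meg-bitmap example concrete: the same picture costs wildly different amounts of space uncompressed versus lossy-compressed. A small sketch, assuming the Pillow imaging library is installed (the gradient image is a stand-in for a photo):

      from io import BytesIO
      from PIL import Image  # assumes Pillow is available

      # The same image, saved uncompressed (BMP) and lossy (JPEG).
      img = Image.radial_gradient("L").resize((1024, 1024))

      for fmt in ("BMP", "JPEG"):
          buf = BytesIO()
          img.save(buf, format=fmt)
          print(fmt, buf.tell(), "bytes")  # BMP ~1 MB, JPEG a small fraction of that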

We are each entitled to our own opinion, but no one is entitled to his own facts. -- Patrick Moynihan
