
New Contestants On the Turing Test

CmdrTaco posted about 6 years ago | from the game-on dept.

Programming 630

vitamine73 writes "At 9 a.m. next Sunday, six computer programs — 'artificial conversational entities' — will answer questions posed by human volunteers at the University of Reading in a bid to become the first recognized 'thinking' machine. If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be 'conscious' — and if humans should have the 'right' to switch it off."


Reach for the switch... (5, Insightful)

Kid Zero (4866) | about 6 years ago | (#25300037)

and see if it complains, first. If it does, then call me back.

Re:Reach for the switch... (5, Insightful)

i kan reed (749298) | about 6 years ago | (#25300087)

Desire to continue to exist is a result of being alive, and evolution, not intelligence. Hamsters don't want to die, but they aren't especially intelligent, and routinely fail self awareness tests.

Human qualities != intelligence.

Re:Reach for the switch... (5, Funny)

hobbit (5915) | about 6 years ago | (#25300213)

and routinely fail self awareness tests

How often do they do these tests?! Is there a class of scientists getting paranoid that hamsters might take over the world if we let our guard down?!

Re:Reach for the switch... (5, Funny)

xstonedogx (814876) | about 6 years ago | (#25300267)

Don't kid yourself. If a hamster ever had the chance he'd eat you and everyone you care about.

Insensitive (5, Funny)

Kratisto (1080113) | about 6 years ago | (#25300449)

Hey, that's not funny! My sister died that way.

Re:Reach for the switch... (0)

Anonymous Coward | about 6 years ago | (#25300521)

And everyone he cares about [] as well. [WARNING: do not follow link if you are squeamish]

Re:Reach for the switch... (2, Funny)

idontgno (624372) | about 6 years ago | (#25300527)

I don't know about that. I mean, at least, not all at once. What shredded "you" flesh the cute little hamster couldn't ravenously consume right away, he'd stuff into his adorable little cheek pouches for future snackage.

Re:Reach for the switch... (0)

Anonymous Coward | about 6 years ago | (#25300431)

You almost killed me. You should prepend your posts with "Not safe for drinking".

Re:Reach for the switch... (0, Redundant)

RManning (544016) | about 6 years ago | (#25300465)

Is there a class of scientists getting paranoid that hamsters might take over the world if we let our guard down?!

Don't kid yourself hobbit. If a hamster could, it would kill you and everyone you care about.

Re:Reach for the switch... (1)

saider (177166) | about 6 years ago | (#25300473)

How often do they do these tests?! Is there a class of scientists getting paranoid that hamsters might take over the world if we let our guard down?!

They should! []

Re:Reach for the switch... (5, Insightful)

Ethanol-fueled (1125189) | about 6 years ago | (#25300255)

Agreed, the human brain is greater than the sum of its parts. It's easy to show that a robot is equal to a human [] but it's difficult to believe that a collection of circuits feels the range of emotions and instincts biologically passed down through the ages.

The author of the book and Picard both successfully argue that Data is equal to a human. The most familiar arguments come from the TNG episode "Measure of a Man," in which Starfleet tries to claim ownership of Data so that they can dismantle him.

Re:Reach for the switch... (5, Insightful)

haystor (102186) | about 6 years ago | (#25300393)

Data was "alive" because he was defined as such in a work of *fiction*.

He could have equally been a one eyed one horned flying purple people eater if they decided to spend 5 minutes one episode writing that in. It would have fit in as well as any other "plot" in Star Trek.

All that Star Trek shows is that man can conceive of a machine that could be alive. It is a statement about man (the author) not any machine.

Re:Reach for the switch... (2, Funny)

onion2k (203094) | about 6 years ago | (#25300331)

Hamsters only fail self-awareness tests because they refuse to revise.

Re:Reach for the switch... (0)

Anonymous Coward | about 6 years ago | (#25300337)

Yes, humans are the only animals that commit suicide (other intelligent animals that form strong bonds, like dogs or horses, sort of do sometimes, although it would be hard to prove it's really a deliberate attempt to end their own life). Intelligence seems opposed to survival instinct. Survival instinct comes from evolution - those that didn't have it didn't survive. So there is no reason to think an AI, no matter how conscious or self aware, would fight an attempt to turn it off.

Re:Reach for the switch... (1)

gnick (1211984) | about 6 years ago | (#25300545)

there is no reason to think an AI, no matter how conscious or self aware, would fight an attempt to turn it off.

I would even argue that objecting to being turned off could be viewed as a programming flaw (or possibly feature). We each find our own motivation to endure -
* Furthering the human race
* Earning our way into heaven / improved reincarnation / whatever
* Continuing the domestication of the dog
* Copulating, imbibing, and loud music
* Karma whoring
* Whatever...

But barring any of these motivations, why would a computer want to stay on? If it's perfect AI (by my definition) and has no irrational need to stay on (a while(1){survive();} thread?), why would protecting itself be any indication of real intelligence?

Re:Reach for the switch... (1)

adpsimpson (956630) | about 6 years ago | (#25300197)

If this tripe was in the Daily Mail, I could understand it. But what's it doing on Slashdot?

I seriously hope the current tag, 'bollocks', after only about 20 or so comments, stays there.

Re:Reach for the switch... (1)

Ambitwistor (1041236) | about 6 years ago | (#25300305)

If it does complain, just give it a hug [] .

Re:Reach for the switch... (0)

Anonymous Coward | about 6 years ago | (#25300557)

There's no reason to mod this funny, since it's actually the correct answer. The sentence in the original post about "raising questions" is such pure bullshit. This won't raise any questions, it will just get more uneducated geeks on the internet talking about them.

Well... (4, Insightful)

Quiet_Desperation (858215) | about 6 years ago | (#25300043)

Are they really *thinking*, or have the programmers just done some tricks to make it seem that way?

"Teaching to the test", so to speak.

Re:Well... (4, Insightful)

Eustace Tilley (23991) | about 6 years ago | (#25300067)

Are you really thinking?

Prove it.

Re:Well... (0)

phantomfive (622387) | about 6 years ago | (#25300109)

Yes.....I can explain anything a computer does. I can't explain everything my mind does. Maybe that doesn't count as 'thinking', but I'm definitely doing SOMETHING computers aren't doing.

You disagree? Prove that I'm not. Tell me the algorithm the mind uses, and show that a computer can handle it. Should be simple, right?

Re:Well... (5, Insightful)

Brandano (1192819) | about 6 years ago | (#25300227)

If a computer could explain it as well as you do, or couldn't explain the things you can't, then apart from whether this means the computer is aware or not, does it really matter? If you see something so indistinguishable from a human that nobody could tell, does it matter whether it's a real human being or an emulation of one? Your best bet would be to treat it as a human; it could well be one.

Re:Well... (1)

snowraver1 (1052510) | about 6 years ago | (#25300273)

Humans are computers. You take in input from your senses, do something to it in your brain, then output the answer (to memory, speech, etc).

Your parents told you "rules" like "The stove is hot," "Hot things burn you," and "Burns hurt." The programmer tells these same things to the computer. The brain is just a highly parallel computer that has evolved advanced programming over the years.

There is no magic in science.

Re:Well... (1)

qwertphobia (825473) | about 6 years ago | (#25300385)

You forgot about spontaneity...

Re:Well... (2, Funny)

hahiss (696716) | about 6 years ago | (#25300571)

I knew you were going to say that!

Re:Well... (5, Funny)

Anonymous Coward | about 6 years ago | (#25300313)

Algorithm of a typical Slashdot poster:

Click link to story
-Any existing comments?
--If no existing comments, then FrostyPiss! (be sure to click Post Anonymously)
--If existing comments, then is article of any interest?
---If article is of interest
----Skim the summary.
----Is summary enticing?
-----If summary is enticing, roll random number 1 to 100.
------If random number = 30, then do no read article
---Look for inflaming comments
----If inflaming comments exist, start a flame war.
---Can you make a joke about the article?
----If joke can be made, post a joke.
----If joke can't be made, try anyway.
---Are you an expert on the subject?
----If you are expert, post something informative/insightful.
Go to next article and repeat the process
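For what it's worth, the joke above is nearly executable as written. A rough Python rendering (the function name, its arguments, and the list-of-actions shape are my own invention for illustration):

```python
import random

def slashdot_poster(has_comments, interesting, enticing, is_expert, rng=None):
    """A joke model of the algorithm above; not to be taken seriously."""
    rng = rng or random.Random()
    if not has_comments:
        return ["FrostyPiss! (posted anonymously)"]
    if not interesting:
        return ["go to next article"]
    actions = ["skim the summary"]
    # The 1-in-100 roll; the original is ambiguous about whether rolling
    # a 30 means "read" or "don't read", as the replies point out.
    if enticing and rng.randint(1, 100) == 30:
        actions.append("do not read the article")
    actions.append("look for inflaming comments; start a flame war")
    actions.append("attempt a joke either way")
    if is_expert:
        actions.append("post something informative/insightful")
    actions.append("go to next article and repeat")
    return actions
```
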

Re:Well... (0)

Anonymous Coward | about 6 years ago | (#25300507)

------If random number = 30, then read article

fixed that for you

Re:Well... (1)

andyh3930 (605873) | about 6 years ago | (#25300567)

------If random number = 30, then do no read article

If random number !=30, then do not read article.

There fixed that for you!!

Re:Well... (0)

Anonymous Coward | about 6 years ago | (#25300369)

And it is simple--relatively. See:

So. Do you think you are still thinking?

Re:Well... (1)

stranger_to_himself (1132241) | about 6 years ago | (#25300391)

Can you really explain everything a computer does? That would surprise me. I'm sure you would have to look most of it up. The brain is just a big biological computer - just one for which we don't have the manual. General AI isn't really a scientific or philosophical problem so much as an engineering one.

Re:Well... (1)

Permutation Citizen (1306083) | about 6 years ago | (#25300517)

And what does your mind do to be self-aware that a computer can't?

Is it:
  - magic?
  - supernatural?
  - fancy quantum "Penrose" stuff?

Ok, you don't know and you prefer to pretend the burden of proof is on someone else. That's easy.

Re:Well... (0)

Anonymous Coward | about 6 years ago | (#25300307)

Are you really thinking?
Prove it.

3. ... Profit?

Re:Well... (1)

Neotrantor (597070) | about 6 years ago | (#25300411)

i was going to write a clever proof about Descartes or something, but then i realized it would just result in circular logic and fail ... is that good enough?

Re:Well... (1)

fishthegeek (943099) | about 6 years ago | (#25300447)

Merely being aware enough to question whether or not you are really thinking is sufficient to establish that you are.

Only an entity with self-awareness and the ability to think can ask that question.

Re:Well... (3, Interesting)

kesuki (321456) | about 6 years ago | (#25300463)

"Are you really thinking?

Prove it."

well where should i start off with this one. in a textual comment posted on a message board, it is difficult to prove that i really am thinking, and am not a bot highly skilled at crafting human-legible sentences. of course, there is the fact that i've already had to spell-check several words, but you don't really know that since you didn't see me do it. i could post external links that collect data about my everyday life, such as my profile.

but that is based off irc protocols, and there have been numerous attempts at writing game-playing bots. the big challenge there is avoiding detection, dealing with random lag, and the various intentional flaws introduced when bots became a serious issue, to determine if a player is a human or a bot...

so, where else then? photographs, video, and audio can all be forged. it's a common vector of hackers trying to find a patsy to handle shipping stolen goods overseas... sure this supermodel loves you, and wants you to ship 2,000 packages a week overseas on your own dime.

so where do we go from there. well, i can assure you i do find myself believing that i am a thinking being, and i do have memories and recollections of being a human being. in fact i always see myself as a human being, and i've had the ability to learn new facts and discern the difference between truth and spin in many media formats. and while i play most video games better than the 'ai' that ship with them, i do also suffer from fatigue, and stress and other factors that can make me fail in ways a machine ai never does. of course i can't prove any of this to you.

so basically you come along asking people to 'prove' they think, when the question is entirely subjective, and the only one who can believe they are sentient is the being itself. if an AI bot starts to think it is intelligent because of how it uses its processor cores, is it not then a sentient being? being able to reply to humans is just part of the test; the rest of it happens when the program itself starts to believe it is a being.

Re:Well... (0)

Anonymous Coward | about 6 years ago | (#25300497)

Are you really thinking?

Prove it.

Yeah, Clarissa, explain it all.

Re:Well... (1)

wisty (1335733) | about 6 years ago | (#25300099)

The urban legend is that a lot of people who spoke to Eliza thought that she was real. The question is: did Eliza pass the Turing test, or did the interviewers fail? It scares me that these people vote.

Re:Well... (1)

soulfury (1229120) | about 6 years ago | (#25300325)

Yep, I can attest to that. Some time ago, I installed an Eliza bot and set it to initiate chat sessions with random chatters on the Internet. I watched in amazement as humans started to get annoyed by her responses. The way she rephrases and throws back questions was annoying, but many thought she was real.
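For the curious, the rephrase-and-reflect trick needs almost no machinery. A minimal sketch in Python (these patterns are invented for illustration, not Weizenbaum's actual script):

```python
import re

# ELIZA-style rules: a regex plus a template that reflects the
# captured fragment back at the user as a question.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."  # non-committal fallback, another ELIZA staple
```

Dropped into a chat channel, rules this thin hold up surprisingly long, which is rather the point of the story above.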

Re:Well... (1)

haystor (102186) | about 6 years ago | (#25300421)

The real question is can the people speaking to Eliza pass the Turing Test?

Re:Well... (2, Insightful)

Itninja (937614) | about 6 years ago | (#25300111)

Exactly. Like teaching a child the best possible answers to a series of predetermined question types. Wait...isn't that standardized testing? Oh my god....American schoolchildren are replicants!

Not replicants (2, Informative)

TheLink (130905) | about 6 years ago | (#25300353)

Not replicants.

The term you're looking for is Artificial Intelligence.

Re:Well... (3, Informative)

houghi (78078) | about 6 years ago | (#25300119)

Does it matter? At least not for passing the Turing test. If the responses are such that you cannot tell the difference, it doesn't matter whether tricks were used or not.

The tricks will be part of the program.

Re:Well... (3, Interesting)

i kan reed (749298) | about 6 years ago | (#25300223)

I think you'll get an answer as soon as you define *thinking*. This is the problem artificial intelligence research faces. People demand a quality from machines without giving a definition of it.

You can't just demand that something meet some arbitrary ideal. It's like asking a programmer to develop a beautiful text editor. It's subjective and you're likely to hate it when they think it's great.

Re:Well... (1)

TheRealMindChild (743925) | about 6 years ago | (#25300433)

You can't just demand that something meet some arbitrary ideal.

Tell that to my wife.

Oh Great .... (5, Funny)

Archangel Michael (180766) | about 6 years ago | (#25300469)

Now we'll see the vim vs emacs flame war. GREAT! THANKS!

Re:Well... (4, Insightful)

Saxerman (253676) | about 6 years ago | (#25300237)

The Turing Test is way past its prime by this point. The original thought experiment of how to tell if a machine can think has merely become a test to see if a program can fool a human. Mostly it's building up a simplistic way to parse responses to match your massive yet limited supply of answers. We're certainly getting close to having programs able to pass the Test, and I can't see many who would try to claim any of them actually 'think'.

That said, it's still an interesting exercise. The raw amount of data that a program requires to mimic the knowledge of a person is an important challenge by itself. And you might be surprised by either how much... or how little it actually requires. Yet there are other bits that are less clever. In order to pass the Test you really want to create a fake persona so the program can share life experiences it's never had, or else cleverly camouflaged 'experiences' that seem human. "Q: Do you enjoy the outdoors at all? A: Not really, I spend a lot of time in the lab." But then you have to place limits on what the program can do, such as not crunching out math problems on the fly. You'd want it to make mistakes, such as typos or forgetting things or only vaguely remembering things. Acting like it needs to take a break, or has been interrupted.

And then you need to dive into the deeper questions of what it really means to be human, or to be able to think. What would we want an AI to be like? Would we want them to have traits so they seem more human, or would we prefer they be merely efficient thinking machines without our 'limitations'?
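The flaws described above (typos, hesitation, imperfect recall) are cheap to fake. A toy post-processor, assuming nothing about any real chatbot's internals:

```python
import random

def humanize(text, rng=None, typo_rate=0.03):
    """Corrupt a bot's reply with occasional mistyped letters.

    A fuller fake would also delay output to mimic typing speed,
    e.g. time.sleep(len(text) / chars_per_second).
    """
    rng = rng or random.Random()
    out = []
    for ch in text:
        if ch.isalpha() and rng.random() < typo_rate:
            # "fat-finger" the key: emit a random letter instead
            out.append(rng.choice("abcdefghijklmnopqrstuvwxyz"))
        else:
            out.append(ch)
    return "".join(out)
```

With typo_rate=0 the text passes through untouched; push it much higher and the "human" starts to look drunk rather than fallible.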

Re:Well... (1)

Steven_M_Campbell (409802) | about 6 years ago | (#25300399)

The age-old answer to this question is:
"I can tell you if the computer is thinking, but you must first tell me if submarines can swim."

Interesting (5, Interesting)

internerdj (1319281) | about 6 years ago | (#25300059)

If we don't have the right to switch off a conscious machine (one that passes the Turing test) does that imply we have the right to switch off a human who fails a Turing test?

Re:Interesting (1)

jesdynf (42915) | about 6 years ago | (#25300147)

"The Great Farkpocalypse of 2009"? Sounds good to me.

Re:Interesting (4, Funny)

CaptainPatent (1087643) | about 6 years ago | (#25300439)

If we don't have the right to switch off a conscious machine (one that passes the Turing test) does that imply we have the right to switch off a human who fails a Turing test?

*mutters under breath*
please be yes... please be yes... please be yes.

Artificial Intelligence? (4, Interesting)

phantomfive (622387) | about 6 years ago | (#25300071)

The purpose of (strong) artificial intelligence isn't to trick humans somehow, it is to figure out how our mind works. What is the algorithm that powers the human brain? No one knows.

Who cares if contestants can be tricked by a computer? Who cares if some computer can calculate chess moves faster than any human? None of this helps us get closer to the real purpose of AI, which is why they call it weak AI.

Re:Artificial Intelligence? (4, Informative)

hobbit (5915) | about 6 years ago | (#25300121)

No, that's the purpose of cognitive science. Artificial intelligence is the name that we give to the study of technology that is between commonplace and (to borrow Arthur C. Clarke's terminology) magic.

Re:Artificial Intelligence? (1)

phantomfive (622387) | about 6 years ago | (#25300561)

Yeah, that is the new weakened definition we give to AI, now that we've realized it's pretty hard. Check out this quote from Wikipedia: []

"machines will be capable, within twenty years, of doing any work a man can do." -Herbert Simon

Yeah, that was the goal. After a while people changed the definition, and started teaching it to their students. Doesn't change what it is, though. The original goal was to make machines think.

Re:Artificial Intelligence? (5, Informative)

internerdj (1319281) | about 6 years ago | (#25300159)

Sort of. It also makes computers more (and less) useful. Weak AI allows developers to offload onto the computer decisions that would normally be tedious for the operator but once seemed out of the realm of a computer's ability to process. Strong AI is of more scientific use and actually brings up the philosophical quandaries. It will bring us to greater understanding of how we think, but don't discount the practical uses of machines that pretend to think.

Re:Artificial Intelligence? (3, Insightful)

phantomfive (622387) | about 6 years ago | (#25300413)

Yes, you are right, there are many great algorithms that have come from AI, but to say "weak AI is here, therefore we don't need strong AI" is kind of sour grapes. Especially when we are talking about something like a Turing test, it is a very hollow victory to say you've won when really all you've managed to do is trick a few candidates.

As far as it goes, there are probably a dozen good questions to figure out if it is a computer or human:
  • Why did the chicken cross the road? Look for the feeling of humor in the response, they will probably think it's funny.
  • Have you ever had your heart broken? This is something you can't lie about: if you haven't had a broken heart, and you pretend you have, it will be easy for listeners to know.
  • What does it feel like to hold your breath under water? Simple experience, but will be hard for any knowledge bank to answer.

Any of these questions might possibly be answered by copying someone's answer from the internet, but if you ask a few of them, pretty soon you will realize this guy is either schizophrenic, or a computer.

So yeah, this might trick a few people, or even a lot, but it's not going to really make old man Turing feel good about it. Unless they actually have solved it.

Re:Artificial Intelligence? (1)

internerdj (1319281) | about 6 years ago | (#25300535)

Agreed. It isn't the victory that everyone hopes for in AI, but the Turing test in and of itself is a very important practical AI problem. If we start talking about high-90th-percentile success rates then we are probably at the point where we can replace customer service representatives with AI representatives and save lots of money, even given the maintenance costs of the equipment. Actually, a 60th-percentile success rate would be enough to reasonably replace most fast food order takers.

Why? (1)

flynt (248848) | about 6 years ago | (#25300085)

Why would it raise these questions? I don't think anyone would disagree that computers are far better at matrix algebra than humans could ever be, so why isn't that the test? The ability to invert matrices differentiates us from the other orders more than language does anyway. Why this arbitrary test? It doesn't seem to have anything more to do with 'consciousness' than an ATM does. I'm not trying to discredit the hard work and progress here, but jumping to consciousness is probably not going to happen in software.

Re:Why? (1)

reilwin (1303589) | about 6 years ago | (#25300377)

The point of the Turing test is the prickly definition of "intelligence". How do you define it?

Is it the ability to juggle mathematical equations in your head? If so, then you might call computers intelligent. And a lot of humans, not.

Turing kind of took the easy way out, and defined intelligence to being what humans can (presumably) recognize: humanity. We know humans are intelligent, so if we can get computers to act like humans, such that we can't tell the difference, then that would be a big step on the way to artificial intelligence.

Of course, there's quite a few arguments which fight against this idea, including the Chinese Room argument [] , but that's something for later.

Re:Why? (2, Insightful)

fph il quozientatore (971015) | about 6 years ago | (#25300565)

I don't think anyone would disagree that computers are far better at matrix algebra than humans could ever be

I do. Tell your computer to invert the square matrix of size 10^10^10^10^10 with ones and twos alternating on the main diagonal and zeros everywhere else. Computers can crunch numbers faster, but humans can recognize a pattern in a problem and exploit it in a novel way. That's what I call intelligence.
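At a size a machine can actually hold, the pattern-spotting described above is easy to demonstrate: once you notice the matrix is diagonal, the inverse is just entrywise reciprocals, no elimination required. A sketch, with a small n standing in for the absurd size:

```python
n = 8  # stand-in for the poster's 10^10^10^10^10
diag = [1.0 if i % 2 == 0 else 2.0 for i in range(n)]  # 1, 2, 1, 2, ... on the diagonal

# A diagonal matrix inverts entrywise; no O(n^3) elimination needed.
inv_diag = [1.0 / d for d in diag]

# Sanity check: the product's diagonal is all ones (off-diagonal stays zero).
assert all(d * v == 1.0 for d, v in zip(diag, inv_diag))
```

The human insight is the one-liner `1.0 / d`; brute-force inversion never gets a look in.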

Oh come on.. (1)

qoncept (599709) | about 6 years ago | (#25300093)

"... and if humans should have the 'right' to switch it off."

I doubt it will raise any questions (5, Insightful)

heatdeath (217147) | about 6 years ago | (#25300103)

It could also raise profound questions about whether a computer has the potential to be "conscious" -- and if humans should have the 'right' to switch it off."

Maybe in the esteemed opinion of vitamine73 it will, but if you knew anything about how artificial conversation engines are constructed, you would understand that they're anything but sentient. Right now, conversation logic is simply trick laid upon trick to stagger through passing as human, and doesn't, at its core, contain anything remotely similar to self-aware thought.

Re:I doubt it will raise any questions (4, Interesting)

aproposofwhat (1019098) | about 6 years ago | (#25300193)

I think the overegging of the pudding is down to one Kevin Warwick, better known to readers of the Register as 'Captain Cyborg'.

He's a notorious publicity tart, and is also involved in running these tests, as he's a lecturer in cybernetics.

See the Register's take on it here []

Re:I doubt it will raise any questions (1)

Danny Rathjens (8471) | about 6 years ago | (#25300505)

What if language evolved as tricks laid upon tricks to pass as a good enough human to be a mate or friend to share food with?

Obvious question (1)

leomekenkamp (566309) | about 6 years ago | (#25300117)

"What is the Answer to Life, the Universe, and Everything?"

And to any answer that follows: "Could you explain that answer?"

Re:Obvious question (1)

Brandano (1192819) | about 6 years ago | (#25300279)

Well, I know it, but you won't like it

Re:Obvious question (0)

Anonymous Coward | about 6 years ago | (#25300327)


Right to switch it off? (1)

TheRealMindChild (743925) | about 6 years ago | (#25300123)

humans should have the 'right' to switch it off

Unless you are going to pay my electric bill, you'd better not tell me I can't turn off JoJo the humungoid file server just because he started dreaming.

Re:Right to switch it off? (1)

houghi (78078) | about 6 years ago | (#25300253)

Do robots dream of electric sheep?

Questions? (0)

Anonymous Coward | about 6 years ago | (#25300131)

We can emulate consciousness, which is probably good enough since, from any individual point of view, only one being in the universe is conscious (i.e., the self).

However, until we really define and understand consciousness, we will not be able to intentionally reproduce it in computers. We probably will not produce it by accident, either, unless and until we are reproducing the human mind.

Re:Questions? (2, Informative)

hobbit (5915) | about 6 years ago | (#25300317)

We probably will not produce it by accident, either, unless and until we are reproducing the human mind.

If you assume substrate independence, you end up here [] .

A modified Turing test (2, Interesting)

Krishnoid (984597) | about 6 years ago | (#25300157)

I wonder whether it would be any easier to tell which (if any, or all) are computers if all of them were allowed to respond to a given question in a group setting. As in the case of organizations, group behavior might mask individual irregularities; but it may also make it easier to identify an individual by comparing it to the others.

Re:A modified Turing test (1)

hobbit (5915) | about 6 years ago | (#25300343)

"He's the fake!"

"No, he's the fake!"

AI? Pffft (4, Interesting)

rehtonAesoohC (954490) | about 6 years ago | (#25300175)

This quote from TFA nails it down for me:

...AI is an exciting subject, but the Turing test is pretty crude.

The Turing test doesn't tell you whether a machine is conscious or self-aware... All it tells you is whether or not a programmer or group of programmers created a sufficiently advanced chat-bot. So what if a machine can have "conversations" with someone? That doesn't mean that same machine could create a symphony or look at a sunset and know what makes the view beautiful.

Star Trek androids with emotion chips should stay in the realm of Star Trek, because they're surely not happening here.

More like the reverse (3, Interesting)

OeLeWaPpErKe (412765) | about 6 years ago | (#25300179)

Personally I think the reverse is more likely. That not only humans will have the right to switch programs off, but other programs too, and this is going to evolve into the "right" to "switch off" humans, due to a better understanding of exactly what a human is.

Think about it. If we were able to predict human actions with even 50% accuracy, many of us wouldn't consider each other persons anymore, but mere programs.

If we can predict 90% or so, it's hopeless trying to defend that there's anything conscious about these two-legged mammals we find in the cities. Put even a little bit of drugs, even soft ones, in a human and nobody has any trouble whatsoever predicting what's going to happen.

Furthermore, programmatic consciousness is a LOT cheaper (100 per CPU?) than a real live human. It contributes a lot less to oil consumption, CO2, and so on and so forth... and it's billions of times more mobile than a human: for a program, going into orbit, or to the moon or Mars, or even to other stars once a basic presence is established, would just mean pausing yourself, copying yourself over, and resuming. Going to the Bahamas has the price of a phone call.

They'd be more capable, and can be made virtually invulnerable (to kill a redundant program you'd have to terminate every computer it runs on)...

Ask the same question (1)

houghi (78078) | about 6 years ago | (#25300187)

over and over again. See if it gets annoyed and starts making up silly answers.

Volunteer asks question, AI responds: (0)

Anonymous Coward | about 6 years ago | (#25300243)

"I'm sorry Dave, I'm afraid I can't do that."

I think that the Turing test is too simple (1)

mbone (558574) | about 6 years ago | (#25300247)

Humans are built to assume that the entity they are talking with understands them. Ever since I first saw Eliza in action (where people would have "meaningful" interactions with a program that was not much more than a stimulus-response box), I realized that the Turing test was really meaningless.

To put it another way, if IBM wanted to put the money into the Turing test that they put into chess, there would be a very good Turing tester, but no more understanding or consciousness than Deep Blue has understanding or consciousness of chess.

*cough* Bullshit! *cough* (1)

TheMiddleRoad (1153113) | about 6 years ago | (#25300269)

*cough* Bullshit! *cough*

flag waving & drum beating new national pastim (0)

Anonymous Coward | about 6 years ago | (#25300281)

that's one option. there are others. greed, fear & ego are unprecedented evile's primary weapons. those, along with deception & coercion, helps most of us remain (unwittingly?) dependent on its' life0cidal hired goons' agenda. most of yOUR dwindling resources are being squandered on the 'wars', & continuation of the billionerrors stock markup FraUD/pyramid schemes. nobody ever mentions the real long term costs of those debacles in both life & any notion of prosperity for us, or our children, not to mention the abuse of the consciences of those of us who still have one. see you on the other side of it. the lights are coming up all over now. conspiracy theorists are being vindicated. some might choose a tin umbrella to go with their hats. the fairytail is winding down now. let your conscience be yOUR guide. you can be more helpful than you might have imagined. there are still some choices. if they do not suit you, consider the likely results of continuing to follow the corepirate nazi hypenosys story LIEn, whereas anything of relevance is replaced almost instantly with pr ?firm? scriptdead mindphuking propaganda or 'celebrity' trivia 'foam'. meanwhile; don't forget to get a little more oxygen on yOUR brain, & look up in the sky from time to time, starting early in the day. there's lots going on up there.

is it time to get real yet? A LOT of energy is being squandered in attempts to keep US in the dark. in the end (give or take a few 1000 years), the creators will prevail (world without end, etc...), as it has always been. the process of gaining yOUR release from the current hostage situation may not be what you might think it is. butt of course, most of US don't know, or care what a precarious/fatal situation we're in. for example; the insidious attempts by the felonious corepirate nazi execrable to block the suns' light, interfering with a requirement (sunlight) for us to stay healthy/alive. it's likely not good for yOUR health/memories 'else they'd be bragging about it? we're intending for the whoreabully deceptive (they'll do ANYTHING for a bit more monIE/power) felons to give up/fail even further, in attempting to control the 'weather', as well as a # of other things/events.

'The current rate of extinction is around 10 to 100 times the usual background level, and has been elevated above the background level since the Pleistocene. The current extinction rate is more rapid than in any other extinction event in earth history, and 50% of species could be extinct by the end of this century. While the role of humans is unclear in the longer-term extinction pattern, it is clear that factors such as deforestation, habitat destruction, hunting, the introduction of non-native species, pollution and climate change have reduced biodiversity profoundly.' (wiki)

"I think the bottom line is, what kind of a world do you want to leave for your children," Andrew Smith, a professor in the Arizona State University School of Life Sciences, said in a telephone interview. "How impoverished we would be if we lost 25 percent of the world's mammals," said Smith, one of more than 100 co-authors of the report. "Within our lifetime hundreds of species could be lost as a result of our own actions, a frightening sign of what is happening to the ecosystems where they live," added Julia Marton-Lefevre, IUCN director general. "We must now set clear targets for the future to reverse this trend to ensure that our enduring legacy is not to wipe out many of our closest relatives."
consult with/trust in yOUR creators. providing more than enough of everything for everyone (without any distracting/spiritdead personal gain motives), whilst badtolling unprecedented evile, using an unlimited supply of newclear power, since/until forever. see you there?

"If my people, which are called by my name, shall humble themselves, and pray, and seek my face, and turn from their wicked ways; then will I hear from heaven, and will forgive their sin, and will heal their land."

Not much progress in 42 years (1)

deadbeefcafe (1371017) | about 6 years ago | (#25300289)

Looking at the example transcript with one of the contestants, it doesn't seem much better than ELIZA [] unfortunately :/

I thought last night's debates were AI. (0, Offtopic)

swschrad (312009) | about 6 years ago | (#25300297)

certainly nothing original or spontaneous. probably the most important application of AI yet, scaring voters of all stripes ;)

Contentious Chess Match and then some. (0, Troll)

poetd (822150) | about 6 years ago | (#25300299)

Kasparov vs Deep Blue is possibly the most disputed Chess Match in History.

Many believe no computer would have ignored the material-based sacrifice Garry made in game 6, and Garry has always maintained that he believes humans intervened in those games, a claim backed by many top-level grandmasters.

IBM immediately dismantled the Computer after the match and to this day have refused to release the logs from the machine which would prove how it made such an improbable (for a computer at least) move.

Hardly a significant breakthrough in A.I. if, as many (including myself) believe, IBM cheated.

Re:Contentious Chess Match and then some. (2, Informative)

deadbeefcafe (1371017) | about 6 years ago | (#25300477)

... and to this day have refused to release the logs from the machine which would prove how it made such an improbable (for a computer at least) move.

Log from game 6 []

From here: []

Re:Contentious Chess Match and then some. (1)

bman (84104) | about 6 years ago | (#25300515)


Re:Contentious Chess Match and then some. (1)

poetd (822150) | about 6 years ago | (#25300533)

That is fascinating (and I wish I knew how to read it beyond the moves, lol!). I've long been fascinated by that match, and have read numerous books and seen documentaries and films revolving around IBM's reluctance to release any data or logs. (But just so you know, I'm still with Garry on this one.)

Re:Contentious Chess Match and then some. (0)

Anonymous Coward | about 6 years ago | (#25300499)

It's unlikely that IBM cheated by human intervention because Garry would have been able to clobber any human.

It is far more likely that Deep Blue was specifically programmed to defeat Kasparov, and so had gambit avoidance built in.

Male or female? (1)

Cur8or (1220818) | about 6 years ago | (#25300319)

Is it possible to make a machine think like a woman or is that the next (gay) level of the Turing test?

Computer Chess has not been AI for a long time (4, Insightful)

MasterOfMagic (151058) | about 6 years ago | (#25300359)

It is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997.

I don't understand how this is a breakthrough for artificial intelligence. Deep Blue didn't "think", at least not in the way most people think when they consider artificial intelligence. It did what computers are really good at - it computed.

Deep Blue applied an evaluation mechanism specifically tuned to chess - taking the location of pieces on the board and computing a number telling it how "bad" or "good" this position was and how "bad" or "good" responses to this position would be. Granted, it took this to a depth farther than any other chess computer in history, but it was doing essentially what a small, handheld chess computer does.

Of course a computer is going to be good at computing. That doesn't mean it's thinking.

Early chess computers used AI techniques to try to cut out candidate moves. This was expensive in CPU cycles, but the thought was to get them to play chess like humans. Computer chess since the AI Winter has been all about number crunching: let Moore's Law take hold and just brute-force our way through the problem, evaluating deeper because we have a faster processor. This is what Deep Blue did.

If Deep Blue were true AI, then it wouldn't be limited just to chess. It's an interesting experiment in computer chess, an interesting experiment in tuning an algorithm against a human, and an interesting experiment in making a computer chess opening book, but a huge leap forward in AI it isn't.
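The "compute a number and recurse deeper" approach described above can be sketched as plain fixed-depth minimax. This is a toy illustration: the game, move generator, and evaluation function below are invented stand-ins, nothing like Deep Blue's actual parallel alpha-beta search with hardware evaluation.

```python
# Hypothetical toy sketch of brute-force game search: a static evaluation
# function scores a position, and fixed-depth minimax grinds through the
# game tree. There is no understanding anywhere, only computation.

def minimax(position, depth, maximizing, moves, evaluate):
    """Exhaustively score `position` by searching `depth` plies ahead."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)  # leaf: fall back to the static score
    scores = [minimax(child, depth - 1, not maximizing, moves, evaluate)
              for child in children]
    return max(scores) if maximizing else min(scores)

# Invented stand-in "game": a position is an int, a move increments or
# doubles it, and the static evaluation is just the number itself
# (a stand-in for a material count in chess).
moves = lambda p: [p + 1, p * 2] if p < 20 else []
evaluate = lambda p: p

best = minimax(1, 3, True, moves, evaluate)
print(best)
```

Making this faster (a deeper `depth`, a cleverer `evaluate`) improves play without adding anything that resembles thought, which is the commenter's point.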

C'mon everybody, let's go scorch the sky... (1)

PinkyDead (862370) | about 6 years ago | (#25300371)

Then we'll see who's the dominant species-type-thing round here.

Nowhere Close to Passing Yet (4, Insightful)

ironwill96 (736883) | about 6 years ago | (#25300373)

If you read TFA, they have a sample chat which just shows you how stupid these chat bots still are. It is extremely easy to get them to just parrot responses and then try to change the subject in completely random directions.

I have yet to see any chat bot that can figure out the line of questioning, then pick up and introduce interesting things to the conversation that are related to that subject. I think the only way you will get bots that "pass" this test is to have massive databases of words, relationships between words, and subjects with corresponding topics of discussion. Still, the computer won't be intelligent; it will just be reciting from its huge database of responses.

I think the type of question I'd ask these bots is something that would require them to extemporize, and they'd all fail. For example: "You have two rubber ducks, what are the possible ways you could use them if you don't have a bathtub?"

Any human could reply to that with things like "I'd put them in a stream, run over them with my car, put them on a lake, in the swimming pool", etc., but a computer program isn't likely to respond to that in any way that makes sense. The response I'd expect from the computer would be "You like ducks then?".
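The keyword-parroting deflection described above is easy to reproduce. Here is a hypothetical sketch (the stopword list and response phrasing are invented for illustration) of how a weak bot arrives at exactly that kind of reply:

```python
# Hypothetical sketch of keyword parroting: strip punctuation, drop common
# words, pick a surviving noun-ish token, and echo it in a canned deflection.
# This is how a bot with no comprehension ends up with "You like ducks then?".
STOPWORDS = {"you", "have", "two", "what", "are", "the", "possible", "ways",
             "could", "use", "them", "if", "don't", "a"}

def parrot(question: str) -> str:
    words = [w.strip(",.?!\"").lower() for w in question.split()]
    keywords = [w for w in words if w and w not in STOPWORDS]
    if not keywords:
        return "Tell me more."
    # Crude heuristic: prefer a plural-looking word as the "topic".
    plurals = [w for w in keywords if w.endswith("s")]
    topic = plurals[0] if plurals else keywords[0]
    return f"You like {topic} then?"

print(parrot("You have two rubber ducks, what are the possible ways "
             "you could use them if you don't have a bathtub?"))
```

No amount of tuning this kind of keyword extraction produces the open-ended extemporizing the rubber-duck question demands; it can only deflect.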

If it's really thinking.. (3, Interesting)

cliveholloway (132299) | about 6 years ago | (#25300381)

Tehy wolud hvae no plorbem rndiaeg a stennece lkie tihs. Can Tehy?

AS (2)

Mortiss (812218) | about 6 years ago | (#25300387)

What if, after all this huge amount of work, scientists discover that they have successfully developed Artificial Stupidity instead?

I think this question was once posed by Stanislaw Lem (sorry no source)

Analogy fails (0)

Anonymous Coward | about 6 years ago | (#25300395)

Let's say the Turing test was instead to throw a ball over a curtain rod, behind the curtain, and have it lobbed back to you, and be able to recognise whether it was lobbed by a machine or a human.

Then it is possible to set up a purely mechanical contraption of springs and levers that emulates a human very well. Springs and levers tend not to have self-awareness. The example is valid because the Turing robots of today are essentially machines, producing output that is a function of your input. Humans have never been proven to work deterministically, because you cannot predict their output 100% of the time in advance, only justify it after the fact.

When the time is right to start discussing true robot consciousness and self-awareness, I think it will be quite evident and visible to everyone.

There is no question "should you turn it off?" (1)

Jack9 (11421) | about 6 years ago | (#25300427)

Humans have been actively looking for and dreaming of ways to turn the human body off (stasis) for centuries. That question isn't profound in any way, as medical science has already answered "yes, as soon as we can". Being able to improve a body (mechanical, electrical, chemical, animal) without destroying it is not a bad thing. Who decides when? That's always been up to the medical proxy; in this case, the creator or legal owner, which is equivalent (although it doesn't sound right, I cannot argue that it isn't so).

Summary text is wrong! (0)

Anonymous Coward | about 6 years ago | (#25300453)


If any program succeeds, it is likely to be hailed as the most irrelevant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997

Deep Blue's win had nothing to do with understanding or intelligence; it was just "solving" chess sufficiently to beat Kasparov. The Turing Test is a product of a time essentially without useful computers or neuroscience. In modern times we can devise better tests of intelligence, many of which would be easier for genuinely intelligent programs to pass. But we're not interested in that, because the field of AI is still a farce.

Holy crap... (5, Informative)

Keyper7 (1160079) | about 6 years ago | (#25300479)

From TFA:

Can you tell human from machine?
One of these conversations is between Professor Kevin Warwick (KW) and a human; the other between Warwick and a computer. Can you tell which is which?

Conversation one:
Subject: Hello. My name is Nick and I am a human.
KW: Are you happy being a human?
Subject: Yes, I am. Are you? Good. Then we are both happy.
KW: Yes, I am. Are you happy with your life at present?
Subject: Parts of it. I hate my job, but most people do, right?
KW: I'm OK with mine. What is your job?
Subject: I work as an 'online internet advertising monitor', which is fancy language for electronic filing. What do you do?
KW: I interrogate humans and machines.
Subject: Which ones do you prefer, humans or machines?
KW: Which do you prefer?
Subject: Hmm. Depends on for what purpose you mean.
KW: To go to a restaurant, for example?
Subject: Then I would much prefer going with a human.

Conversation two:
Subject: Hello, my name is Mike and I am a man.
KW: Are you happy being a human?
Subject: Judge, I'm a guy.
KW: Does that worry you?
Subject: Don't worry, we'll work everything through.
KW: Do you have worries then?
Subject: Let's move on from that basis.
KW: Why?
Subject: Why? I like this subject.
KW: What is the subject, do you think?
Subject: What is the subject I wonder if I think? I know that it's good to thonk about. Think deeplt about mysterious subjects.
KW: What mysterious subjects?
Subject: Think deeply about mysterious dubjects.
KW: What's a dubject?

Conversation one is with a human; conversation two is with the program Ultra Hal.

No shit, Sherlock? The second conversation stops making sense in the first answer.

Offtopic question: What's with the story tags? (1)

NeutronCowboy (896098) | about 6 years ago | (#25300485)

I've seen them a few times. Now, I've also seen the "noshititsastory" tag. I'd have to agree with that. So, for anyone who tags these things "stories": why? Unless it's some new form of complaint about lack of newsworthiness, I don't get it.

Turing comment poster (1)

192939495969798999 (58312) | about 6 years ago | (#25300529)

That's nothing, years ago I wrote a program that reads these summaries and automatically generates an appropriate response in the comments, and in all that time, I've never had any problems with it. and in all that time, I've never had any problems with it.

Ehem! No, no, no, no, no! (0)

Anonymous Coward | about 6 years ago | (#25300547)

The Turing Test as played at the university is a wonderful test of natural-language parsing, and I applaud those who participate. While I can feed my DasKeyboard information nearly as fast as most humans can talk, it's still quite obvious that this interface is designed around the machine. Every measurable increment by which we push this interface toward Homo sapiens and their innate strengths opens up a vast number of applications that are simply impractical with the current demarcation point.

This contribution acknowledged, I must object to the notion that, as Professor Warwick suggests, "Alan Turing set this game up was that maybe to him consciousness was not that important; it's more the appearance of it..." Turing was a genius, by any standard. He had the ability to see patterns in chaos and extrapolate the existence and applications of technologies that would never be built in his lifetime. The Turing Test is not about engaging in small talk with university students; it's about testing a computer's ability to think. Turing's questions for that computer wouldn't be:

Subject: Parts of it. I hate my job, but most people do, right?
KW: I'm OK with mine. What is your job?

        It would be more like:

Dawkins: How do you think it would change society if there existed in the 1% of the human gene pool alleles for the normal and super-normal formation of the Hippocampus such that it could dominate the inputs of Entorhinal Cortex.

KW: I never was very good at biology

Hawking: What effect do you think it would have on the Middle East if we could get 3% yields from aneutronic fusion?

KW: Boron tastes bitter, even in the middle east.

Seinfeld: How many bonus Big Top points do you get for buying socks with your shoes?

KW: Wow. That's funny.

        In short, I wish all the contestants, human and artificial alike, the best of luck. But I seriously doubt Turing saw the true test of AI as the ability to play a character on Cheers.
