
The Lovelace Test Is Better Than the Turing Test At Detecting AI

samzenpus posted about 4 months ago | from the why-did-you-program-me-to-feel-pain? dept.

AI 285

meghan elizabeth writes: If the Turing Test can be fooled by common trickery, it's time to consider whether we need a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program—it could be an idea, a novel, a piece of music, anything—can't be a hardware fluke. The machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.


Lovelace? (-1)

Anonymous Coward | about 4 months ago | (#47421473)

Why is it called the Lovelace test? Ada Lovelace was just someone who translated a book for the world's first programmer.

Re:Lovelace? (5, Funny)

I'm just joshin (633449) | about 4 months ago | (#47421481)

Maybe they mean the "Linda Lovelace" test?

Re:Lovelace? (2, Funny)

Anonymous Coward | about 4 months ago | (#47421537)

if a human cannot determine if they just got a hummer from a machine or another human?

Re:Lovelace? (1)

Noah Haders (3621429) | about 4 months ago | (#47421667)

that's what I assumed it to be.

Re:Lovelace? (4, Funny)

TWX (665546) | about 4 months ago | (#47422067)

if a human cannot determine if they just got a hummer from a machine or another human?

Gives a whole new meaning to, "My computer went down on me..."

Re:Lovelace? (0)

DaveAtFraud (460127) | about 4 months ago | (#47421725)

Maybe they mean the "Linda Lovelace" test?

Unfortunately, she's dead. Doesn't take much intelligence to be dead.

Cheers,
Dave

Re:Lovelace? (-1)

Anonymous Coward | about 4 months ago | (#47422037)

Fortunately, she's dead. Unfortunately, you're not.

The less humans on this planet, the better.

Re:Lovelace? (1)

TWX (665546) | about 4 months ago | (#47422077)

You could add to your solution. There's exactly one person that you could kill with a guarantee of facing no legal repercussions for the act...

Re:Lovelace? (0)

Anonymous Coward | about 4 months ago | (#47422121)

Unless of course you fail at the attempt.

Re:Lovelace? (2)

seven of five (578993) | about 4 months ago | (#47421835)

Just give me the blow by blow account.

Re:Lovelace? (2)

Horshu (2754893) | about 4 months ago | (#47422005)

That's deep, man.

Turing test not passed. (-1, Troll)

HornWumpus (783565) | about 4 months ago | (#47421477)

Slashdot sucks, Dice sucks, kids these days, get off my lawn.

Re:Turing test not passed. (-1)

Anonymous Coward | about 4 months ago | (#47421561)

That's because they keep shifting the goalposts.

Re:Turing test not passed. (5, Informative)

ShanghaiBill (739463) | about 4 months ago | (#47421791)

That's because they keep shifting the goalposts.

They are shifting them again. This new test includes this requirement: The machine's designers must not be able to explain how their original code led to this new program. So now anything we understand is not intelligence? So if someone figures out how the brain works, and is able to describe its function, then people will no longer be intelligent? Intelligence is a characteristic of behavior. If it behaves intelligently, then it is intelligent. The underlying mechanism should be irrelevant.

philosophical discussion only not science (2, Interesting)

globaljustin (574257) | about 4 months ago | (#47422099)

So if someone figures out how the brain works, and is able to describe its function, then people will no longer be intelligent? Intelligence is a characteristic of behavior. If it behaves intelligently, then it is intelligent. The underlying mechanism should be irrelevant.

No.

you describe "behaviorism", which is a thoroughly discredited and reductive theory

the ***whole conversation*** is about ***the underlying mechanism***

the "Lovelace Test" is more rigorous, but how it will affect computing I cannot say, because the Turing Test itself is a time-wasting notion.

the problem: questions of "what is intelligence" are Philosophy 101 questions...not scientific or computing questions...and we hurt our industry when we overlap the two

just because we can prod a human to make them do something, or dose them with a chemical or what have you, doesn't mean we have disproven the existence of "free will"

we will map every neural connection in the human brain soon; this doesn't mean all humans will become remote-controlled techno-zombies

people take others' freedom by many means:
by gunpoint
by emotional manipulation
by blackmail
by too much alcohol
by the Frey Effect [slashdot.org]
by threats of loss of work

so learning how neurons work is just another potential addition to that list

the point: humans have free will and it can be subverted in many ways; this does not have any implications for computing

Re:Turing test not passed. (2)

TapeCutter (624760) | about 4 months ago | (#47422149)

So now anything we understand is not intelligence?

I heard a great anecdote about this from an MIT professor on youtube [youtube.com]. Back in the 80's the professor developed an AI program that could translate equations into the handful of standard forms required by calculus and solve them. A student heard about this and came calling to see the program in action. The professor spent an hour explaining the algorithm, and when the student finally understood, he exclaimed, "That's not intelligent, it's doing calculus the same way I do".

It could be argued that neither the student nor the computer was intelligent, since they were simply following rules, but if that's the case then only the handful of mathematicians who discovered the standard forms are intelligent. It should also be noted that since that time computers have routinely discovered previously unknown mathematical truths by brute-force extrapolation from the basic axioms of mathematics; however, none of them have been particularly useful to humans.

When people dispute the existence of AI, what they are really disputing is the existence of artificial consciousness; we simply don't know if a computer running a complex algorithm is conscious, and quite frankly it's irrelevant to the question of intelligence. For example, most people who have studied ants agree that an ant's nest displays highly intelligent behaviour: they have evolved a more efficient and generally better-optimised solution to the travelling salesman problem than human mathematics (or intuition) can provide, yet few (if any) people would argue that an ant or its nest is a conscious being.
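The colony-level intelligence described above can be sketched in code. Below is a minimal ant-colony pass over a toy travelling-salesman instance; the parameter values and function names are purely illustrative, not taken from any real ant model:

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_colony_tsp(dist, n_ants=10, n_iters=50, evaporation=0.5, seed=0):
    """Toy ant-colony optimisation: ants build tours biased toward short,
    pheromone-rich edges; the best tour found reinforces its own edges."""
    rng = random.Random(seed)
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ant in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                cur, cand = tour[-1], list(unvisited)
                # Edge choice favours strong pheromone and short distance.
                weights = [pheromone[cur][j] / (dist[cur][j] + 1e-9) for j in cand]
                nxt = rng.choices(cand, weights=weights)[0]
                tour.append(nxt)
                unvisited.discard(nxt)
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation, then reinforcement along the best tour so far.
        pheromone = [[p * (1 - evaporation) for p in row] for row in pheromone]
        for i in range(n):
            a, b = best_tour[i], best_tour[(i + 1) % n]
            pheromone[a][b] += 1.0 / best_len
            pheromone[b][a] += 1.0 / best_len
    return best_tour, best_len
```

On a four-city ring it reliably finds the shortest loop: no individual "ant" is intelligent, yet the colony-level statistics converge on good tours, which is exactly the parent's point.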

Re:Turing test not passed. (4, Interesting)

phantomfive (622387) | about 4 months ago | (#47422057)

That's because they keep shifting the goalposts.

I don't think "a chatbot isn't AI and hasn't been since the 1960s when they were invented, whether you call it a doctor or a Ukrainian kid doesn't make any difference" counts as shifting the goalposts.

Furthermore, reproducible results are an important part of science. Let him release his source code, or explain his algorithm so we can reproduce it. Anything less is not science.

Re:Turing test not passed. (2, Insightful)

Anonymous Coward | about 4 months ago | (#47421565)

It was passed as defined: 10 out of 30 judges (lay people) thought they were talking with a human when they were talking with a machine in 5-minute chat sessions. Whether passing this is in any way significant is up for debate, but the test was passed.

Re:Turing test not passed. (-1)

Anonymous Coward | about 4 months ago | (#47421579)

you think it was passed because you're a retard

Re:Turing test not passed. (0)

Anonymous Coward | about 4 months ago | (#47421603)

The criteria of the test were defined, and the criteria of the test were met. Please share your superior intellect and explain to my poor retard self how it has not passed.

Re:Turing test not passed. (0)

Anonymous Coward | about 4 months ago | (#47421799)

were they REALLY fooled, or compelled to lie?

i ALWAYS lie when asked to participate in a survey. WHY NOT?

AC and Turing Test (1)

TWX (665546) | about 4 months ago | (#47422089)

Sometimes I wonder if all of the ACs are simply one bot with the electronic equivalent of schizophrenia talking to itself...

Re:Turing test not passed. (4, Informative)

sjames (1099) | about 4 months ago | (#47421885)

Alas, the test that was "passed" was not actually the test Turing proposed.

So it passed the Turingish test.

Re:Turing test not passed. (-1)

Anonymous Coward | about 4 months ago | (#47422097)

You think it wasn't because you have a Turing-shaped dildo up your bum.

Sorry, your hero wasn't some magic nerd capable of seeing beyond the veil of space and time.

Re:Turing test not passed. (5, Insightful)

nmb3000 (741169) | about 4 months ago | (#47421647)

It was passed as defined

The Turing Test was not passed, and the only people who claim it was are ignorant reporters looking for an easy story with a catchy headline and tech morons who also believe Kevin Warwick is a cyborg.

The test was rigged in every way possible:

- judges told they were talking to a child
- that doesn't speak English as a primary language
- which was programmed with the express intent of misdirection
- and only "fooled" 30% of the judges.

And, even after all that, Cleverbot [cleverbot.com] did a much better job back in 2011 with a 60% success rate.

This Eugene test outcome was a complete farce -- something to remind everyone that Warwick still exists and to separate the ignorant and sensational tech news trash rags from the more legitimate sources of information.

Re:Turing test not passed. (1)

Anonymous Coward | about 4 months ago | (#47421923)

The point of passing a Turing test is that a human will not know or believe that they are talking to a computer. Even IBM's Watson could satisfy this if you expressly told the humans they were talking to another human. If you tell people they are talking to a human, without giving them the option to consider otherwise, they will just assume that information is correct, having no reason to believe otherwise.

If there were some honesty metric applied, then no computer could ever pass the Turing Test, because simply asking how old it is, or what its name is, would give the tester all the ammo they need to know it's not a real person. An AI doesn't know how old it is; it just exists. The same goes for questions about appearance.

They should have run a double-blind test by putting all the humans in a room along with an equal number of computers. Have two or four of the terminals connect to the AI under test, and the rest connect to some other person in the room. Tell the participants they are testing software and that their only requirement is not to speak out loud or make eye contact with the other people in the room. To make things more honest, use cubicle-style partitions. After some testing, rotate the chat partners. At the end of the test, have them identify who they talked to in the room.

As for the Lovelace test: an AI that can build something original sounds more like a SkyNET test. All an AI has to do is generate something original without input. If you look at existing build scripts, there is a lot of "dumb AI" going on, and by definition it creates something original, but it has input. Likewise, Miku software can generate original music, but that is still from input. It needs to create something original without input to transform. For example, have a computer generate a picture that a human can recognize as a human/plant/animal without being told how: the computer can look at 10,000 pictures of rabbits, but it's not permitted to simply copy any one of them, and what it draws must still look like a rabbit.

Re:Turing test not passed. (5, Informative)

AthanasiusKircher (1333179) | about 4 months ago | (#47422115)

It was passed as defined

The Turing Test was not passed, and the only people who claim it was are ignorant reporters looking for an easy story with a catchy headline

Indeed. There's a lot of misinformation out there about what Turing originally specified. The test is NOT simply "Can a computer have a reasonable conversation with an unsuspecting human so that the human will not figure out that the computer is not human?" By that standard, ELIZA passed the Turing test many decades ago.

The test also doesn't have some sort of magical "fool 30%" threshold -- Turing simply speculated that by the year 2000, AI would have progressed enough that it could fool 30% of "interrogators" (more on that term below). The 30% is NOT a threshold for passing the test -- it was just a statement by Turing about how often AI would pass the test by the year 2000.

So what was the test?

The test involves three entities: an "interrogator," a computer, and a normal human responder. The interrogator is assumed to be well-educated and familiar with the nature of the test. The interrogator has five minutes to question both the computer and the normal human in order to determine which is the actual human. The interrogator is assumed to bring an intelligent skepticism to the test -- the standard is not just trying to have a normal conversation, but instead the interrogator would actively probe the intelligence of the AI and the human, designing queries which would find even small flaws or inconsistencies that would suggest the lack of complex cognitive understanding.

Turing's article actually gives an example of the type of dialogue the interrogator should try -- it involves a relatively high-level debate about a Shakespearean sonnet. The interrogator questions the AI about the meaning of the sonnet and tries to identify whether the AI can evaluate the interrogator's suggestions on substituting new words or phrases into the poem. The AI is supposed to detect various types of errors requiring considerable fluency in English and creativity -- like recognizing that a suggested change in the poem wouldn't fit the meter, or it wouldn't be idiomatic English, or the meaning would make an inappropriate metaphor in the context of the poem.

THAT'S the sort of "intelligence" Turing was envisioning. The "interrogator" would have these complex discussions with both the AI and the human, and then render a verdict.

Now, compare that to the situation in TFS where the claim is that the Turing test was "passed" by a chatbot fooling people. That's crap. The chatbot in question, as the parent noted, was not even fluent in the language of the interrogator; it was deliberately evasive and nonresponsive (instead of Turing's example of AIs and humans having willing debates with the interrogator); there was no human to compare the chatbot to; and the interrogators were apparently not asking probing questions to determine the nature of the "intelligence" (it's not even clear whether the interrogators knew what their role was, the nature of the test, whether they might be chatting with AI, etc.).

Thus, Turing's test -- as originally described -- was nowhere close to "passed." Today's chatbots can't even carry on a normal small-talk discussion for 30 seconds with a probing interrogator without sounding stupid, evasive, non-responsive, mentally ill, and/or making incredibly ridiculous errors in common idiomatic English.

In contrast, Turing was predicting that interrogators would have to be debating artistic substitutions of idiomatic and metaphorical English usage in Shakespeare's sonnets to differentiate a computer from a real (presumably quite intelligent) human by the year 2000. In effect, Turing seemed to assume that he would talk to the AI in the way he might debate things with a rather intelligent peer or colleague.

Turing was wrong about his predictions. But that doesn't mean his test is invalid -- to the contrary, his standard was so ridiculously high that we are nowhere close to having AI that could pass it.

dwarf fortress (4, Insightful)

Anonymous Coward | about 4 months ago | (#47421493)

That is all.

Re:dwarf fortress (1)

Horshu (2754893) | about 4 months ago | (#47422013)

I first heard about it this morning on this site. Holy crap! I'm tempted to try it but almost afraid to.

Most humans couldn't pass that test (4, Insightful)

voss (52565) | about 4 months ago | (#47421499)

When was the last time the average person created something original?

Re:Most humans couldn't pass that test (1)

retchdog (1319261) | about 4 months ago | (#47421519)

and there are quite a few human pairs for which one would not be able to convince the other that they were speaking intelligibly, either.

it is irrelevant, though. it is only necessary for one computer (however that's defined) to pass this test. i don't see how it's really any better than Turing: it's a nice idea, but it seems even more vague than the Turing test.

Re:Most humans couldn't pass that test (2)

TWX (665546) | about 4 months ago | (#47422105)

Has anyone really been far even as decided to use even go want to do look more like?

Re:Most humans couldn't pass that test (1)

Lisias (447563) | about 4 months ago | (#47421531)

People usually make the big mistake of taking themselves as the measure for everybody else.

Turing was a hell of a smart guy - I bet my mouse that he had this mindset ("everybody is more or less as smart as me") when he designed that Test.

By the way, there's a joke around here that states: the sum of all I.Q. on Earth is a constant - and the population is growing...

There are more educated people nowadays, but smarter? I'm afraid not - Turing didn't live to see what we are nowadays.

Re:Most humans couldn't pass that test (0)

Anonymous Coward | about 4 months ago | (#47421761)

I don't necessarily agree with your statement. I have 'invented' many things throughout my years, but very few are original ideas. That does not mean I have not thought of something "new." I have, without knowledge of what others were doing or prior directed reading, 'invented' directional sound projection devices, the idea of scarcity, the terrible concept of roller wheels in shoes, non-Newtonian-fluid body armor, dropping rods of metal from space as a weapon, etc. Just because I was not at the cutting edge does not mean that new concepts don't exist... unless, of course, you are talking about literature, in which case, yeah, there hasn't been anything new since Sophocles, and even that is most likely just because of documentation. The point is, a computer doesn't have to create a "NEW" idea, only one which it couldn't have thought up through its direct programming and design. Good idea, bad idea, it shouldn't matter. My personal opinion: emotion is a necessary requirement, along with logic, for creativity and thus A.I.

Re:Most humans couldn't pass that test (1)

jimmydevice (699057) | about 4 months ago | (#47421969)

If it came up with the most efficient / fastest sort and search algorithms I might be impressed. It's still not intelligence.

Re:Most humans couldn't pass that test (1)

mi (197448) | about 4 months ago | (#47421555)

Well, it does happen every day here and there. But there are a lot of people who never manage to — throughout their whole lives... And I'm not even sure about myself, unfortunately.

Re:Most humans couldn't pass that test (0)

Anonymous Coward | about 4 months ago | (#47421563)

Not to mention that originality is a bad test of whether something is "good". From art to science, humans build on the work of those who came before. The challenge is picking good stuff to incorporate into the new stuff. Humans are a sort of "culture filter", constantly combining good ideas and discarding bad ones. A human with no education is more likely to come up with an "original idea" than someone with a classical education, and the resulting "original idea" will probably be a bad one.

Re:Most humans couldn't pass that test (0)

Anonymous Coward | about 4 months ago | (#47421645)

That last line of the summary isn't a great wrap-up -- it's not that something original needs to be produced, it's that you cannot have an explanation for where it came from. When I ask you to hum 3 notes, you may choose part of a song and not produce anything original, but your reason for choosing those 3 notes was that it's what came to mind. If a program is designed to choose 3 random notes when asked to hum, that's not AI, because you can explain why. Until the designers' only explanation is "because you asked it to," it doesn't pass the test, regardless of originality.

Re:Most humans couldn't pass that test (3, Interesting)

sg_oneill (159032) | about 4 months ago | (#47421823)

When was the last time the average person created something original?

Probably every day, BUT it does go to the point with this one. We're still trying to recreate an idealized human rather than actually focusing on what intelligence is.

My cat is undeniably intelligent, almost certainly sentient, although probably not particularly sapient. She works out things for herself and regularly astonishes me with the stuff she works out, and her absolute cunning when she's hunting mice. In fact, having recently worked out that I get unhappy when she brings mice from outside the house into my room, she now brings them into the back room and leaves them in her food bowl, despite me never having told her that that would be an acceptable place for her snacks.

But has she created an original work? Well, no, other than perhaps artfully diabolical new ways to smash mice. But that's something she's programmed to do. She is, after all, a cat.

She'd fail the test, but she's probably vastly more intelligent in the properly philosophical meaning of the term, than any machine devised to date.

The machine's designers? (1)

Anonymous Coward | about 4 months ago | (#47421509)

Shouldn't it be "nobody can be able to explain how their original code led to this new program", instead of
  "The machine's designers must not be able to explain how their original code led to this new program"?

Lovelace Test (1)

Anonymous Coward | about 4 months ago | (#47421511)

...you keep using that test...I do not think it tests what you think it does....

Lovelace Test (0)

Anonymous Coward | about 4 months ago | (#47421739)

You say that as if you are a machine regurgitating prescribed one-liners.

Re:Lovelace Test (1)

sjames (1099) | about 4 months ago | (#47421899)

How do regurgitated one-lines make you feel?

Re:Lovelace Test (0)

Anonymous Coward | about 4 months ago | (#47421997)

squishy

Humans don't create original work (0)

Anonymous Coward | about 4 months ago | (#47421529)

We observe nature and imitate. Unless opium is involved in creating a randomization of that observation.

Wild Wild West or Deep Throat? (-1)

Anonymous Coward | about 4 months ago | (#47421533)

I ponder the choices.

Absurd (2)

mark-t (151149) | about 4 months ago | (#47421547)

The machine's designers must not be able to explain how their original code led to this new program

That is a flatly ludicrous requirement, far in excess of anything we would ever consider applying to determine whether even a human being is intelligent. Hell, if you were to apply that standard to human beings, ironically, many extremely intelligent people would fail, because in hindsight you can very often identify precisely how a particular thought or idea came out of a person.

Re:Absurd (5, Funny)

Roger W Moore (538166) | about 4 months ago | (#47421605)

Agreed - there is no reason to require the program be written in perl.

Re:Absurd (1)

khallow (566160) | about 4 months ago | (#47421821)

The machine's designers must not be able to explain how their original code led to this new program

That is a flatly ludicrous requirement

Why do you think that? I guess we need actual examples.

Re:Absurd (1)

Livius (318358) | about 4 months ago | (#47421979)

And if it's declared intelligent, and then someone figures out how to explain how it came up with whatever the original content is, then does it just become less intelligent?

The Turing Test has NOT been PASSED! (1)

IQGQNAU (643228) | about 4 months ago | (#47421557)

The only people fooled by Goostman's PR BS are the press and their gullible readers.

Turing Test Today and in the Future (0)

Anonymous Coward | about 4 months ago | (#47421577)

artificial-intelligence.com/comic/4 [artificial...igence.com]

How many humans can truly pass this test? (0)

Anonymous Coward | about 4 months ago | (#47421601)

How many humans can truly pass this test?

Mostly we're 7 billion 'somewhat intelligent' monkeys combinatorially iterating over ideas and observations that have become cultural knowledge.

I could go for some rigorous Lovelace testing (-1)

Anonymous Coward | about 4 months ago | (#47421611)

As in "Deep Throat"

Evolutionary algorithms (3, Insightful)

The Evil Atheist (2484676) | about 4 months ago | (#47421615)

I do recall reading a while back about experiments done with AI in which programs compete for resources by generating programs to do tasks given to them (computing sums, etc.). Some programs generated code that was completely unexpected.

It raises the question of whether evolved programs are designed by the programmer, by the program, or by the process of evolution. And it also raises the philosophical question of whether we should be more humble and accept that the "creativity" we think makes humans intelligent could be nothing more than a process of the evolution of ideas (I hesitate to use the word meme) that we don't actually originate or control.

If we consider programs that can create things through evolution to be "intelligent", that would ironically make natural selection intelligent, since DNA is a digital program that is evolved into complex things over time that can't be reduced to first principles.
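For readers who haven't seen one, the evolutionary setup described above can be sketched in a few lines. The fitness function here (counting 1 bits, the classic "OneMax" toy problem) and all parameters are illustrative stand-ins, not the experiments the parent recalls:

```python
import random

def evolve(genome_len=32, pop_size=40, generations=100, mut_rate=0.02, seed=1):
    """Toy genetic algorithm: bit-string genomes compete on fitness,
    survivors recombine and mutate, and good traits accumulate."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # OneMax: count of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            # Per-bit mutation: occasionally flip a bit.
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Nothing in the code "knows" what a good genome looks like; selection pressure alone drives the population toward all-ones, which is the sense in which evolved solutions aren't straightforwardly designed by the programmer.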

Goal Post: Mysticism (5, Insightful)

Altanar (56809) | about 4 months ago | (#47421621)

The machine's designers must not be able to explain how their original code led to this new program.

Whoa, whoa, whoa. I have a severe problem with this. This is like looking at obscurity and declaring it a soul. The measure of intelligence is that we can't understand it? Intelligence through obfuscation? There should be no way for a designer not to be able to figure out why their machine produced what it did, given enough debugging.

Re:Goal Post: Mysticism (3)

ornil (33732) | about 4 months ago | (#47421673)

The way I interpret the test is that the output must not have been intended to be produced by some pre-programmed process -- not that you couldn't debug it, since undebuggability would only be plausible on something like a quantum computer.

On the other hand, I claim that if I train a neural network on some sheet music, it would be able to produce a new melody. And that melody would not be in any way pre-programmed (just as a child learning from experience is not pre-programmed), and it would be original. Where can I collect my prize?
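A toy version of that claim, with a Markov chain standing in for the poster's neural network: transition statistics are "learned" from a few training tunes, and sampling produces a melody that was never explicitly written into the program. The note names and training tunes below are invented for illustration:

```python
import random

def train(melodies):
    """Learn note-to-note transition statistics from example melodies."""
    table = {}
    for melody in melodies:
        for cur, nxt in zip(melody, melody[1:]):
            table.setdefault(cur, []).append(nxt)
    return table

def generate(table, start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = table.get(melody[-1])
        if not choices:
            break  # dead end: no observed successor for this note
        melody.append(rng.choice(choices))
    return melody

# "Training data": a few made-up tunes over the notes C..A.
tunes = [list("CDEFGFE"), list("EFGAGFE"), list("CEGEC")]
model = train(tunes)
new_tune = generate(model, "C", 8)
```

The generated sequence follows the statistics of the training tunes but need not match any of them, which is exactly the "original but learned" status the comment describes; whether that counts as passing the Lovelace Test is the open question.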

Re:Goal Post: Mysticism (2)

The Evil Atheist (2484676) | about 4 months ago | (#47421753)

Unless the panel of judges is a bunch of hipsters who will always say it sounds derivative.

Re:Goal Post: Mysticism (2, Funny)

Anonymous Coward | about 4 months ago | (#47422011)

Not if they heard it before it was cool, then the AI just sold out.

Re:Goal Post: Mysticism (1)

K. S. Kyosuke (729550) | about 4 months ago | (#47421677)

There should be no way for a designer to not be able to figure out why their machine produced what it did given enough debugging.

Well... [slashdot.org]

Re:Goal Post: Mysticism (1)

dbIII (701233) | about 4 months ago | (#47422141)

This is like looking at obscurity and declaring it a soul

That's the undergraduate view of AI that gets repeated at times in this place.

The measure of intelligence is that we can't understand it?

Not just yet. So instead of waiting until years of work are done understanding the physical basis of thought, the impatient want some sort of measure now.

Re:Goal Post: Mysticism (0)

Anonymous Coward | about 4 months ago | (#47422145)

I had a similar reaction to yours, but then I realized that this criterion is basically met by artificial neural networks. They are designed, we know why they work, and we can prove mathematically that they tend to converge on a solution. But once a solution is learned, it may be difficult to explain how that particular network does what it does, since its behavior is emergent from many simple perceptrons (or other models).
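Even the smallest case illustrates this point. Below, a single perceptron is trained on the AND function; the learning rule is fully understood and provably converges, yet the final weights are just numbers that don't "explain" the behavior in any human-readable way. The data, learning rate, and epoch count are an illustrative toy:

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Classic perceptron learning rule on (inputs, target) pairs."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            # Nudge weights toward the target on every mistake.
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Truth table for AND as training data.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND)
predict = lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

The trained model classifies all four cases correctly, but nothing about the particular values of `w0`, `w1`, and `b` reads as "this computes AND"; at scale, with millions of weights, that opacity is the emergent behavior the comment describes.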

MMX and Zero (0)

Anonymous Coward | about 4 months ago | (#47422155)

Yeah, that's one of those things about Mega Man X that really went off on a tangent. I mean, at some level it makes sense: the halting problem should mean that reploids shouldn't be able to analyze other reploids. The problem, of course, was that it was only X and Zero (technically, neither reploids) who couldn't be fully analyzed -- and at least Dr. Cain wasn't able to do it either, with the whole "sympathy circuit" thing being a large part of the whole maverick problem*, although Dr. Cain wasn't even a master of cybernetics but merely an archaeologist, and obviously Dr. Light and Wily analyzed their own creations, so it stands to reason that humans could do it too (whether Ceil's Copy X was exact... well, by technical canon it was).

In the end, it all speaks about trying to somehow try to differentiate on a point that doesn't really matter. You could still end up with a Philosophical zombie [wikipedia.org] . Yet for the purposes of AI, the issue is almost entirely about how indistinguishable the AI is from a "human"--really, a sufficiently advanced sapience not whether there's any sentience. And that's what the Turing test is fundamentally about. The reason the Turing test has so far failed us is that people keep wanting to use a crippled test so their pet AI can win a rigged game.

*At its core, being a maverick was not merely an issue of the maverick virus but of machine intelligence having become sufficiently advanced that it could choose to kill humans or otherwise place its own existence above serving man. Dr. Light either didn't sufficiently consider this and strove to make X "perfectly safe [for humans]" -- which only adds up if Dr. Light had the forethought that X would be copied or might don an overlord suit (which Copy X's Seraphim form suggests was always there) -- or he forever felt X would be inferior to humans and hence deserving of placing all humans above his own life.

The last part could make someone a hero if it were something of choice. But to enshrine it as a fact... And so it goes that what one considers a soul and what is free will are at odds. Or there's no such thing as a soul? :)

Humans fail this (0)

Anonymous Coward | about 4 months ago | (#47421625)

With some basis statistics, physics, chemistry and eventually some evolutionary psychology, I can explain how they came up with this test based on the original input (a bunch of hydrogen and some time).

Also, I've made programs that passed this test at least as much as I pass the test. Any program with bugs I can't figure out qualifies (I work in graphics, bugs = modernist paintings in a lot of cases).

I've also had such things happen when playing with fractal rendering algorithms: I write up some interesting algorithm, and get some new unexpected output. Usually I can figure it out eventually, but if I can't then its intelligent?

If my computer blue-screens and even Microsoft can't understand why, it's intelligent?

Even basic neural networks solve uninteresting problems in ways we don't really understand. We know how they got there (physics) but not the real source of the emergent behavior.

The definition presented here is just useless: it's so open to interpretation that it could include basically everything or nothing.
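The fractal anecdote above is easy to reproduce. Below is a minimal, illustrative escape-time sketch (Python; not the poster's actual code) in which a one-line tweak to the update rule changes the output in ways that are hard to predict from reading the source:

```python
# Escape-time iteration: count steps until |z| exceeds 2.
# The optional `tweak` term is a one-line change that alters
# the picture in ways that are hard to predict from the code.
def escape_time(c, tweak=0.0, max_iter=50):
    z = 0j
    for i in range(max_iter):
        if abs(z) > 2:
            return i
        z = z * z + c + tweak * z.conjugate()
    return max_iter

# Coarse ASCII render of the region [-2, 1] x [-1.2, 1.2].
def render(tweak=0.0, width=40, height=20):
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            c = complex(-2 + 3 * x / width, -1.2 + 2.4 * y / height)
            row += "#" if escape_time(c, tweak) == 50 else " "
        rows.append(row)
    return "\n".join(rows)

print(render(0.0))  # the familiar Mandelbrot silhouette
print(render(0.3))  # an "unexpected" variant from one changed term
```

Nobody would call the second render intelligent just because its shape wasn't anticipated, which is the commenter's point.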

Frustration (0)

Anonymous Coward | about 4 months ago | (#47421641)

Computers create it every single day.

Most of the programs I write... (2)

RobertJ1729 (2640799) | about 4 months ago | (#47421649)

Most of the programs I write produce stuff I can't explain.

If it's as easy as that "Turing Test" was... (1)

Quinn_Inuit (760445) | about 4 months ago | (#47421653)

...then all the computer will have to do is string together a series of random English words till it puts together something that sounds like a short story written by a Hungarian first-grader for whom English is a second language.

I don't care what they call the test. It's useless if the grading rubric is rigged to allow any idiot to write something that passes. Now, if you'll excuse me, I'm going to go see if I can talk ELIZA into writing me something that would function as an epistolary novel.

Already happened? (2)

K. S. Kyosuke (729550) | about 4 months ago | (#47421655)

The machine's designers must not be able to explain how their original code led to this new program.

If I'm not mistaken, this has already happened when evolutionary algorithms were applied to hardware design: some slides [www-verimag.imag.fr] . The author of the program has no idea how the resulting circuit worked [bcs.org] .
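For readers unfamiliar with the technique, here is a toy genetic algorithm in the same spirit (illustrative Python only; Thompson's actual experiment evolved FPGA configurations scored on physical hardware, and the target bitstring here is made up):

```python
import random

# Toy genetic algorithm: evolve a bitstring toward a fixed target.
# (Illustrative only; real evolvable-hardware work scores candidate
# circuits on a physical device rather than against a known answer.)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    # Number of positions matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=100, mutation=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]       # single-point crossover
            child = [g ^ (rng.random() < mutation) for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The selection/crossover/mutation loop is fully deterministic code, yet the particular solution it converges on was never written down by the programmer--which is roughly the sense in which evolved circuits surprised their designers.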

Re:Already happened? (2)

geekoid (135745) | about 4 months ago | (#47421679)

It's actually happened a lot; it's called 'emergent behavior'. The paper is old, poorly thought out, and written by people who want other people to think they are smart, but who aren't actually smart enough to do science, you know: philosophers.

remember kids: philosophers are to science what homeopaths are to medicine.

Re:Already happened? (1)

K. S. Kyosuke (729550) | about 4 months ago | (#47421709)

I know what emergent behavior is, I was merely making the point that it has already been observed in software systems and that it (at least from my POV) satisfies these requirements. (And what exactly is poorly thought out about Thompson's research?)

Re:Already happened? (1)

Culture20 (968837) | about 4 months ago | (#47421737)

people who want other people to think they are smart, but aren't actually smart enough to do science, you know: philosophers.

remember kids: philosophers are to science what homeopaths are to medicine.

And also remember that anyone with a Ph.D. in a science field isn't a scientist. They're a doctor of philosophy. Without philosophy, science doesn't exist.

Re:Already happened? (2)

The Evil Atheist (2484676) | about 4 months ago | (#47421777)

Without science, philosophy is useless. Philosophers have a bad habit of treating things as binary true or false, as if statistical answers were not acceptable. No philosopher I know has made any sense of quantum mechanics or natural selection so far; in modern times they are completely beholden to science. The only philosophy worth pursuing these days is the philosophy of science itself, but even that is hitting its limits. I've been in too many debates where philosophers try to label science as "logical positivism" or some other ridiculous mischaracterization. Even science must now be looked at under a scientific lens: figure out what science actually is by looking at what scientists actually do, rather than by imposing philosophical strawmen.

Re:Already happened? (1)

Culture20 (968837) | about 4 months ago | (#47421863)

Without science, philosophy is useless.

Philosophy created science without science's help.

Re:Already happened? (0)

Anonymous Coward | about 4 months ago | (#47421901)

At best it codified science, but people were doing research well before spoken language.

Re:Already happened? (1)

K. S. Kyosuke (729550) | about 4 months ago | (#47422049)

Are you sure about that? I thought we were pretty sure that human speech is virtually required for any advanced cognition, at least for anything one would call "research".

Re:Already happened? (1)

The Evil Atheist (2484676) | about 4 months ago | (#47422091)

Except "human speech" can be anything. Complex language started out simple and people were experimenting then as well. Learning how to make fire and tools requires experimentation that would approach what we would call research.

Re:Already happened? (2)

The Evil Atheist (2484676) | about 4 months ago | (#47422073)

Bollocks.

Science was created because philosophy couldn't cut it. Galileo didn't bother trying to figure out the philosophical underpinnings of things rolling down planks or pendulum swings or the moons of Jupiter. He went straight to observations.

Already been done (1)

geekoid (135745) | about 4 months ago | (#47421657)

A computer infected with a worm and a virus led to the two combining into a new program.

It was better and unique.

Computer Chess (4, Insightful)

Jim Sadler (3430529) | about 4 months ago | (#47421683)

Oddly, computer chess programs may already meet this criterion. The programs usually apply a weight or value to a move, and a weight and value to the consequences downstream of that move. But there are times when the consequences are of equal value at some event horizon, and a random choice must be made. As a consequence, sequences of moves may be played that no human has ever made and that the programmer could not really have predicted either. As machines have gotten more capable, the event horizon has moved to a deeper level. We might even reach the point at which only the player with white can ever hope to win and the player with black must always lose. We are in no danger of a human ever being able to do that unless we alter his brain.
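The tie-breaking described above can be sketched in a few lines (illustrative Python; the move names and evaluation values are made up):

```python
import random

# When several moves evaluate identically at the search horizon,
# pick one at random -- so the engine can produce lines its
# programmer never specifically anticipated.
def best_move(moves, evaluate, rng=random):
    scores = {m: evaluate(m) for m in moves}
    top = max(scores.values())
    candidates = [m for m, s in scores.items() if s == top]
    return rng.choice(candidates)

# Hypothetical evaluation: two moves tie at the horizon.
evals = {"Nf3": 0.3, "d4": 0.3, "a3": -0.1}
move = best_move(list(evals), evals.get)
print(move)  # either "Nf3" or "d4", chosen at random
```

The randomness here is the only "unpredictable" part; everything else is a plain argmax the programmer can fully explain.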

Re: Computer Chess (1)

eric31415927 (861917) | about 4 months ago | (#47421903)

Chess algorithms are a measure of the budding ability of programmers.

When you can write a chess algorithm that can beat yourself at chess, ...
(When you can snatch the pebble from my hand, ... [Kung Fu])

Computer Chess (1)

Anonymous Coward | about 4 months ago | (#47421977)

The programmer of a chess AI knows how it reached its decision. For instance, if it uses minimax, the explanation would be along these lines: "on step 1, the evaluation function found x0 for move y0, ..., xn for move yn. It selected move yk since no xi is greater than xk." In some cases explaining the behavior may be difficult, but if you spend enough time with traces you'll find the why eventually.

Besides, if simply being unable to explain how a program works made it intelligent, we would be ruled by AIs by now. If all it took to solve problems was ignorance, we'd have run out of problems to solve long ago.

Chess no let's play global thermonuclear war (1)

Joe_Dragon (2206452) | about 4 months ago | (#47421987)

what side do you want?

Re:Computer Chess (0)

phantomfive (622387) | about 4 months ago | (#47422065)

lol their test is beaten by a random number generator. Oh well.

How many questions can YOU beg in one definition? (1)

jeffb (2.718) (1189693) | about 4 months ago | (#47421685)

What's a "program" ("anything")?

What does it mean to be "engineered to produce" one?

What's a "hardware fluke"?

What constitutes "explanation" of how it was done?

Not. Even. Wrong.

I predict (0)

Anonymous Coward | about 4 months ago | (#47421723)

I predict that the Turing test will be passed (truly and officially) well before the Turing test itself will be proven to actually be meaningful.

The Lovelace Test :( (-1)

Anonymous Coward | about 4 months ago | (#47421745)

The Lovelace test should be outlawed as a direct threat to the human species.

Hell, Eliza had me going for a bit. (1)

Trax3001BBS (2368736) | about 4 months ago | (#47421759)

Till it hit me that it was just looking for keywords to continue on; yes, I was new.
http://en.wikipedia.org/wiki/E... [wikipedia.org] the Doctor is in...

Monkeys (-1)

Anonymous Coward | about 4 months ago | (#47421881)

Million Monkeys write Shakespeare method.

The program generates random strings and then feeds them into the compiler. When one successfully compiles it has beaten the test, but that is not what we would call intelligence.
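A sketch of that generator (illustrative Python, using Python's own built-in compile() as the "compiler"):

```python
import random
import string

# "Million monkeys" generator: emit random strings and keep the
# first one the compiler accepts. Nearly all candidates fail to
# parse; the survivor is typically a meaningless identifier-like
# token, not anything intelligent.
def monkey(attempts=10000, length=8, seed=0):
    rng = random.Random(seed)
    for _ in range(attempts):
        candidate = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            compile(candidate, "<monkey>", "exec")
            return candidate
        except (SyntaxError, ValueError):
            continue
    return None

print(repr(monkey()))
```

With a fixed seed this reliably finds a syntactically valid string fairly quickly, which illustrates the point: merely compiling is a far lower bar than intelligence.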

Why I like programming (1)

mrprogrammerman (2736973) | about 4 months ago | (#47421905)

One of the things I love about programming is the moment you have to remind yourself that your program is simply executing the algorithms you gave it. Depending on how clever the algorithms are, it can appear as if the computer is thinking for itself. Programming lets you encode intelligence in non-thinking machines.

seriously bad test (1)

bloodhawk (813939) | about 4 months ago | (#47421907)

"The machine's designers must not be able to explain how their original code led to this new program." I know plenty of programmers who can't explain how the hell their code managed to produce certain results, and trust me, it has nothing to do with their servers mysteriously developing AI.

The meta-turing test (1)

swm (171547) | about 4 months ago | (#47421947)

The meta-Turing test counts a thing as intelligent if it seeks to devise and apply Turing tests to objects of its own creation.
--Lew Mammel, Jr.

Re:The meta-turing test (0)

Anonymous Coward | about 4 months ago | (#47422033)

wait, isn't that more like a recursive Turing test or something...a meta Turing test would be more like if the thing being Turing tested starts debating the validity of the Turing test to determine intelligence or some shit.

Asimov already covered this... (3, Insightful)

dlingman (1757250) | about 4 months ago | (#47421951)

http://en.wikiquote.org/wiki/I... [wikiquote.org] Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a... canvas into a beautiful masterpiece? Sonny: [With genuine interest] Can you?

loopmuch? (0)

Anonymous Coward | about 4 months ago | (#47421953)

We do realize that brains are computers, right? AI means intelligent machines designed by intelligent machines. It's a 10 GOTO 10 line.

No one is passing the Turing Test (2)

quantaman (517394) | about 4 months ago | (#47421961)

Just because someone sets some random people up for a five minute interview with a chatbot doesn't mean they're running a Turing Test.

Give people enough time to conduct a proper conversation, hell give them time to ask the chatbot for some original content. Do that and you'll be running a real Turing Test.

The reason you keep hearing about these simplified Turing Tests is that they are the only tests people run, because they are the only tests computers can pass. But passing a true Turing Test is still a great standard for detecting real AI, and something no one can even approach doing yet.

Hypothesis generator plus motivation calculator (1)

mburns (246458) | about 4 months ago | (#47422003)

Combine a hypothesis generator with a motivation calculator and a theorem prover. Such a combination has long since been shown to be able to regenerate number theory without further supervision by humans.

must be a black box! (1)

AndyCanfield (700565) | about 4 months ago | (#47422021)

The great thing about the Turing test was that it was a black box. It did not depend on assumptions about what the designers knew, what hardware was used, or the like. And so far the only test trials I have heard of have been carefully arranged, one on one. Give us a dozen Ukrainian teenagers and pick the one (or two) who are non-human; that's a better test run.

But, of course, the ultimate test of machine intelligence is when the computer can sue your ass off and win in the Supreme Court.

Which Lovelace? (1)

AndyCanfield (700565) | about 4 months ago | (#47422025)

Ada Lovelace or Linda Lovelace? I volunteer for the Linda Lovelace test.

Well... (1)

readin (838620) | about 4 months ago | (#47422117)

A guy told me some 20 years ago that he read about an artificial life experiment in which a specially designed operating system was created to allow programs to execute code and, like computer viruses, reproduce themselves while competing for the resources to do so. He said the result was a program that copied itself very efficiently in a manner that the researchers found very hard to understand and was totally unexpected.

Sadly, he couldn't recall the details or the name of the experiment, but if what he says is true, did it pass the Lovelace test? It certainly seems like something that could have occurred given the capabilities of computers at the time.