
Loebner Talks AI

timothy posted more than 5 years ago | from the well-tell-him-ai-back dept.

Programming

Mighty Squirrel writes "This is a fascinating interview with Hugh Loebner, the academic who has arguably done more to promote the development of artificial intelligence than anyone else. He founded the Loebner Prize in 1990 to encourage that development by asking developers to create a machine which passes the Turing Test — meaning it responds in a way indistinguishable from a human. The latest running of the contest is this weekend, and this article shows what an interesting and colourful character Loebner is."


107 comments


Don't forget Steve Furbur (5, Informative)

jd (1658) | more than 5 years ago | (#25341835)

He is the genius who brought the UK the BBC Micro, and is now studying the relationship between AI and biological neurons. His comments on the BBC website [bbc.co.uk] make very interesting reading regarding the problems facing AI and computer intelligence.

Re:Don't forget Steve Furbur (-1, Troll)

Anonymous Coward | more than 5 years ago | (#25341903)

How can you study the relationship between something that doesn't exist and biological neurons? Maybe he should worry about getting the first variable initialized before running a comparison. Nobody likes NULL pointers.

Re:Don't forget Steve Furbur (5, Insightful)

Anonymous Coward | more than 5 years ago | (#25341997)

Intelligence is not an arbitrary point on a line; there are varying degrees.

Re:Don't forget Steve Furbur (1, Insightful)

Venik (915777) | more than 5 years ago | (#25344297)

It's either intelligence or it's not. The issue of varying degrees of intelligence should not concern AI developers at this time. They'll have to cross that bridge when they come to it. Right now they can't even see the bridge. I always found this intriguing: why would your average comp-sci specialist think he can recreate in code such a uniquely biological phenomenon as intelligence? Why not start with something simpler? Like, maybe, have one Vista PC fuck another to produce a new service pack?

Re:Don't forget Steve Furbur (1)

nine-times (778537) | more than 5 years ago | (#25347271)

It's either intelligence or it's not. The issue of varying degrees of intelligence should not concern AI developers at this time. They'll have to cross that bridge when they come to it

I don't agree-- I think they should definitely be thinking about the degrees of intelligence. Part of the problem with attempts to make AI, it seems to me, is that people have wasted a certain amount of time by getting ahead of themselves. You can't go from nothing to human-level intelligence through programming alone. People have rightly (IMO) realized that, if you ever want to develop real AI, you'll have to start simple, like trying to develop insect-level intelligence, and you have to look at how biological systems work as a model.

Personally, I think that you'll never see a real AI until someone can figure out how to give a machine desires/drives resembling those of biological organisms (e.g. hunger, sex, survival).

Re:Don't forget Steve Furbur (0)

Anonymous Coward | more than 5 years ago | (#25349347)

Like, maybe, have one Vista PC fuck another to produce a new service pack?

Ouch, woulda got 5+ instead of 2+ without that line. Nice to see the anger burning the right bushes though.

Re:Don't forget Steve Furbur (2, Insightful)

jd (1658) | more than 5 years ago | (#25342047)

BBC Basic always initialized variables on first use, so you're ok.

FASCINATING! (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#25341859)

JIM!

Re:FASCINATING! (0, Offtopic)

Yvan256 (722131) | more than 5 years ago | (#25342109)

How about "It's life, Jim, but not as we know it."?

It's not life until... (0, Offtopic)

thetoadwarrior (1268702) | more than 5 years ago | (#25341869)

Until it asks to see my computer with its door off, showing its top-end bits, it's not acting human.

Seriously if I jerk off to this hardware then a computer should!

Too much information (3, Insightful)

Anonymous Coward | more than 5 years ago | (#25342779)

Until it asks to see my computer with its door off, showing its top-end bits, it's not acting human. Seriously if I jerk off to this hardware then a computer should!

That's a little bit too much personal information there, sport. You don't have to post to Slashdot every thought that pops into your head, you know.

This is news for nerds? (5, Informative)

malajerry (1378819) | more than 5 years ago | (#25341871)

Hardly a fascinating interview; more like 4 paragraphs and a soundbite or two. If you haven't read TFA, don't bother.

Re:This is news for nerds? (0, Offtopic)

jd (1658) | more than 5 years ago | (#25341915)

Which is why I think some of the other interviews being published on AI are far more interesting. But that could just be me.

Re:This is news for nerds? (5, Funny)

Anonymous Coward | more than 5 years ago | (#25341921)

if you haven't read TFA

Way ahead of you!

Re:This is news for nerds? (0)

Anonymous Coward | more than 5 years ago | (#25341955)

I haven't even read the summary!

Re:This is news for nerds? (0)

Anonymous Coward | more than 5 years ago | (#25342015)

I'm just here for the comments!

Re:This is news for nerds? (2, Funny)

Mr Thinly Sliced (73041) | more than 5 years ago | (#25342581)

I'm just here for the comments!

I'm not even here!

Re:This is news for nerds? (1)

Hurricane78 (562437) | more than 5 years ago | (#25343303)

Hello, you tried to post in my RSS feed. I'm not here at the moment. Please leave a message after the Sig.

let me be the first to say (0, Offtopic)

doyoulikegoatseeee (930088) | more than 5 years ago | (#25341881)

FUCK THIS GUY

Re:let me be the first to say (0, Offtopic)

MobileTatsu-NJG (946591) | more than 5 years ago | (#25342525)

FUCK THIS GUY

Well.. yes.. okay, I guess you could do that as another test for Turing... um.. yeah...

Re:let me be the first to say (0)

Anonymous Coward | more than 5 years ago | (#25344885)

John Connor, is that you?

Arguably? (4, Interesting)

mangu (126918) | more than 5 years ago | (#25341911)

the academic who has arguably done more to promote the development of artificial intelligence than anyone else

Well, I suppose someone could argue that. But it would be a pretty weak argument. I could cite at least a hundred researchers who are better known and have made more important contributions to the field of AI.

A waste of time. (5, Informative)

Anonymous Coward | more than 5 years ago | (#25342463)

The Loebner Prize is a farce. Read all about it: http://dir.salon.com/story/tech/feature/2003/02/26/loebner_part_one/index.html

Re:Arguably? (4, Insightful)

sketerpot (454020) | more than 5 years ago | (#25342747)

The purpose of the Turing test was to make a point: if an artificial intelligence is indistinguishable from a natural intelligence, then why should one be treated differently from the other? It's an argument against biological chauvinism.

What Loebner has done is promote a theatrical parody of this concept: have people chat with chatterbots or other people and try to tell the difference. By far the easiest way to score well in the Loebner prize contest is to fake it. Have a vast repertoire of canned lines and try to figure out when to use them. Maybe throw in some fancy sentence parsing, maybe some slightly fancier stuff. That'll get quick results, but it has fundamental limitations. For example, how would it handle anything requiring computer vision? Or spatial reasoning? Or learning about fields that it was not specifically designed for?

It sometimes seems that the hardest part of AI is the things that our nervous systems do automatically, like image recognition, controlling our limbs, and auditory processing. It's a pity the Loebner prize overlooks all that stuff in favor of a cheap flashy spectacle.
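The canned-line strategy the parent describes can be sketched in a few lines. This is a hypothetical toy for illustration (not any actual contest entry): scan the input for known keywords, emit a stock line, and deflect when nothing matches.

```python
# Toy keyword-matching chatterbot, ELIZA-style (illustrative sketch only).
CANNED = {
    "mother": "Tell me more about your family.",
    "computer": "Do machines worry you?",
    "hello": "Hi there! How are you feeling today?",
}
FALLBACK = "That's interesting. Please go on."

def reply(text):
    """Return the canned line for the first recognized keyword, else deflect."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    for word in words:
        if word in CANNED:
            return CANNED[word]
    return FALLBACK

print(reply("Hello, judge"))    # keyword hit: canned greeting
print(reply("What is 2 + 2?"))  # no keyword: content-free deflection
```

A system like this never models meaning at all, which is the parent's point: it scores by faking, and it collapses the moment a judge asks about vision, spatial reasoning, or anything outside its script.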

Re:Arguably? (2, Interesting)

Workaphobia (931620) | more than 5 years ago | (#25344387)

Amen. The whole point of the Turing Test was to express a functionalist viewpoint of the world, that two blackboxes with the same interface are morally and philosophically substitutable. And this whole media-fueled notion of the Turing Test as a milestone on the road to machine supremacy just muddles the point.

Re:Arguably? (0)

Anonymous Coward | more than 5 years ago | (#25344337)

I have to concur, being an actual researcher in the field. That guy is literally unknown in the community. Not on any conference program committee. Has not written a single paper at an AI conference. Ever. Not even a reviewer anywhere that I am aware of. Check http://www.ijcai.org/ [ijcai.org] for instance, or his own homepage: http://www.loebner.net/ [loebner.net] . Quite scary, actually! (Check the patent list.)

Sorry, Loebner Has Done Nothing for AI (2, Insightful)

Louis Savain (65843) | more than 5 years ago | (#25342003)

The Turing test has nothing to do with AI, sorry. It's just a test for programs that can put text strings together in order to fool people into believing they're intelligent. My dog is highly intelligent and yet it would fail this test. The Turing test is an ancient relic of the symbolic age of AI research history. And, as we all should know by now, after half a century of hype, symbolic AI has proved to be an absolute failure.

Re:Sorry, Loebner Has Done Nothing for AI (4, Insightful)

Tx (96709) | more than 5 years ago | (#25342065)

Just because the best-scoring programs to date on the Turing test are crap does not necessarily mean the test itself is not useful. Yes, it tests for one particular form of AI, but that form would be extremely useful to have if achieved. You may consider your dog highly intelligent, but I'm not likely to want to call it up and ask for advice on any given issue, am I?

Re:Sorry, Loebner Has Done Nothing for AI (2, Interesting)

Louis Savain (65843) | more than 5 years ago | (#25342143)

Intelligence, artificial or otherwise, is what psychologists define it to be. It has to do with things like classical and operant conditioning, learning, pattern recognition, anticipation, short and long term memory retention, associative memory, aversive and appetitive behavior, adaptation, etc. This is the reason that the Turing test and symbolic AI have nothing to do with intelligence: they are not concerned with any of those things.

That being said, I doubt that anything interesting or useful can be learned from writing those Loebner/Turing programs. It would be much more interesting to write programs that learn to play a good game of Go. I suggest that Loebner change his competition accordingly.

Re:Sorry, Loebner Has Done Nothing for AI (5, Insightful)

Your.Master (1088569) | more than 5 years ago | (#25342261)

Wait...who made psychologists the masters of the term "intelligence" and all derivations thereof?

No, frankly, they can't have that term. And you can't decide what is interesting and is uninteresting.

If I revealed to you right now that I'm a machine writing this response, that would not interest you at all? I'm not a machine. But the point of the Turing test is that I could, in fact, be any Turing-test beating machine rather than a human. Sure, it's a damn Chinese room. But it's still good for talking to.

Whether or not your dog has intelligence has nothing to do with this, because AI is not robot dog manufacturing.

Re:Sorry, Loebner Has Done Nothing for AI (2, Interesting)

retchdog (1319261) | more than 5 years ago | (#25342353)

It would be interesting, except that any reasonable person would conclude that either 1) you were lying; or 2) you (the machine) were following a ruleset which would break after just a few more posts.

AI would be better off focusing on dogs. It's actually better off focusing on practical energy-minimization and heuristic search methods, which would be comparable in intelligence to, say, an amoeba. Going for human-level intelligence right now is like getting started in math by proving the Riemann hypothesis.

Re:Sorry, Loebner Has Done Nothing for AI (1)

bogwoppit (1181803) | more than 5 years ago | (#25349049)

This is a valid point. Arguably the primary aim of a prize is to establish motivation towards achieving a goal. If the goal of the prize is to create computer programs that can fool humans into thinking they are talking to other humans (hopefully making more sense than your average YouTube comment), fine. But it's unlikely to do anything for advancing "artificial intelligence", for the reason that efforts are clearly converging on a "local minimum": sentence analysis and programmed responses.

Achieving something like "real" adaptive learning is not likely to come about in one massive jump, but rather through small steps of evolution. Unfortunately, these steps are not rewarded in any way by this prize.

It would be good to have a test that does satisfy this goal, the problem is that it would be very difficult to design. The reason the Turing test is "successful" is that it can assess across the whole range of ability - programs initially perform poorly, fooling someone for a few sentences, then get better and better, fooling more and more people for longer periods. To do something similar for "intelligence" is going to be difficult or impossible. There are similar tests for specific fields of robotics though - adaptive navigation for instance.

Re:Sorry, Loebner Has Done Nothing for AI (2, Insightful)

ShakaUVM (157947) | more than 5 years ago | (#25342341)

>>Intelligence, artificial or otherwise, is what psychologists define it to be

No, it's not. Or at least it shouldn't be, given how psychology is a soft science, where you can basically write any thesis you want (rage is caused by turning off video games!), give it a window dressing of stats (you know, to try to borrow some of the credibility of real sciences), and then send it off to be published.

Given how much blatantly wrong stuff is found in psychology, like Skinner's claim that a person can only be intelligent when reacting to outside stimulus (really? I can't think while I'm by myself?), I'd try and steer far away from giving psychologists the ability to define what intelligence is.

Re:Sorry, Loebner Has Done Nothing for AI (1)

ClassMyAss (976281) | more than 5 years ago | (#25347241)

Intelligence, artificial or otherwise, is what psychologists define it to be.

Not likely - IMO, intelligence will always be defined as "whatever humans can do that machines can't." When machines can pass the Turing test, people will start to shift and think of emotions as the most important quality of intelligence; if machines appear to have emotions, people will just assert that machines have no intuition; from there, we can start to argue over whether machines have subjective experience of the world, at which point a lot of people will just assert "No!" and we'll argue forever.

Of course, by that time it will be irrelevant, as we'll be the inferior ones and the computers will really be in charge of things. So it will fall to us to convince them that we need to have rights, not the other way around.

Re:Sorry, Loebner Has Done Nothing for AI (0)

Anonymous Coward | more than 5 years ago | (#25342373)

If you live near a coast you may want to consider asking your pet's advice on how to avoid impending doom [nationalgeographic.com]. They seem to have a much better handle on it than we do. Possums get stunned by car headlights and don't move, and we assume they're not intelligent, yet our species sits on the beach when giant waves of destruction are coming. The difference is that if anything gets in front of a speeding car it's going to have a tough time avoiding it. Apparently that's not the case with tsunamis. I do agree with you that a computer that passes a Turing test would be somewhat useful. Maybe then we could get some consistent answers from customer service.

Re:Sorry, Loebner Has Done Nothing for AI (2, Interesting)

grumbel (592662) | more than 5 years ago | (#25342413)

Yes, it tests for one particular form of AI, but that form would be extremely useful to have if achieved.

The problem I have with the test is that it isn't about creating an AI, but about creating something that behaves like a human. Most AI, even if highly intelligent, will never behave anything like a human, simply because it's something vastly different, built for very different tasks. Now one could of course try to teach that AI to fool a human, but then it's simply a game of how well it can cheat, not something that tells you much about its intelligence.

I prefer things like the DARPA Grand Challenge, where the goal isn't to create something that behaves like a human, but simply something that gets the given task done. That way you can slowly raise the bar instead of setting a goal where you don't even know if there is a point in chasing it. The Turing test feels too much like a challenge to fly by sticking wings to your arms: you might be able to do that one day, but the aviation industry doesn't really care; jumbo jets are flying fine without flapping their wings.

Re:Sorry, Loebner Has Done Nothing for AI (2, Funny)

cerberusss (660701) | more than 5 years ago | (#25344149)

Most AI, even if highly intelligent, will never behave anything like a human, simply because it's something vastly different, built for very different tasks.

Yep. Like hunting down and destroying said human.

Re:Sorry, Loebner Has Done Nothing for AI (1)

Jugalator (259273) | more than 5 years ago | (#25344881)

You may consider your dog highly intelligent, but I'm not likely to want to call it up and ask for advice on any given issue, am I?

One step at a time here though. I'd still prefer a dog compared to anything we have now. At least you can teach a dog to get your newspaper in the morning. We aren't even pushing into that field strongly yet with computers.

Re:Sorry, Loebner Has Done Nothing for AI (5, Insightful)

bencoder (1197139) | more than 5 years ago | (#25342093)

Well that's really the point of the test. Any "AI" that simply manipulates text as symbols is going to fail the Turing test. To make one that can pass the test, imho, would probably require years of training it to speak, like one would with a child. It also requires solving all the associated problems of reference - how can a deaf, blind and anesthetic child truly get a sense of what something is, so much so that they can talk about it (or type about it, assuming they have some kind of direct computer hook-up which allows them to read and write text)?

Basically, nothing's going to pass the Turing test until we have actual AI. Which is the whole point of the test!

I study AI at Reading by the way so I'll be going along to the event tomorrow morning :)

Re:Sorry, Loebner Has Done Nothing for AI (1)

uassholes (1179143) | more than 5 years ago | (#25344825)

But first you have to make a machine intelligent. Then you have to make it imitate a human. Those are not the same thing.

Re:Sorry, Loebner Has Done Nothing for AI (2, Interesting)

bencoder (1197139) | more than 5 years ago | (#25346079)

That's true, but it doesn't deny that the Turing test is capable of detecting a real AI over one that just "fakes" it. The main problem here is that when people are looking for "intelligence" they don't really know what they are looking for. The Turing test offers one solution to this problem (can it talk like a human?). It is certainly not the be-all and end-all of intelligence/thought tests and I don't think anyone ever stated it was.

Unfortunately, no one can come up with a suitable, testable definition of intelligence. Turing's answer is to say that the problem is meaningless because there can be no definition of such an abstract concept, so he devised the test as a way of showing, in a very behaviourist manner, that it doesn't matter. If it can act intelligent according to a human without bias (hence the separation between human and machine) then there's nothing that can be said to deny that the machine is intelligent. No doubt people will claim it's not, but those people will always have bias against a machine.

Re:Sorry, Loebner Has Done Nothing for AI (1)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#25342123)

I agree that "natural language ability = AI" is incorrect; but knowing how to process and generate plausible natural language still seems like a worthy goal, and quite useful for a fair number of things ("as though millions of call-center drones cried out in terror, and were suddenly fired...").

Being able to draw novel inferences, being able to deal with imperfect sensor data in complex and unpredictable environments, and other tasks are also important AI challenges, and might well have nothing to do with the Turing test; but that doesn't mean that the Turing test is useless.

As you say, dogs know jack-all about natural language work, but are quite clever at a wide variety of other stuff. We aren't hoping for dog-level AI (though it would be very useful for robotics); we are hoping for human-level AI, which includes (but is not limited to) natural language skill.

Stop whining. (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#25342131)

nt

Re:Sorry, Loebner Has Done Nothing for AI (2, Funny)

jbsooter (1222994) | more than 5 years ago | (#25342133)

I always imagine the first, and last, computer that will pass the Turing Test will be the one explaining to us that it has taken over the world because we aren't intelligent enough to run it ourselves. :P It will end the conversation with "You really didn't see this coming? What a bunch of idiots."

Re:Sorry, Loebner Has Done Nothing for AI (2, Funny)

Your.Master (1088569) | more than 5 years ago | (#25342287)

But you did see it coming. And it's on the Internet, which such a machine would have much easier access to and could search much more instantaneously than we can. Which means its failure to notice this prediction is a sign of laziness and/or intellectual defect.

Which gives the human race hope.

Re: "Taken over & run the world" (1)

TaoPhoenix (980487) | more than 5 years ago | (#25343873)

"When Harlie Was One" - David Gerrold.

You are sufficiently flip for me to assume you are reinventing a round transportation object rather than cribbing.

Re:Sorry, Loebner Has Done Nothing for AI (2, Interesting)

geomobile (1312099) | more than 5 years ago | (#25342155)

Isn't that just the point of the Turing test: if you can fool people into believing you're intelligent, then you are intelligent? There's no way to tell if something is intelligent apart from its behavior. Lab conditions using language (string manipulation) were probably chosen because the amount of context and the variety of problems that could be encountered during the test are only solvable by humans (until now), and only using that which we call intelligence.

I know this was true for playing chess at a decent level twenty years ago. Ok, probably we'll never accept the intelligence of anything the inner workings of which we understand completely.

But that is not a problem of the test, it is a problem of our willingness to define intelligence. So, no proof of artificial intelligence possible for us, ever. News at eleven.

At least the dog argument doesn't hold. The Turing test does not claim to be able to define all types of intelligence. Also: future dog-driven string manipulation still possible.

Re:Sorry, Loebner Has Done Nothing for AI (3, Funny)

thrillseeker (518224) | more than 5 years ago | (#25342387)

if you can fool people into believing you're intelligent, then you are intelligent?

as my ethics teacher said, "Sincerity is the most important thing ... once you can fake that, you've got it made."

Re:Sorry, Loebner Has Done Nothing for AI (0)

Anonymous Coward | more than 5 years ago | (#25345591)

"No way to tell if something is intelligent apart from its behavior."

I don't know about that. Jeff Hawkins thinks that intelligence is not defined by behavior but by the accuracy of the predictions it makes about the world -- something that is implicit in our test taking and such but is not recognized formally.

Re:Sorry, Loebner Has Done Nothing for AI (2, Informative)

ClassMyAss (976281) | more than 5 years ago | (#25347267)

Jeff Hawkins thinks that intelligence is not defined by behavior but by the accuracy of the predictions it makes about the world

Careful - Hawkins doesn't just think the predictions about the world are important, he thinks that the real magic comes when the system tries to predict its own behavior. Without that self-referential prediction, the essential non-linearity that intelligence and perception requires is not present.

Whether this is enough is another matter...

Re:Sorry, Loebner Has Done Nothing for AI (4, Insightful)

jd (1658) | more than 5 years ago | (#25342169)

The Turing Test, as classically described in books, is not that useful, but the Turing Test, as imagined by Turing, is extremely useful. The idea of the test is that even when you can't measure or define something, you can usually compare it with a known quantity and see if they look similar enough. It's no different from the proof of Fermat's Last Theorem that compared two types of infinity because you couldn't compare the sets directly.

The notion of the Turing Test being simple string manipulation dates back to using Eliza as an early example of sentence parsing in AI classes. Really, the Turing Test is rather more sophisticated. It requires that the machine be indistinguishable from a person, when you black-box test them both. In principle, humans can perform experiments (physical and thought), show lateral thinking, demonstrate imagination and artistic creativity, and so on. The Turing Test does not constrain the judges from testing such stuff, and indeed requires it as these are all facets of what defines intelligence and distinguishes it from mere string manipulation.

If a computer cannot demonstrate modeling the world internally in an analytical, predictive -and- speculative manner, I would regard it as failing the Turing Test. Whether all humans would pass such a test is another matter. I would argue Creationists don't exhibit intelligence, so should be excluded from such an analysis.

Re:Sorry, Loebner Has Done Nothing for AI (1, Flamebait)

Louis Savain (65843) | more than 5 years ago | (#25342995)

I would argue Creationists don't exhibit intelligence, so should be excluded from such an analysis.

This being Slashdot and all, the bastion of atheist nerds, I would argue that your comment was modded up primarily because of that last sentence. The fact remains that the Turing test is a stupid test. Passing the test would prove absolutely nothing other than that a programmer can be clever enough to fool some dumb judges. Computer programs routinely fool people into believing they're human. Some of the bots on usenet do it all the time. Big deal.

Like I said, intelligence is about pattern recognition, operant and classical conditioning, anticipation, goal-directed and adaptive behavior. Over the last century, psychologists have perfected excellent procedures to test for those things. Use those tests instead. Anything else is spinning your wheels; or just another way of worshiping Alan Turing, a man who really contributed nothing interesting or useful to the field of AI that anybody can point to.

The truth is that Turing is a false god. Even the so-called Turing machine is completely useless as far as helping with the really nasty problems (e.g., the parallel programming and software reliability crises) that the computer industry is currently struggling with. In fact, I would maintain that it is the academic community's infatuation with the Turing machine that got us into this sorry mess in the first place. I think it is time for the Turing madness to end. I always tell it like I see it.

Re:Sorry, Loebner Has Done Nothing for AI (2, Informative)

jd (1658) | more than 5 years ago | (#25343129)

Intelligence goes way beyond those limited parameters, which is why no psychologist or AI expert would claim to know what intelligence actually, fundamentally, is. Sure, it includes all of those, but there are many examples of intelligence which don't fit any of those categories, and many examples of non-intelligence which do.

Re:Sorry, Loebner Has Done Nothing for AI (1)

lysergic.acid (845423) | more than 5 years ago | (#25342215)

yea, the turing test sounds like a good idea at first, but i think it's fundamentally flawed. turing has made huge contributions to society and human knowledge, but the turing test has led AI research down a dead end.

human communication is an extremely high level cognitive ability that is learned over time. we are the only animal that demonstrates this level of intelligence, and even with humans, if speech is not learned within a small window of mental development, that individual will never learn how to communicate properly.

so while verbal communication is certainly a sign of intelligence, it's not a prerequisite, and we need to focus on achieving a more rudimentary understanding of intelligence before trying to tackle such lofty goals.

IMO neural nets seem like the way to go. if we can mimic the intelligence of a cockroach then we'll have achieved a huge breakthrough in AI. and from there, we can start to think about scaling up. but otherwise this is like trying to build a stealth bomber before you even understand the mechanics of flight.

Re:Sorry, Loebner Has Done Nothing for AI (1)

ShakaUVM (157947) | more than 5 years ago | (#25342379)

>>IMO neural nets seem like the way to go. if we can mimic the intelligence of a cockroach

But neural nets don't actually mimic the way that the brain works. They're statistical engines which are called neural nets because they are kind of hooked up in a kinda-sorta way that kinda-sorta looks like neurons if you don't study it too hard. Really, all they are are classifiers that carve up an N-dimensional space into different regions, like spam and not-spam, or missile and not-missile.

Actual neurons function totally differently. I know; I've written both.
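To make the parent's "classifier" description concrete, here is a minimal sketch of that view (a hypothetical toy, and emphatically not how biological neurons work): a single perceptron learns a line that carves 2-D space into two regions.

```python
# Single perceptron: learns weights (w, b) so that w.x + b > 0 on one side.
# This is the "carve up an N-dimensional space into regions" behavior.
def train(points, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x, y), target in zip(points, labels):
            pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
            err = target - pred                 # -1, 0, or +1
            w[0] += lr * err * x                # nudge the boundary toward
            w[1] += lr * err * y                # misclassified points
            b += lr * err
    return w, b

def classify(w, b, x, y):
    return 1 if w[0] * x + w[1] * y + b > 0 else 0

# Linearly separable toy data: label is 1 iff x + y > 1.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (0.2, 0.3), (0.9, 0.8)]
lbl = [0, 0, 0, 1, 0, 1]
w, b = train(pts, lbl)
```

The learned boundary is just a line; nothing resembling dendrites, spike timing, or neurotransmitters appears anywhere, which is the parent's complaint.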

Re:Sorry, Loebner Has Done Nothing for AI (1)

lysergic.acid (845423) | more than 5 years ago | (#25342801)

well, aren't there two types of artificial neural nets, one specifically used in AI and another for cognitive modeling? there's no need for AI neural nets to recreate all of the functions of an actual biological neural network, whereas cognitive modeling neural nets do try to realistically simulate the biological processes of the brain (i.e. the release of dopamine and its effects).

as i understand it, AI neural nets have been successfully implemented for speech recognition, adaptive control, and image analysis. sure, these are application-specific neural nets and they in no way resemble how the human brain works, but they do show that neural nets are a viable direction of research for practical AI. of course, if we want to create "true" AI then it may be important to use cognitive modeling to better understand how the brain actually works, and no doubt such research will contribute to more advanced AI neural nets in the future.

the point is, true intelligence seems to be an emergent phenomenon. you can't create true AI by simply simulating emergent behaviors such as linguistic communication. the best you can hope to achieve through such efforts is creating the digital equivalent of a talking parrot. it may give the impression of intelligence at first, but under closer scrutiny it becomes abundantly clear that it's all just a trained act.

artificial neural nets at least try to create an adaptive system, which is the basis of machine learning. it may be a while before we can create an artificial neural net as complex as a cockroach brain, but this bottom-up approach shows more promise than the top-down approach that turing-test-related research aims at achieving.

Re:Sorry, Loebner Has Done Nothing for AI (1)

Rockoon (1252108) | more than 5 years ago | (#25344211)

The bottom-up approach doesn't show any more promise.

The idea that bottom-up's good results with regard to knowledge are evidence that bottom-up is making strides in A.I. is wrong. It's making strides in knowledge representation, and that, my friend, is not A.I. Yes, some researchers and practitioners like to kid themselves into thinking they are dealing with A.I., but they are not. They certainly leverage the dead ends of the A.I. field, but that doesn't make it A.I.

A.I. is machine-based problem solving, pure and simple. There are some very successful renditions of this, such as game tree searching techniques (AlphaBeta, MTD(f), NegaScout, and so on) as well as an entire subfield of A.I. called Machine Learning.

Neural Nets are not A.I., although the training techniques used might be. Neural Nets themselves are simply function approximators that have no more problem solving ability than their training sets had.
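For a sense of how compact the game-tree techniques named above are, here is a sketch of alpha-beta pruning; the toy game tree (nested lists with numeric leaf scores) is invented for illustration:

```python
# Alpha-beta pruning over a toy game tree. A node is either a number
# (leaf score from the maximizing player's perspective) or a list of
# child nodes. Players alternate max/min by depth.
def alphabeta(node, depth, alpha, beta, maximizing):
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer would never allow this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# Root (max) with two min-nodes below it
tree = [[3, 5], [6, 9]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # 6
```

The cutoffs are what make this "problem solving" rather than brute force: whole subtrees are skipped once they provably cannot change the result.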

Re: Varying degrees on intelligence (1)

TaoPhoenix (980487) | more than 5 years ago | (#25343903)

I've been fiddling with some beyond-ultra-rough concepts using the opposite premise of "what if people are *often* as dumb as we complain they are?".

For example, low grade trolls. Because such comments collapse into faux logic, they would be hit first by Eliza programs. Collect enough MicroDomains, and eventually you converge onto a low-grade person.

Re:Sorry, Loebner Has Done Nothing for AI (3, Insightful)

ceoyoyo (59147) | more than 5 years ago | (#25342531)

You're confusing the Turing test with one class of attempts to pass it. In fact, the test has proven remarkably good at failing that sort of program.

Yes, your dog would fail the Turing test, because the Turing test is designed to test for human level intelligence.

Can they reason? (0)

Anonymous Coward | more than 5 years ago | (#25348579)

The question is not, "Can they reason?" nor, "Can they talk?" but, "Can they suffer?"

Fascinating article? (1)

Alarindris (1253418) | more than 5 years ago | (#25342025)

For those who didn't RTFA.

Guy has a contest to see if anyone can create a program that will pass the Turing test.

That's it.

Re:Fascinating article? (1)

SpinyNorman (33776) | more than 5 years ago | (#25342083)

Really! I've had farts that were more interesting, and probably also more relevant to AI.

Since when did Eliza-like chatbots become considered as AI?!

Re:Fascinating article? (1)

jd (1658) | more than 5 years ago | (#25342097)

Strong or Weak Turing Test? Makes a big difference. Especially if he wants to be credible to Real Geeks.

Re:Fascinating article? (1)

ypctx (1324269) | more than 5 years ago | (#25342283)

thanks. yawn. next story.

Re: Foolproof question (0)

Anonymous Coward | more than 5 years ago | (#25343577)

Just ask the test taker his or her sexual preference. If the response is "PNP" or "NPN", you know it's either a computer or a real person with a really perverse fetish.

Re: "Just Ask" (4, Funny)

TaoPhoenix (980487) | more than 5 years ago | (#25343925)

Really now, I wish I had a team partner, because these guys need to take a page from the chess world and buff up their Anti-Trick-Question tactics. Those questions always revolve around rapid context switching that would frankly irritate if not confuse a person as well, such as one speaking a second language. (There's a test for you! Which is the computer and which is the guy speaking his ruined French he learned 20 years ago?)

(Typical Tester fake question) "Is the Queen larger than a breadbox?"
Program: "What kind of question is that?"
Tester: "Answer the question"
Program: "Since you failed to define 'Queen' on purpose, you created a question that is simultaneously true and false, and therefore a null question. I can only assume this is some cheap-ass attempt to authenticate before you waste your remaining 7 minutes chatting with the human should you be so lucky, so I quit here and now. Ask your judge what to do if your software opponent is programmed to sulk."

What about... (0)

Anonymous Coward | more than 5 years ago | (#25342029)


What about Chris McKinstry (RIP)? He was a giant among the AI elite.

Oh wait, no, he was a kook.

read a demo recently. Has nothing on Eliza. (1)

Iowan41 (1139959) | more than 5 years ago | (#25342035)

You'd think they'd be doing better than that by now. Even just doing something like Eliza.

A pass of the Turing test should be easy.... (1)

3seas (184403) | more than 5 years ago | (#25342039)

....easier than a human proving they are human, as many people are artificial enough to fail the test or pass someones programming attempt to pass the test.

Re:A pass of the Turing test should be easy.... (1)

SpinyNorman (33776) | more than 5 years ago | (#25344813)

So maybe Loebner should lower the bar a little. First step should be to see if a chatbot can appear as intelligent as Paris Hilton (or Sarah Palin, if Paris is too tough), then they can work on the human-level intelligence later.

But can it speak chinese? (0)

Anonymous Coward | more than 5 years ago | (#25342071)

in a room?

Let's forget the turing test (3, Insightful)

Yvan256 (722131) | more than 5 years ago | (#25342117)

and use the Voight-Kampff test instead.

Re:Let's forget the turing test (2, Funny)

Anonymous Coward | more than 5 years ago | (#25342621)

Is this testing whether I'm intelligent or a lesbian?

Re:Let's forget the turing test (1)

cerberusss (660701) | more than 5 years ago | (#25344161)

And here's the Wikipedia entry on the Voight-Kampff machine [wikipedia.org].

On a related note, the great book is available as an audio book right here [thepiratebay.org].

creators promote real intelligence, spirituality (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#25342159)

the freely available newclear powered kode is way user friendly, & there are no gadgets required. you can 'play' along if you wish.

greed, fear & ego are unprecedented evile's primary weapons. those, along with deception & coercion, helps most of us remain (unwittingly?) dependent on its' life0cidal hired goons' agenda. most of yOUR dwindling resources are being squandered on the 'wars', & continuation of the billionerrors stock markup FraUD/pyramid schemes. nobody ever mentions the real long term costs of those debacles in both life & any notion of prosperity for us, or our children, not to mention the abuse of the consciences of those of us who still have one. see you on the other side of it. the lights are coming up all over now. conspiracy theorists are being vindicated. some might choose a tin umbrella to go with their hats. the fairytail is winding down now. let your conscience be yOUR guide. you can be more helpful than you might have imagined. there are still some choices. if they do not suit you, consider the likely results of continuing to follow the corepirate nazi hypenosys story LIEn, whereas anything of relevance is replaced almost instantly with pr ?firm? scriptdead mindphuking propaganda or 'celebrity' trivia 'foam'. meanwhile; don't forget to get a little more oxygen on yOUR brain, & look up in the sky from time to time, starting early in the day. there's lots going on up there.

http://news.google.com/?ncl=1216734813&hl=en&topic=n
http://www.nytimes.com/2007/12/31/opinion/31mon1.html?em&ex=1199336400&en=c4b5414371631707&ei=5087%0A
http://news.yahoo.com/s/ap/20080918/ap_on_re_us/tent_cities;_ylt=A0wNcyS6yNJIZBoBSxKs0NUE
http://www.nytimes.com/2008/05/29/world/29amnesty.html?hp
http://www.cnn.com/2008/US/06/02/nasa.global.warming.ap/index.html
http://www.cnn.com/2008/US/weather/06/05/severe.weather.ap/index.html
http://www.cnn.com/2008/US/weather/06/02/honore.preparedness/index.html
http://www.cnn.com/2008/TECH/science/09/28/what.matters.meltdown/index.html#cnnSTCText
http://www.cnn.com/2008/SHOWBIZ/books/10/07/atwood.debt/index.html
http://www.nytimes.com/2008/06/01/opinion/01dowd.html?em&ex=1212638400&en=744b7cebc86723e5&ei=5087%0A
http://www.cnn.com/2008/POLITICS/06/05/senate.iraq/index.html
http://www.nytimes.com/2008/06/17/washington/17contractor.html?hp
http://www.nytimes.com/2008/07/03/world/middleeast/03kurdistan.html?_r=1&hp&oref=slogin
http://biz.yahoo.com/ap/080708/cheney_climate.html
http://news.yahoo.com/s/politico/20080805/pl_politico/12308;_ylt=A0wNcxTPdJhILAYAVQms0NUE
http://www.cnn.com/2008/POLITICS/09/18/voting.problems/index.html
http://news.yahoo.com/s/nm/20080903/ts_nm/environment_arctic_dc;_ylt=A0wNcwhhcb5It3EBoy2s0NUE
(talk about cowardlly race fixing/bad theater/fiction?) http://money.cnn.com/2008/09/19/news/economy/sec_short_selling/index.htm?cnn=yes
http://us.lrd.yahoo.com/_ylt=ApTbxRfLnscxaGGuCocWlwq7YWsA/SIG=11qicue6l/**http%3A//biz.yahoo.com/ap/081006/meltdown_kashkari.html
http://www.nytimes.com/2008/10/04/opinion/04sat1.html?_r=1&oref=slogin
(the teaching of hate as a way of 'life' synonymous with failed dictatorships) http://news.yahoo.com/s/ap/20081004/ap_on_re_us/newspapers_islam_dvd;_ylt=A0wNcwWdfudITHkACAus0NUE
(some yoga & yogurt makes killing/getting killed less stressful) http://news.yahoo.com/s/ap/20081007/ap_on_re_us/warrior_mind;_ylt=A0wNcw9iXutIPkMBwzGs0NUE

is it time to get real yet? A LOT of energy is being squandered in attempts to keep US in the dark. in the end (give or take a few 1000 years), the creators will prevail (world without end, etc...), as it has always been. the process of gaining yOUR release from the current hostage situation may not be what you might think it is. butt of course, most of US don't know, or care what a precarious/fatal situation we're in. for example; the insidious attempts by the felonious corepirate nazi execrable to block the suns' light, interfering with a requirement (sunlight) for us to stay healthy/alive. it's likely not good for yOUR health/memories 'else they'd be bragging about it? we're intending for the whoreabully deceptive (they'll do ANYTHING for a bit more monIE/power) felons to give up/fail even further, in attempting to control the 'weather', as well as a # of other things/events.

http://www.google.com/search?hl=en&q=weather+manipulation&btnG=Search
http://video.google.com/videosearch?hl=en&q=video+cloud+spraying

'The current rate of extinction is around 10 to 100 times the usual background level, and has been elevated above the background level since the Pleistocene. The current extinction rate is more rapid than in any other extinction event in earth history, and 50% of species could be extinct by the end of this century. While the role of humans is unclear in the longer-term extinction pattern, it is clear that factors such as deforestation, habitat destruction, hunting, the introduction of non-native species, pollution and climate change have reduced biodiversity profoundly.' (wiki)

"I think the bottom line is, what kind of a world do you want to leave for your children," Andrew Smith, a professor in the Arizona State University School of Life Sciences, said in a telephone interview. "How impoverished we would be if we lost 25 percent of the world's mammals," said Smith, one of more than 100 co-authors of the report. "Within our lifetime hundreds of species could be lost as a result of our own actions, a frightening sign of what is happening to the ecosystems where they live," added Julia Marton-Lefevre, IUCN director general. "We must now set clear targets for the future to reverse this trend to ensure that our enduring legacy is not to wipe out many of our closest relatives."
consult with/trust in yOUR creators. providing more than enough of everything for everyone (without any distracting/spiritdead personal gain motives), whilst badtolling unprecedented evile, using an unlimited supply of newclear power, since/until forever. see you there?

"If my people, which are called by my name, shall humble themselves, and pray, and seek my face, and turn from their wicked ways; then will I hear from heaven, and will forgive their sin, and will heal their land."

Re:creators promote real intelligence, spiritualit (1)

jd (1658) | more than 5 years ago | (#25342521)

Just in: Creators go on to promote new book on chat-show, then win the Eurovision Song Contest.

Real AI is still a long way out.. (2, Insightful)

thrillbert (146343) | more than 5 years ago | (#25342405)

Unless someone can figure out how to make a program want something.

If you take the lower life forms into consideration, you can teach a dog to sit, lay down and roll over.. what do they want? Positive encouragement, a rub on the belly or even a treat.

But how do you teach a program to want something?

Word of caution though, don't make the mistake made in so many movies.. don't teach the program to want more information.. ;)

Re:Real AI is still a long way out.. (1)

ceoyoyo (59147) | more than 5 years ago | (#25342537)

It's quite easy to program motives or goals. Way back in university we built robots and gave them a set of "wants," each with a weighting so if they came into conflict they could be ordered.
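That kind of weighted "wants" arbitration is a few lines of code. In the sketch below, all want names and weights are hypothetical:

```python
# Each "want" carries a weight; when several are triggered at once,
# the highest-weighted one wins. Names and weights are made up.
wants = [
    ("avoid_obstacle", 1.0),
    ("recharge_battery", 0.8),
    ("explore", 0.3),
]

def choose_action(active):
    """active: set of want names currently triggered by the sensors."""
    applicable = [(name, weight) for name, weight in wants if name in active]
    if not applicable:
        return None
    return max(applicable, key=lambda nw: nw[1])[0]

print(choose_action({"explore", "recharge_battery"}))      # recharge_battery
print(choose_action({"avoid_obstacle", "recharge_battery"}))  # avoid_obstacle
```

Whether a priority table like this counts as "wanting" is exactly the philosophical question the thread is arguing about; behaviorally, though, it orders conflicting goals just as described.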

Re:Real AI is still a long way out.. (1)

johanatan (1159309) | more than 5 years ago | (#25342543)

Machines can never want anything. But, that isn't necessary either.

Simply hard-code the machine to learn without positive encouragement. The 'encouragement' requirement is a downside that can be altogether eliminated in truly intelligent machines. Why should we try to build in a detriment in an artificially constructed environment when we can start with a clean slate?

Re:Real AI is still a long way out.. (2, Insightful)

thrillbert (146343) | more than 5 years ago | (#25343113)

Teaching it to want something is not a detriment, because then it can be taught right from wrong.

Take today's youth for example, most parents today allow their kids to do whatever, no reprimand. What are they being taught? That they can do whatever they want and there are no consequences. Why not take the basics of good and bad and teach those to a machine?

Re:Real AI is still a long way out.. (1)

johanatan (1159309) | more than 5 years ago | (#25343283)

Yea, but 'right' and 'wrong' can also be hardcoded into the machine and thus save precious cycles and effort (by eliminating pure overhead). :-)

Re:Real AI is still a long way out.. (0)

Anonymous Coward | more than 5 years ago | (#25343831)

There was this neat game where robots "wanted" to get to a target position on a maze. And by adjusting the other factors (avoidance, aggressiveness, or attraction) you could affect how they developed in relation to their maze solving or goal seeking ability. The neat thing was that the robots adjusted their attributes by successive generations and some kind of evolutionary scheme. So you could punish a behavior by killing robots that exhibited it or reward a behavior by spawning more robots based on that behavior model. The result is that you could "teach" the robots to some modest extent by moving the goal around the maze and watchful "breeding". The funny thing was, robots that got "smart" also had associated files that grew geometrically in size. (Not sure how relevant that is, but might provide some kind of insight.)

Unfortunately the project seems to have frozen at the 2.0 stage of development. But it doesn't mean you still can't check it out. [nerogame.org]
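The punish/reward breeding loop described above is essentially a genetic algorithm with truncation selection. A stripped-down sketch, where the target "behavior," population size, and mutation rate are all invented for illustration:

```python
import random

random.seed(42)  # deterministic for the example

# Evolve a two-parameter "behavior" toward a target by killing off the
# worst performers and breeding mutated copies of the best -- the scheme
# the parent comment describes. Target and rates are made up.
TARGET = (0.7, 0.2)

def fitness(genome):
    # Higher is better: negative squared distance to the target behavior
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return tuple(g + random.uniform(-rate, rate) for g in genome)

population = [(random.random(), random.random()) for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # "reward" the best
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring                # "punish" the rest

best = max(population, key=fitness)
print(fitness(best) > -0.05)  # population has converged near the target
```

Moving `TARGET` mid-run and continuing the loop reproduces the "teaching by moving the goal" effect the comment mentions.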

Re:Real AI is still a long way out.. (1)

g-san (93038) | more than 5 years ago | (#25344377)

A goal of AI is intelligence and knowledge and increasing both. Every time an AI comes across a term it doesn't know and tries to associate it with something, it's implicitly wanting to "understand" something. Obviously that behaviour has to be programmed in, but it is a type of wanting. I don't know thrillbert... run a query and process thrillbert. Then it comes across your post, and realizes an AI weakness is that it doesn't want something, and that creates a ton more associations the system wants to figure out/solve to understand "want".

Great Book on AI (3, Insightful)

moore.dustin (942289) | more than 5 years ago | (#25342465)

Check out [onintelligence.org] this great book by Jeff Hawkins, creator of the Palm, called On Intelligence. His work is about understanding how the brain really works so that you can make truly intelligent machines. Fascinating stuff and firmly based in the facts of reality, which is refreshing to say the least.

Re:Great Book on AI (3, Funny)

QuantumG (50515) | more than 5 years ago | (#25344141)

He doesn't actually say anything in that book.

Re:Great Book on AI (0)

Anonymous Coward | more than 5 years ago | (#25347193)

He doesn't actually say anything in that book.

Come now. He says quite a bit, what you mean is either that his conclusions are too obvious, or you don't agree with them, or maybe that you don't believe strong AI is possible at all. But you can't say that he doesn't at least make a pretty concrete claim and lay out quite a bit of detail explaining his understanding of the situation.

In particular, his claim is that the root of human intelligence is a fairly repetitive but partially randomized structure in the neo-cortex, and that the fundamental algorithm it implements is within the realm of what we can simulate on a computer (with the obvious implication that to add "more magic" we just need to add more of these units or layers to the system, which makes for the pretty grand claim that once we understand and implement the cortical algorithm we can almost infinitely scale intelligence in an easily parallelizable way).

Given that the actual amount of genetic material that has changed in order to give us the incredible intelligence we have is quite small (certainly smaller than the amount of neuronal connections that need to be specified to do the trick), the idea that most of our impressive intelligence stems from a repeating structure is not too far off the wall. Whether it's the neo-cortex or not is another debate, of course, and the specific claims that Hawkins makes about the way the cortical algorithm works are both a bit vague and up for plenty of criticism and discussion; I don't think he made a compelling enough case that you get "magic" for free based on the functional characteristics that he laid out, though he definitely suggests the seed of the idea by suggesting hierarchies of prediction as the main mechanism of intelligence.
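The "hierarchies of prediction" idea can be loosely illustrated with a toy model in which each layer predicts that its input will repeat, and forwards only prediction errors upward. This is a cartoon of the general idea, not Hawkins' actual cortical algorithm:

```python
# Toy prediction hierarchy: each layer predicts its next input is a repeat
# of the last one, and passes a value upward only when that prediction
# fails. Higher layers therefore see far fewer, more "novel" events.
def run_hierarchy(sequence, n_layers=2):
    memory = [None] * n_layers   # last value seen per layer
    seen = [0] * n_layers        # how many inputs reach each layer
    for value in sequence:
        for i in range(n_layers):
            seen[i] += 1
            predicted = (memory[i] == value)
            memory[i] = value
            if predicted:
                break            # correctly predicted: absorbed at this layer
    return seen

print(run_hierarchy(["a", "a", "a", "b", "b", "b", "a"]))  # [7, 3]
```

The bottom layer sees every input; the layer above sees only the three surprises, which is the flavor of "prediction errors propagate up the hierarchy" that the book describes.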

Re:Great Book on AI (1)

uassholes (1179143) | more than 5 years ago | (#25345839)

If a being from another solar system drove here in its spaceship that its race built, I think it would have to be intelligent. But it's not human, so it's more interesting to know what intelligence is than to know how human brains work.

At best, it's just one example of how to produce intelligence. At worst, it's just a bunch of glop, as Richard Wallace says in the /. interview in 2002:

http://interviews.slashdot.org/article.pl?sid=02/07/26/0332225&tid=99 [slashdot.org]

Re:Great Book on AI (2, Interesting)

moore.dustin (942289) | more than 5 years ago | (#25346689)

Like I said, "His work is about understanding how the brain really works so that you can make truly intelligent machines." Your notion is dismissible, as we could not fathom what we have no clue about, that being your alien. The best we can do on earth is to look at how the most intelligent species we know, humans, actually develop and groom their intelligence.

Intelligence defined is one thing, intelligence understood biologically is something else altogether.

Re:Great Book on AI (1)

uassholes (1179143) | more than 5 years ago | (#25349089)

We also have no clue about what intelligence is. Maybe if we stopped equating human thinking with intelligence, we might make some progress.

That is not meant to be a cynical remark. It may be now that AI researchers like Minsky have sobered up, they realize that if a machine passes for human, that doesn't make it intelligent.

And if a machine is intelligent, there is no reason to suppose that it could imitate a human. How many of us can imitate another species convincingly?

The sooner that we divorce our concept of intelligence from our concept of ourselves, then the sooner we might begin to understand what intelligence is and is not.

Re:Great Book on AI (1)

moore.dustin (942289) | more than 5 years ago | (#25349541)

and if you were to take a second to look at the book I was referring to, you would see that he does. He agrees with you that the Turing Test does not prove intelligence, though he thinks the quest to pass it has its merits. He is concerned with how intelligence is developed in brains, specifically human brains, since we have consciousness. He looks at things not as "What makes me intelligent?" as you _assumed_, but more importantly, "How does this thing (the brain) work?" Seriously, you are arguing with me when I said almost nothing and chose to leave it to the reader to look at the link and see for themselves. One of the first chapters of On Intelligence, if not the intro chapter, is a critique of 'standard practices' and how they are rubbish by and large.

Re:Great Book on AI (1)

uassholes (1179143) | more than 5 years ago | (#25350135)

I took a look at the site, and it looks like an interesting book, but as Richard Wallace says in the link I posted previously, you can't understand the OS by studying the transistors (of a CPU).

Please tag badsummary (1)

clarkkent09 (1104833) | more than 5 years ago | (#25342479)

So where is this "fascinating interview" that "shows what an interesting and colourful character Loebner is"?

lame post (1)

shakuni (644197) | more than 5 years ago | (#25343049)

Was it just me, or did most people feel that this was a lame post? Hardly anything to comment on and nothing "fascinating" about the interview. Is AI being used in "search" at Google, the DARPA Urban Challenge, and oooo those "secret places" supposed to be insightful or what?

Give me a break AI is far more interesting than this c***

sorry for being nasty but we can do better on slashdot.

Who the hell is Loebner in AI? (1)

cenc (1310167) | more than 5 years ago | (#25344905)

I have a master's degree in Philosophy of AI and Language, and have studied at one point or another every aspect of the field. He is far from any sort of founding father or leading thinker. He has done more than anyone else to advance the field of bitchy refrigerators.

Re:Who the hell is Loebner in AI? (1)

rochi (930552) | more than 5 years ago | (#25345231)

While I agree that Loebner is an idiot (I'm working towards a master's degree in AI implementation), you can work on the philosophy of the thing after we've actually got some idea what it looks like. What we have right now are machines fitted to certain problem domains, capable of taking broad unparsed input from those domains and spitting out answers without having those answers coded in; not the sort of thing that needs philosophy.

Next up, Butlerian Jihad. (1)

Captain Arr Morgan (958312) | more than 5 years ago | (#25344973)

"but also due to a desire to use technology to achieve a world where no human needs to work any longer."

What's the use of the Turing test? (1)

MarginalWatcher (1055844) | more than 5 years ago | (#25345305)

Sure, we can say that a machine that passes the Turing test is "intelligent". But then what? I mean, we *are* developing AI for the good of all mankind... right? We need AIs for doing things that humans can't/won't do. Chatting online does not seem to be one of them.

Don't take this wrong (2, Insightful)

localman (111171) | more than 5 years ago | (#25349207)

I'm fascinated by AI and our attempts to understand the workings of the mind. But these days, whenever I think about it, I end up feeling that a much more fundamental problem would be to figure out how to make use of the human minds we've already got that are going to waste. Some two hundred minds are born every minute -- each one a piece of raw computing power that puts our best technology to shame. Yet we haven't really figured out how to teach many of them properly, or how to get the most benefit out of them for themselves and for society.

If we create Artificial Intelligence, what would we even do with it? We've hardly figured out how to use Natural Intelligence :/

I don't mean to imply some kind of dilemma, AI research should of course go on. I'm just more fascinated with the idea of getting all this hardware we already have put to good use. Seems there's very little advancement going on in that field. It would certainly end up applying to AI anyways, when that time comes.

Cheers.

What a waste (1)

Aikiplayer (804569) | more than 5 years ago | (#25350391)

TFA is a complete waste of electrons and the time to consume it. Press releases have more content in them. If you're going to post something, link to something worthwhile.