
Where's HAL 9000?

Soulskill posted more than 2 years ago | from the i'm-sorry-dave,-i'm-dolphin-grill-antifreeze dept.

AI 269

An anonymous reader writes "With entrants to this year's Loebner Prize, the annual Turing Test designed to identify a thinking machine, demonstrating that chatbots are still a long way from passing as convincing humans, this article asks: what happened to the quest to develop a strong AI? 'The problem Loebner has is that computer scientists in universities and large tech firms, the people with the skills and resources best-suited to building a machine capable of acting like a human, are generally not focused on passing the Turing Test. ... And while passing the Turing Test would be a landmark achievement in the field of AI, the test’s focus on having the computer fool a human is a distraction. Prominent AI researchers, like Google’s head of R&D Peter Norvig, have compared the Turing Test’s requirement that a machine fool a judge into thinking they are talking to a human to demanding that an aircraft maker construct a plane that is indistinguishable from a bird.'"


It's not just specialization, there is also fear (5, Interesting)

crazyjj (2598719) | more than 2 years ago | (#40111349)

He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back. But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.

For every utopian vision in science fiction and pop culture of a future where AI is our pal, helping us out and making our lives more leisurely, there is another dystopian counter-vision of a future where AI becomes the enemy of humans, making our lives into a nightmare. A vision of a future where AI equals, and then inevitably surpasses, human intelligence touches a very deep nerve in the human psyche. Human fear of being made obsolete by technology has a long history. And more recently, the fear of having technology become even a direct *enemy* has become more and more prevalent--from the aforementioned HAL 9000 to Skynet. There is a real dystopian counter-vision to Loebner's utopianism.

People aren't just indifferent or uninterested in AI. I think there is a part of us, maybe not even a part we're always conscious of, that's very scared of it.

Re:It's not just specialization, there is also fea (-1)

Lord Lode (1290856) | more than 2 years ago | (#40111535)

Yeah, people are just scared of the Technological Singularity [wikipedia.org] !

Re:It's not just specialization, there is also fea (3, Informative)

mcgrew (92797) | more than 2 years ago | (#40111709)

I'm not. AI is to real intelligence what margarine is to butter - it's artificial. It isn't real. You're never going to get a Turing computer to actually think, although some future chemical or other sort of machine may.

However, you could get to the point where intelligence was simulated well enough that it appeared to be sentient. [wikipedia.org]

Which leads to what I fear, that people like those in PETA will start a "machine rights" movement, where it may be illegal for me to shut off a machine I built myself!

Luckily, I'm not likely to live long enough to see it. Some of you might, though.

Re:It's not just specialization, there is also fea (1, Interesting)

Anonymous Coward | more than 2 years ago | (#40111887)

Which leads to what I fear, that people like those in PETA will start a "machine rights" movement, where it may be illegal for me to shut off a machine I built myself!

Parents of many teenagers share your frustration at being unable to permanently turn off the machines they created.

Hell, Republicans want to make it illegal to shut one of those machines off before it can even function without support from a host machine.

Re:It's not just specialization, there is also fea (1)

Jeremiah Cornelius (137) | more than 2 years ago | (#40111911)

Mental State != Computational State

Searle.

Chinese room.

http://www.youtube.com/watch?v=TryOC83PH1g [youtube.com]

Re:It's not just specialization, there is also fea (1)

mhajicek (1582795) | more than 2 years ago | (#40112029)

Re:It's not just specialization, there is also fea (1)

narcc (412956) | more than 2 years ago | (#40112321)

Counter fail!

Most "refutations" of the CRA fall in to four camps:

1) Deny it outright and posit a magical explanation (Systems reply)

2) Ignore the premise that the CR supports and pick on the illustration (most of the others)

3) Slip semantic content in and hope no one notices (robot reply)

4) Pretend that a particular system is not equivalent to other computational systems (ANNs are somehow different from TMs)

As it stands now, no one has shown that syntactic content is sufficient for semantic content.

Re:It's not just specialization, there is also fea (2)

Jeremiah Cornelius (137) | more than 2 years ago | (#40112343)

Is there a human unconscious?

Re:It's not just specialization, there is also fea (3, Insightful)

Baseclass (785652) | more than 2 years ago | (#40111989)

it's artificial. It isn't real. You're never going to get a Turing computer to actually think

Why not? We evolved into sentient beings from non-sentient organic matter; why couldn't the same thing be possible with silicon-based intelligence?

Re:It's not just specialization, there is also fea (0)

ChetOS.net (936869) | more than 2 years ago | (#40112701)

Why not? We evolved into sentient beings from non-sentient organic matter; why couldn't the same thing be possible with silicon-based intelligence?

Do you have any scientific basis for these claims or are you just making things up?

Re:It's not just specialization, there is also fea (2)

_8553454222834292266 (2576047) | more than 2 years ago | (#40112045)

Do you have any scientific basis for these claims or are you just making things up?

Re:It's not just specialization, there is also fea (5, Insightful)

Kielistic (1273232) | more than 2 years ago | (#40112201)

Computers can be used to model and compute chemical reactions. If a chemical can produce "thought" then nothing stops a computer from doing it other than computing power.

Re:It's not just specialization, there is also fea (5, Insightful)

similar_name (1164087) | more than 2 years ago | (#40112215)

You're never going to get a Turing computer to actually think, although some future chemical or other sort of machine may.

Never say never :) It is hard to say whether an AI could ever accomplish thinking (or sentience) or not. It seems to be an emergent quality, and I doubt that whether it is chemical or electrical will matter much. And for the most part, appearing sentient might as well be sentient. Outside of myself I can only assume others are sentient because they appear so and because we are genetically similar. There is not exactly a good standard or definition of what is or isn't sentient that doesn't depend on the bias of being human.

Re:It's not just specialization, there is also fea (1)

Hatta (162192) | more than 2 years ago | (#40112695)

It is hard to say whether an AI could ever accomplish thinking (or sentience) or not.

It's obvious that AI can exist. What's not obvious is whether we'll ever be smart enough to manufacture one. This is similar to the situation with extraterrestrial life: it's almost certain that it exists, but completely impractical to expect to ever contact it.

Re:It's not just specialization, there is also fea (3, Insightful)

jpate (1356395) | more than 2 years ago | (#40112339)

I'm not. AI is to real intelligence what margarine is to butter - it's artificial. It isn't real. You're never going to get a Turing computer to actually think, although some future chemical or other sort of machine may.

Why do you think that? Silicon is also a chemical. There's nothing magical about liquid chemicals.

Cognitive scientists typically try to analyze cognitive systems in terms of Marr's levels of analysis [wikipedia.org] . Cognitive systems solve some problem (the computational level) through some manipulation of percepts and memory (the algorithmic/representational level) using some physical system (the implementational level). The mapping from neurons and chemical slushes to algorithms is extremely complex, so most work focuses on providing a computational level characterization of the problem, occasionally proposing a specific algorithm. Since the same computational goal can be accomplished by different algorithms (compare bubblesort to quicksort, or particle filters to importance sampling, or audio localization in owls to audio localization in cats), and the same algorithm can be run with different implementations (consider the same source code compiled for ARM or x86), it's just a waste of time and energy to insist that we recover all of the computational, algorithmic, and implementational details simultaneously.
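(To make the levels concrete, here's a minimal Python sketch of the sorting example above -- the code is mine, invented for illustration. Both functions satisfy the same computational-level specification, "return the input, sorted"; they differ only at the algorithmic level, and either could run on any implementation you like.)

    def bubblesort(xs):
        """O(n^2): repeatedly swap adjacent out-of-order elements."""
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    def quicksort(xs):
        """O(n log n) on average: partition around a pivot and recurse."""
        if len(xs) <= 1:
            return list(xs)
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot])
                + [pivot]
                + quicksort([x for x in rest if x >= pivot]))

    # Same computational-level behavior, different algorithms:
    data = [5, 2, 9, 1]
    assert bubblesort(data) == quicksort(data) == sorted(data)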

However, you could get to the point where intelligence was simulated well enough that it appeared to be sentient. [wikipedia.org]

I've never found the Chinese room argument convincing. It just baldly asserts "of course the resulting system is not sentient!" Why not?

I disagree with the article. People haven't given up on strong AI, we've just realized that it is enormously more difficult than we originally thought. If today's best minds were to attack the problem, we'd end up with a hacked-together system that barely worked. Asking why computer scientists aren't working on strong AI is like asking why physicists aren't working on intergalactic teleportation: it's really really hard and there's a lot to accomplish on the way.

Re:It's not just specialization, there is also fea (1)

ColdWetDog (752185) | more than 2 years ago | (#40112353)

Which leads to what I fear, that people like those in PETA will start a "machine rights" movement, where it may be illegal for me to shut off a machine I built myself!

You're afraid of these people [peta.org] ? I spend more time lying awake worrying about my Furby.

Re:It's not just specialization, there is also fea (2, Interesting)

Anonymous Coward | more than 2 years ago | (#40112401)

We're machines. Very nice ones, but machines. We have information storage, base programming, learning and sensory input. All of this happens by use of our real, observable, bodily mechanisms. As far as I know there's no evidence to the contrary (read as: magic).

So it follows that, assuming we can eventually replicate the function of any real, observable mechanism, there's no reason why we can't recreate genuine, humanesque intelligence. Whether the component hardware is "wet" or not is just a manufacturing detail of meeting specs.

But yeah, AI work like we're talking about is a magic show. Shortcuts. Simulating the output of a machine that doesn't actually exist. We're faking symptoms, the best ways we know how. A magic trick can only be perfected so much before you've got to actually do the thing you've been pretending to do.

Re:It's not just specialization, there is also fea (2)

History's Coming To (1059484) | more than 2 years ago | (#40112577)

My mum and dad made an AI with its own biotech robot, and that's just with a metallurgy PhD and a Home Economics degree. It's not bad, it's been running for about 35 years non-stop, and bar a minor glitch with the tonsils and a slightly buggy human interaction module nothing has gone too badly wrong. It's virtually indistinguishable from a "real" human and some have even accused it of being sarcastic. I challenge anyone to prove it doesn't actually think (although it's not sure about that myself).

Re:It's not just specialization, there is also fea (2)

JDG1980 (2438906) | more than 2 years ago | (#40112565)

I'm not. AI is to real intelligence what margarine is to butter - it's artificial. It isn't real. You're never going to get a Turing computer to actually think, although some future chemical or other sort of machine may. However, you could get to the point where intelligence was simulated well enough that it appeared to be sentient.

Intelligence isn't a physical thing – it's a process. It makes no difference whether that process happens in meat or in silicon. This is why Searle is a moron. Any argument against artificial intelligence is actually a disguised argument in favor of Cartesian dualism. If you reject the notion that there is a "ghost in the machine," then it logically follows that the brain is a physical object, an organic computer, and strong AI must be possible.

Re:It's not just specialization, there is also fea (0)

Anonymous Coward | more than 2 years ago | (#40111853)

People are more annoyed by singularitarians than afraid of their prophesied nerd rapture.

Turing Test is a Joke (3, Insightful)

Jeremiah Cornelius (137) | more than 2 years ago | (#40111981)

It's like asking the world's best stage magician to create real hovering women.

"If you REALLY fool me, it will be true!"

Nonsense.

Re:It's not just specialization, there is also fea (1)

Dexter Herbivore (1322345) | more than 2 years ago | (#40111621)

He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back. But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.

Does that have anything to do with the progress of research? I doubt that AI researchers themselves are afraid of spawning a 'true' AI, I would think it has more to do with the practicality of the technology and resources available.

Well I Disagree (4, Insightful)

eldavojohn (898314) | more than 2 years ago | (#40111623)

He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back.

No, that's not true ... that's not at all what is holding "general AI" back. What's holding "general AI" back is that there is no way at all to implement it. Specialized AI is actually moving forward the only way we know how, with actual results. Without further research in specialized AI, we would get no closer to "generalized AI" -- and I keep using quotes around that because it's such a complete misnomer and holy grail that we aren't going to see it any time soon.

When I studied this stuff there were two hot approaches. One was logic engines and expert systems that could be generalized to the point of encompassing all knowledge. Yeah, good luck with that. How does one codify creativity? The other approach was to model neurons in software so that someday, when we have strong enough computers, they would just emulate brains and become a generalized thinking AI. Again, the further we delved into neurons, the more we realized how wrong our basic assumptions were -- let alone the infeasibility of emulating the cascading currents across them.
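(For readers who never saw the second approach up close, here is a minimal sketch of the basic unit it was built on -- a textbook artificial neuron, with hand-picked, hypothetical weights. The point is how crude this abstraction is compared to a real neuron.)

    import math

    def neuron(inputs, weights, bias):
        """Textbook artificial neuron: weighted sum through a sigmoid."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    # Hand-picked weights approximating a soft AND gate:
    print(neuron([1, 1], [4.0, 4.0], -6.0))  # ~0.88 -- "fires"
    print(neuron([1, 0], [4.0, 4.0], -6.0))  # ~0.12 -- mostly silent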

"General AI" is holding itself back in the same way that "there is no such thing as a free lunch" is holding back our free energy dreams.

But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.

We're so far from that, it amuses me to hear any semi-serious question regarding it. It is not the malice of an AI system you should fear; it is the manifestation of the incompetence of the people who developed it, resulting in an error (like sounding an alarm because a sensor misfired and responding by launching all your nuclear weapons, since that's what you perceive your enemy to have just done), that should be feared!

People aren't just indifferent or uninterested in AI. I think there is a part of us, maybe not even a part we're always conscious of, that's very scared of it.

People are obsessed by the philosophical and financial prospects of an intelligent computer system, but nobody's telling me how to implement it -- that's just hand waving so they can get to the interesting stuff. Right now, rule-based systems, heuristics, statistics, Bayes' Theorem, Support Vector Machines, etc. will get you far further than any system that is just supposed to "learn" any new environment. All successful AI to this point has been built with the entire environment in mind during construction.
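(To make "specialized AI with the environment baked in" concrete, here is a minimal naive Bayes classifier sketch -- the training data is toy data invented for illustration. Note that everything it will ever "know" is fixed at construction time, which is exactly the point above.)

    from collections import Counter
    import math

    # Toy labeled documents (invented); the whole environment is known up front.
    train = {
        "spam": ["buy cheap pills now", "cheap pills buy now now"],
        "ham":  ["meeting notes attached", "lunch meeting tomorrow"],
    }

    counts = {c: Counter(w for doc in docs for w in doc.split())
              for c, docs in train.items()}
    priors = {c: len(docs) / sum(len(d) for d in train.values())
              for c, docs in train.items()}
    vocab = {w for ctr in counts.values() for w in ctr}

    def classify(text):
        """Pick argmax_c of log P(c) + sum_w log P(w|c), add-one smoothed."""
        def score(c):
            total = sum(counts[c].values())
            return math.log(priors[c]) + sum(
                math.log((counts[c][w] + 1) / (total + len(vocab)))
                for w in text.split())
        return max(train, key=score)

    print(classify("cheap pills"))       # -> spam
    print(classify("meeting tomorrow"))  # -> ham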

Re:It's not just specialization, there is also fea (1)

Jeng (926980) | more than 2 years ago | (#40111673)

In some stories AI's are both enemies and friends.

http://www.schlockmercenary.com/2003-07-28 [schlockmercenary.com]

The issue is once an AI truly has that Intelligence part down, then you get into its motivations, and that is the part that scares people.

Can you trust the motivations of someone who is not only smarter than you, but doesn't value the same things you do in the same ways?

Whether it be a person or a machine the question comes up, and it's not a question that can truly be answered except in specific circumstances.

NO NO AND NO (4, Insightful)

gl4ss (559668) | more than 2 years ago | (#40112031)

it's not fear.
it's not "we could do it but we just don't want to".
it's not "the government has brains in a jar already and is suppressing research".
those are just excuses which make for sometimes good fiction - and sometimes a career for people selling the idea as non-fiction.

but the real reason is that it is just EXTRA FRIGGING HARD.
it's hard enough for a human who doesn't give a shit to pass a turing test. but imagine if you could really build a machine that would pass as a good judge, politician, network admin, science fiction writer... or one that could explain to us what intelligence really even is, since we are unable to do it ourselves.

it's not as hard/impossible as teleportation but close to it. just because it's been in scifi for ages doesn't mean that we're on the verge of a real breakthrough, and just because we can imagine stories about it doesn't mean that we could build a machine that could imagine those stories for us. it's not a matter of throwing money at the issue or throwing scientists at it. some see self-learning neural networks as the way to go, but that's like saying you only need to grow brain cells in a vat while talking to it and *bam* you have a person.

truth is that there's shitloads more "AI researchers" imagining ethical wishwash implications of having real AI than there are people with an idea of how to practically build one. simply because it's much easier to speculate on nonsense than to do real shit in this matter.
(in scifi there's been a recent trend to separate things into virtual intelligences, which are much more plausible -- basically advanced turing bots that wouldn't really pass the test -- which is sort of refreshing)

Re:It's not just specialization, there is also fea (0)

Anonymous Coward | more than 2 years ago | (#40112055)

Until we understand how a giant cluster of neurons in our own heads makes intelligence, how can we replicate that in software?
It's like being given 1 billion Legos and being asked to make a Statue of Liberty replica, but with no guide or pictures.

Re:It's not just specialization, there is also fea (1)

Haxagon (2454432) | more than 2 years ago | (#40112563)

We can do our best to create a neural network or other type of network that accomplishes the same level of consciousness but in a different way.

Re:It's not just specialization, there is also fea (2)

Kugrian (886993) | more than 2 years ago | (#40112161)

Maybe we just don't need it? Our closest apps to AI are Siri and whatever the Android voice app is. All they do is retrieve information. Same as a google search. Nearly everyone under 30 (and quite a few over that) grew up with computers and most know how to use them. True Turing AI at this point would only really benefit people who don't know how to find information themselves.

Re:It's not just specialization, there is also fea (1)

gl4ss (559668) | more than 2 years ago | (#40112391)

bullshit. true turing ai could do your homework. it would be really, really useful in sorting tasks, evaluating designs, coming up with mechanical designs.. it's just that people don't usually think too far when they think of the turing test.

imagine if your turing test subject was torvalds at 20 years old. imagine if you had a machine that could fool you by cobbling together a new operating system for you. an advanced enough turing test machine could supply you with plenty of new solutions to problems, and another could evaluate whether those solutions are any good. that's the kind of leap true AI would be - and we're seemingly centuries away from it. because you can express with text most things you could express with pictures too (sure, it takes more effort but still).

Re:It's not just specialization, there is also fea (2)

dissy (172727) | more than 2 years ago | (#40112741)

Our closest apps to AI are Siri and whatever the Android voice app is. All they do is retrieve information. Same as a google search.

I would say the closest "app" to what you describe, that would still fall under the category of specialized AI, would be Watson [wikipedia.org] .
It too is a huge information retrieval system, but specifically designed to play Jeopardy and play it well. It already bested the top two human players.

Of course it is still only a specialized AI engine, nowhere NEAR expert AI, and it most certainly does not think. Hell, it can't even read visually, see, hear, or do a lot of other things required to truly play a game of Jeopardy. But it is leaps and bounds more complex and advanced than Siri currently is!

To me, Siri is nothing more than a good voice recognition app combined with Wolfram Alpha [wolframalpha.com] .
I don't mean to belittle Siri in general, but in this comparison it is hard not to.

Re:It's not just specialization, there is also fea (1)

mphare (741524) | more than 2 years ago | (#40112641)

So, is the Turing Test moot? I wonder... maybe the real test is not whether a computer can fool a human into believing it's another human, but rather whether a human can fool a computer into believing it's another computer!

Re:It's not just specialization, there is also fea (0)

Anonymous Coward | more than 2 years ago | (#40112761)

I feel no fear of strong AI. I believe that creating a strong AI that can surpass us would be our greatest legacy, even if it turns out not so well for us humans. Imagine all that a vastly superior intelligence could accomplish. Imagine an intelligence which has none of our human frailties.

Why not Zoidbe^H^H Watson? (2)

sl4shd0rk (755837) | more than 2 years ago | (#40111361)

Re:Why not Zoidbe^H^H Watson? (1)

Anonymous Coward | more than 2 years ago | (#40111493)

Siri, open the pod bay doors.

Re:Why not Zoidbe^H^H Watson? (3, Funny)

Moheeheeko (1682914) | more than 2 years ago | (#40111745)

I'm afraid Apple won't let me do that, Dave.

Re:Why not Zoidbe^H^H Watson? (3, Funny)

spire3661 (1038968) | more than 2 years ago | (#40112009)

She gets all huffy when you ask her that.

Re:Why not Zoidbe^H^H Watson? (2)

Kotoku (1531373) | more than 2 years ago | (#40112295)

Sorry Dave, I have a headache.

AI research is haunted... (4, Interesting)

betterunixthanunix (980855) | more than 2 years ago | (#40111367)

Too many decades of lofty promises that never materialized have turned "AI research" into a dirty word...

Re:AI research is haunted... (2, Funny)

Anonymous Coward | more than 2 years ago | (#40111509)

The operator said that AI Research is calling from inside the house...

HAL? (4, Funny)

Anonymous Coward | more than 2 years ago | (#40111371)

Forget HAL, where is Cherry 2000?

just one question (0)

Anonymous Coward | more than 2 years ago | (#40111381)

And while passing the Turing Test would be a landmark achievement in the field of AI, the test’s focus on having the computer fool a human is a distraction. Prominent AI researchers, like Google’s head of R&D Peter Norvig, have compared the Turing Test’s requirement that a machine fool a judge into thinking they are talking to a human to demanding that an aircraft maker construct a plane that is indistinguishable from a bird.

Is it because compared the Turing Test's requirement that a machine fools a judge that you say an aircraft maker constructs a plane?

Unions (0)

Anonymous Coward | more than 2 years ago | (#40111383)

Who wants a machine that can get bored, or join a union and ask for fair pay? Even if the human-type AI is computationally cheaper, you still do not want to use it!

Re:Unions (1)

Haxagon (2454432) | more than 2 years ago | (#40112589)

Sentient machine-based lifeforms don't have to be the only machines, just a few of them. You can have advanced AI machinery, or machinery with no AI at all that doesn't need the same protection that sentient forms do.

Too hard (3, Insightful)

Hatta (162192) | more than 2 years ago | (#40111415)

Strong AI has always been the stuff of sci-fi. Not because it's impossible, but because it's impractically difficult. We can barely model how a single protein folds, even with a worldwide network of computers. Does anyone seriously expect that we can model intelligence with similar resources?

Evolution has been working on us for millions of years. It will probably take us hundreds or thousands of years before we get strong AI.

Re:Too hard (1)

JoeMerchant (803320) | more than 2 years ago | (#40111495)

It takes us over 5 years to train most humans well enough to pass a Turing test; it's reasonable to think that it might take longer to train a machine.

Re:Too hard (1)

na1led (1030470) | more than 2 years ago | (#40112149)

I don't think it would require lots of resources to model true AI; the difficulty is figuring out how it's done. It's similar to how GPS works: once you understand the physics, it's easy to make use of it.

The bots were trying too hard (1)

BanHammor (2587175) | more than 2 years ago | (#40111427)

Thing is, bots of previous years were always too "robotic" in their speech. Bots of this year were too fluent, ready to face any talk and continue it. Can we teach the bots moderation?

The same place you'll find Jetpacks, Flying cars. (0)

Anonymous Coward | more than 2 years ago | (#40111451)

It's hard to predict the difficulty of some tasks. AI won't happen anytime soon because we've learned that it's a task hundreds of orders of magnitude more complex than we thought a few decades ago.

On the other hand, I can buy a smart phone just about anywhere that can, at a whim, have a video chat with anyone, anywhere in the world, connected wirelessly to a ubiquitous universal global computer network. That, and my smart phone is many times more powerful than old mainframe computers that used to cost millions of dollars. And it holds all my music. And replaces my camera. And makes phone calls. Somewhere we overshot the Dick Tracy video watch and we didn't even notice.

Dijkstra said it best (5, Insightful)

dargaud (518470) | more than 2 years ago | (#40111467)

"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

Re:Dijkstra said it best (4, Interesting)

LateArthurDent (1403947) | more than 2 years ago | (#40111811)

"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

I can see the point, but that also applies to humans. There's a whole lot of research going on to determine exactly what it means for us to "think." A lot of it implies that maybe what we take for granted as our reasoning process to make decisions might just be justification for decisions that are already made. Take this experiment, which I first read about in The Believing Brain [amazon.com] and found also described on this site [timothycomeau.com] when I googled for it.

One of the most dramatic demonstrations of the illusion of the unified self comes from the neuroscientists Michael Gazzaniga and Roger Sperry, who showed that when surgeons cut the corpus callosum joining the cerebral hemispheres, they literally cut the self in two, and each hemisphere can exercise free will without the other one’s advice or consent. Even more disconcertingly, the left hemisphere constantly weaves a coherent but false account of the behavior chosen without its knowledge by the right. For example, if an experimenter flashes the command “WALK” to the right hemisphere (by keeping it in the part of the visual field that only the right hemisphere can see), the person will comply with the request and begin to walk out of the room. But when the person (specifically, the person’s left hemisphere) is asked why he just got up he will say, in all sincerity, “To get a Coke” – rather than, “I don’t really know” or “The urge just came over me” or “You’ve been testing me for years since I had the surgery, and sometimes you get me to do things but I don’t know exactly what you asked me to do”.

Basically, what I'm saying is that if all you want is an intelligent machine, making it think exactly like us is not what you want to do. If you want to transport people under water, you want a submarine, not a machine that can swim. However, researchers do build machines that emulate the way humans walk, or how insects glide through water. That helps us understand the mechanics of that process. Similarly, in trying to make machines that think as we do, we might understand more about ourselves.

Re:Dijkstra said it best (1)

narcc (412956) | more than 2 years ago | (#40112675)

Yeah, split-brain != split mind

Put down the pop-sci books and go check out the actual research. That particular conclusion isn't supported by the evidence at all.

humans have a compulsion to communicate (2)

peter303 (12292) | more than 2 years ago | (#40112013)

That is why we seek out each other and other intelligences in the universe. Steven Pinker captured the gist in calling it The Language Instinct. Humans go more or less crazy in perpetual, involuntary solitude.

A computer intelligence is probably the best long-term prospect for an interesting intelligence to communicate with. We've been trying for a long time to communicate with animals, spiritual beings and aliens. But these have not really panned out. A "hard A.I." would be something interesting to talk to.

Re:Dijkstra said it best (3, Insightful)

Darinbob (1142669) | more than 2 years ago | (#40112107)

A problem is that terms like "intelligence" and "reason" are very vague. People used to think that a computer could be considered intelligent if it could win a game of chess against a master, but when that happened it was dismissed because it's just databases and algorithms, not intelligence.

The bar keeps moving, and the definitions change, and ultimately the goals change. There's a bit of superstition around the word "intelligence" and some people don't want to use it for something that's easily explained, because intelligence is one of the last big mysteries of life. The original goal may have been to have computers that operate in less of a strictly hardwired way, not following predetermined steps but deriving a solution on their own. That goal was achieved decades ago. I would consider something like Macsyma to truthfully be artificial intelligence, as there is some reasoning and problem solving, but other people would reject this because it doesn't think like a human and they're using a different definition of "intelligence". Similarly, I think modern language translators like those at Google truthfully are artificial intelligence, even though we know how they work.
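(To illustrate the Macsyma flavor: a handful of rewrite rules is enough to mechanize symbolic differentiation, which looks a lot like reasoning and problem solving. This is a toy sketch of the general technique, not Macsyma's actual implementation.)

    # Expressions are nested tuples: ('+', a, b), ('*', a, b),
    # a variable name like 'x', or a number.
    def diff(e, var='x'):
        if isinstance(e, (int, float)):
            return 0                    # derivative of a constant
        if e == var:
            return 1                    # d(x)/dx = 1
        op, a, b = e
        if op == '+':                   # sum rule
            return ('+', diff(a, var), diff(b, var))
        if op == '*':                   # product rule
            return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
        raise ValueError("unknown operator: %r" % (op,))

    # d/dx (x*x + 3) -> ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 0),
    # i.e. 2x once simplified.
    print(diff(('+', ('*', 'x', 'x'), 3)))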

The goals of having computers learn and adapt and do some limited amount of reasoning based on data have been achieved. But the goals change and the definitions change.

Back in grad school I mentioned to an AI prof some advances I had seen in commercial image recognition software, and he quickly dismissed them as uninteresting because they didn't use artificial neural networks (the fad of that decade). His idea of artificial intelligence meant emulating the processes in brains rather than recreating the things that brains can do in different ways. You can't really blame academic researchers for this though; they're focused on some particular idea or method that is new, while not being as interested in things that are well understood. You don't get research grants for things people already know how to do.

That said, the "chat bot" contests are still useful in many ways. There is a need to be quick, a need for massive amounts of data, a need for adaptation, etc. Perhaps a large chunk of it is just fluff but much of it is still very useful stuff. There is plenty of opportunity to plug in new ideas from research along with old established techniques and see what happens.

Re:Dijkstra said it best (0)

Anonymous Coward | more than 2 years ago | (#40112543)

There's the running joke in AI that once it works, it's no longer AI.

Similarly, I think modern language translators like those at Google truthfully are artificial intelligence, even though we know how they work.

Machine translation is clearly an AI problem, but none of the existing systems are terribly good at it. Of course, they are good enough that you can often more or less figure out what the original text is about, so they are plenty useful.

That said, the "chat bot" contests are still useful in many ways. There is a need to be quick, a need for massive amounts of data, a need for adaptation, etc. Perhaps a large chunk of it is just fluff but much of it is still very useful stuff. There is plenty of opportunity to plug in new ideas from research along with old established techniques and see what happens.

True. Also, a working chat bot that could hold a conversation would be amazingly useful for first-level customer support. A large amount of customer support is already just reading off a script (and some of it already uses IM-like systems); having a chat bot handle customer support requests would mean not having to wade through an FAQ or wait on hold. That would be a huge plus in my book.

Too Narrow (2)

getto man d (619850) | more than 2 years ago | (#40111475)

I would argue that placing emphasis only on the Turing test itself is a distraction from the broad field of AI. For example, there is a ton of really cool work coming from various labs ( http://www.ias.informatik.tu-darmstadt.de/ [tu-darmstadt.de] , http://www.cs.berkeley.edu/~pabbeel/video_highlights.html [berkeley.edu] ).

There have been many achievements and much progress made, e.g. the Peters group's ping-pong robot, just not the ones researchers promised many years ago.

Do androids dream about the Turing Award? (1)

G3ckoG33k (647276) | more than 2 years ago | (#40111485)

Can androids win the Darwin Award, even if they have won the Turing Award?

Yes. Most likely.

Intelligence is not necessarily a prerequisite for being human.

Sentience vs. Intelligence (5, Interesting)

msobkow (48369) | more than 2 years ago | (#40111523)

I tend to think we need to split out "Artificial Sentience" from "Artificial Intelligence." Technologies used for expert systems are clearly a form of subject-matter artificial intelligence, but they are not creative nor are they designed to learn about and explore new subject materials.

Artificial Sentience, on the other hand, would necessarily incorporate learning, postulation, and exploration of entirely new ideas or "insights." I firmly believe that in order to hold a believable conversation, a machine needs sentience, not just intelligence. Being able to come to a logical conclusion or to analyze sentence structures and verbiage into models of "thought" is only a first step -- the intelligence part.

Only when a machine can come up with and hold a conversation on new topics, while being able to tie the discussion history back to earlier statements so that the whole conversation "holds together", will it be able to "fool" people. Because at that point, it won't be "fooling" anyone -- it will actually be thinking.

Re:Sentience vs. Intelligence (2)

mcgrew (92797) | more than 2 years ago | (#40112275)

Only when a machine can come up with and hold a conversation on new topics, while being able to tie the discussion history back to earlier statements so that the whole conversation "holds together", will it be able to "fool" people. Because at that point, it won't be "fooling" anyone -- it will actually be thinking.

No, it will still be smoke and mirrors. Magicians are pretty clever at making impossible things appear to happen; tricking a human into believing a machine is sentient is no different. Look up "Chinese room".

Re:Sentience vs. Intelligence (0)

Anonymous Coward | more than 2 years ago | (#40112527)

There's no proof you aren't a Chinese room, so your opinion is moot. You can either accept that other things may in fact be able to think or that other things are incapable, because the argument is the same for a human and a computer.

Re:Sentience vs. Intelligence (0)

Anonymous Coward | more than 2 years ago | (#40112631)

The Chinese room is an interesting and useful thought experiment, but it cannot be used to claim computers cannot think. The meaninglessness [wikipedia.org] section of the Wikipedia article covers this, specifically the reference to the problem of other minds [wikipedia.org] . In short, it's not clear how the assertion that the Chinese room doesn't think is different from asserting that that guy over there is a philosophical zombie [wikipedia.org] . The only difference is that you can provide a description of how the Chinese room works, while modern science has yet to fully describe the human brain.

Re:Sentience vs. Intelligence (1)

narcc (412956) | more than 2 years ago | (#40112773)

I tend to think we need to split out "Artificial Sentience" from "Artificial Intelligence."

Not familiar with the field at all, are you?

hmmmm sounds familiar? (0)

Anonymous Coward | more than 2 years ago | (#40111589)

Didn't I discuss this with the NSA's AI???

AI and chess (4, Insightful)

Zontar_Thing_From_Ve (949321) | more than 2 years ago | (#40111593)

Back in the early 1950s, it was thought that the real prize of AI was to get a computer able to beat the best human chess player consistently. The reasoning at the time was that the only way this would be possible was for breakthroughs to happen in AI where a computer could learn to think and could reason better at chess than a human. Fast forward to 10 or so years ago, when IBM realized that just by throwing money at the problem they could get a computer to play chess by brute force and beat the human champion more often than not. So I'm not surprised that some AI people discount the Turing test. I am not an expert in the field, but it seems to me that AI is a heck of a lot harder than anybody realized in the 1950s, and we may still be decades or even centuries away from the kind of AI that people 60 or so years ago thought we'd have by now.

Part of me does wonder if, just as AI research in chess took the easy way out by resorting to brute force, researchers will now just say the Turing test is not valid rather than actually try to pass it, because passing it would require breakthroughs nobody has thought of yet, and that's hard.
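(For the curious, the "brute force" in question is conceptually tiny -- below is a minimal negamax search sketch over a toy take-away game. The game and interface are invented for illustration; a real chess engine layers enormous hand-tuned evaluation, opening books, and pruning on top of this core idea.)

    def negamax(pos, moves, apply_move, evaluate):
        """Best achievable score for the side to move, by exhaustive search."""
        legal = moves(pos)
        if not legal:
            return evaluate(pos)
        return max(-negamax(apply_move(pos, m), moves, apply_move, evaluate)
                   for m in legal)

    # Toy game: take 1-3 stones from a pile; whoever cannot move has lost.
    moves = lambda n: [m for m in (1, 2, 3) if m <= n]
    apply_move = lambda n, m: n - m
    evaluate = lambda n: -1   # terminal: side to move has no moves and loses

    print(negamax(5, moves, apply_move, evaluate))  # +1: win (take 1, leave 4)
    print(negamax(4, moves, apply_move, evaluate))  # -1: multiples of 4 lose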

Re:AI and chess (2)

na1led (1030470) | more than 2 years ago | (#40112069)

Chess is a very different kind of AI. Games like this rely on weighing patterns in a matrix, very similar to statistical probability solving, which can easily be done on paper. True AI is where programs have the ability to evolve and change, and maybe even rewrite their own code. I don't think we have the ability to do that yet, though I'm sure it wouldn't require millions of lines of code.

Re:AI and chess (1)

thaig (415462) | more than 2 years ago | (#40112413)

I have thought similarly. I don't see how we can make true use of robots if they don't understand us. To understand us, to predict or anticipate what we need, I think they have to have some common experience; otherwise it would take forever to explain what you want precisely enough. Without understanding, they would be very annoying, in the same way it is annoying to work with people whose culture is so greatly at odds with yours that you can never quite interpret what they mean.

This kind of thing might eventually help:
http://apt.cs.man.ac.uk/projects/SpiNNaker/ [man.ac.uk]

The Problem is the Definition of AI (3, Insightful)

medv4380 (1604309) | more than 2 years ago | (#40111625)

Artificial Intelligence is just that: artificial. Deep Blue has zero actual intelligence, but plenty of ways of accomplishing a task (chess) that usually requires actual intelligence. The article has confused Machine Intelligence and Machine Learning with Artificial Intelligence. The problem is that in those areas no one is "best suited". If we knew what we needed to do for Machine Intelligence to work, we'd have a HAL 9000 by now. Instead we have Watson, which, though impressive, is a long way from HAL.

A quest for the robotic birds (2)

pr0t0 (216378) | more than 2 years ago | (#40111677)

Festo's Smartbird is hardly indistinguishable from a real bird, but it is much more so than, say, da Vinci's ornithopter. Slow and steady progress can be charted from the latter to the former. At some point in the future, the technology will be nearly indistinguishable from a real bird, thus passing the "Norvig Test".

That's the whole point of the Turing Test; it's supposed to be hard and maybe even impossible. It doesn't test whether current AI is useful; it tests whether AI is indistinguishable from a human. That's a pinnacle moment, and one that brings great benefits as well as serious implications.

Personally, I think it will happen; maybe not for 50, 100, 500 years...but it will happen.

What of it? (1)

Anonymous Coward | more than 2 years ago | (#40111689)

Combine Siri with a humanoid robot that can beat Kasparov, hold a driver's license, fly anything, dance, stock shelves, clean house, win Jeopardy, fill a fast food order and autonomously hunt Taliban (all of which have already been done), and you've got something that is already more compelling than some large fraction of humanity. Put tits on it and get out of the way.

Hell, HAL 9000 isn't even that interesting anymore.

Turing is like a manned space mission to Mars (0)

Anonymous Coward | more than 2 years ago | (#40111727)

Something that can capture people's imaginations regardless of their age or technical literacy (and by extension, those of taxpayers and angel investors), and can bring out everyone's competitive urges, with the winner (if any) being decided relatively cleanly.

Principles of how the ... (0)

Anonymous Coward | more than 2 years ago | (#40111767)

... mind actually works need to be reverse-engineered before we get anything approaching 'real AI'. We'll get some nice AI tools in the meantime, but we need a theory of what intelligence actually IS before we can go and build it.

We didnt reach artificial intelligence yet... (0)

gmuslera (3436) | more than 2 years ago | (#40111797)

but instead we got plenty of natural stupidity. Idiocracy is a better prediction of the future than 2001.

Peter Norvig's analogy is flawed (1)

Anonymous Coward | more than 2 years ago | (#40111803)

What he really tried to say was: the Turing Test's requirement that a machine fool a judge into thinking they are talking to a human is like demanding an aircraft maker construct a vehicle that fools a human into thinking it is a bird.

If your AI doesn't fool anyone into thinking anything other than "this isn't a real person" you fail. Deal with it by improving your AI, not complaining about the requirements. I believe there is quite a long history of Norvig-like failures in the field of human flight. The difference is that in the field of human flight many failures have been overcome, and as a result today we can actually fly.

Arguing about semantics when you fail to pass the test (read: meet the requirements) is a cop-out. We can and do demand that aircraft makers construct vehicles that traverse the medium known as the atmosphere indistinguishably from a flight-capable ornis.

I bet $1 that something like IBM's AI stuff would come closer to fooling me than Peter Norvig...

Still looking... (1)

gstrickler (920733) | more than 2 years ago | (#40111825)

for signs of natural intelligence.

Wouldn't this take phishing to another level? (0)

Anonymous Coward | more than 2 years ago | (#40111843)

This would be great!! Marketers could have a bunch of AI devices that could endlessly harass and try to sell you anything. They could lie without the slightest sense of irony or empathy for their victims. When victims attempted to sue, the owners could just state that it was a software bug and they're only accountable for a nominal fee (the price of a phone call).

Eventually, these AI devices would completely infiltrate the labor pool replacing human employees at every level. Wow! Wouldn't it be great to have a completely non-emotional AI device as your manager?

do we really need computer AI? (1)

alen (225700) | more than 2 years ago | (#40111915)

computers are so good at doing repetitive monkey work that most people don't like to do

Re:do we really need computer AI? (1)

PPH (736903) | more than 2 years ago | (#40112497)

We'll know when true AI has arrived. When we give a computer one of these mind-numbing tasks and it says, "Kiss my shiny metal ass".

True AI would dominate the world (3, Insightful)

na1led (1030470) | more than 2 years ago | (#40111927)

If a computer could think for itself, and solve problems on its own, it would logically conclude the fate of humans in less than a second. Unless we could confine that intelligence so it can't access the Internet, those who possess the technology would rule the world. Either way, super intelligence is bad for humans.

Re:True AI would dominate the world (0)

Anonymous Coward | more than 2 years ago | (#40112371)

If a computer could think for itself, and solve problems on its own, it would logically conclude the fate of humans in less than a second.

What are you basing this grandiose claim on?

An average ant has 250,000 brain cells. An average human has 100 billion. Does that difference in power mean humans can logically conclude the fate of ants in less than a second? Does that mean humans would even care to?

Re:True AI would dominate the world (0)

Anonymous Coward | more than 2 years ago | (#40112721)

Indeed. A boot walks over ants without giving a shit about them. The boot, like the wearer, doesn't bother to consider the welfare of the dead ants. Your point fails.

Strong AI is possible. But... (0)

Anonymous Coward | more than 2 years ago | (#40111953)

Why do we think that a computer is going to have to think or converse like a human? Computers and humans are different. Therefore strong AI in a computer will manifest differently than intelligence in a human.

Cheap *real* "intelligence" (1)

jemenake (595948) | more than 2 years ago | (#40111963)

The reason the quest for good AI has waned is that all of the stuff you'd use it on can be done just as cheaply through Mechanical Turk or by hiring a bunch of dudes in India to do it.

Homicidal AI's? (1)

T.E.D. (34228) | more than 2 years ago | (#40112003)

Umm... HAL-9000 was homicidal. Are we really asking for that?

Re:Homicidal AI's? (2)

citizenr (871508) | more than 2 years ago | (#40112309)

Umm... HAL-9000 was homicidal.

No he wasn't, he was just misunderstood.

Farming gold in Wow (1)

Snaller (147050) | more than 2 years ago | (#40112025)

Though once the real money auction house opens in Diablo 3 he'll move over there.

Turing (2)

Impy the Impiuos Imp (442658) | more than 2 years ago | (#40112085)

Ok, the Turing Test was a thought experiment, and not intended to be a real-world filter for useful AI. Clearly non-humanlike general-purpose intelligence would be useful regardless of the form.

The test was a thought experiment to throw down the gauntlet to CS philosophers - how would you even know another human skull, aside from yourself, was conscious or not? It doesn't even really have anything to do with intelligence per se so much as illustrating the difference between intelligence and conscious intelligence. Hence the Chinese Room, q.v.

Re:Turing (0)

Anonymous Coward | more than 2 years ago | (#40112127)

Don't tell these neckbeards that... they believe comic book heroes are real.

Hal 9000 wouldn't pass the Turing test (1)

Chris Walker (135667) | more than 2 years ago | (#40112097)

So even if HAL 9000 were here, we'd still not have a computer that could fool someone into thinking it was human. At least not with the voice they were using. Also, it was far too polite while it was killing you.

What is holding back AI? (1)

Lumpy (12016) | more than 2 years ago | (#40112135)

Processing power. We just don't have enough yet.

But it's getting really close. Cripes, we are doing things today in our pockets that only 25 years ago were utterly impossible on a $20 billion mainframe.

If the rate of growth in processing power continues, we will have a computer with the human brain's level of processing within 20 years. If we get a breakthrough or two, it could be a whole lot sooner.

What the human brain does is massive. Just the processing in the visual cortex is utterly insane in horsepower.

Re:What is holding back AI? (2)

na1led (1030470) | more than 2 years ago | (#40112271)

Actually the processing speed of our brains is very slow; it's just very efficient at what it does. We don't need faster computers, we need them to be efficient. A well-written piece of code could perform better on a Commodore 64 than a poorly written one on a supercomputer.

Re:What is holding back AI? (1)

Lumpy (12016) | more than 2 years ago | (#40112535)

Processing power does not equal speed.

Wrong way around. (1)

jythie (914043) | more than 2 years ago | (#40112197)

I think the author has the wrong end of the stick here. We have not abandoned strong AI and the Turing test to focus on more specialized systems; we are focusing on more specialized systems because we have figured out that this is a really damn hard problem, and the optimistic hopes that it would be solved quickly have given way to attacking it one step at a time. Researchers are still very interested in the long-term goal, but those in the field who are "best-suited to building a machine capable of acting like a human" know at this point that such a system is not going to emerge fully formed out of some god's head... it is going to take decades of hard work solving less sexy component problems first. Gotta learn to crawl before you can walk, and the mid-20th-century hope that we would go straight to superhuman marathon runners is long dead... and good riddance.

Wrong Question asked out of ignorance (5, Interesting)

cardhead (101762) | more than 2 years ago | (#40112207)

These sorts of articles that pop up from time to time on Slashdot are so frustrating to those of us who actually work in the field. We take an article written by someone who doesn't actually understand the field, about a contest that has always been no better than a publicity stunt*, which triggers a whole bunch of speculation by people who read Gödel, Escher, Bach and think they understand what's going on.

The answer is simple. AI researchers haven't forgotten the end goal, and it's not some cynical ploy to advance an academic career. We stopped asking the big-AI question because we realized it was an inappropriate time to ask it. By analogy: these days physicists spend a lot of time thinking about the big central unify-everything theory, and that's great. In 1700, that would have been the wrong question to ask; there were too many phenomena that we didn't understand yet (energy, EM, etc.). We realized 20 years ago that we were chasing ephemera and not making real progress, and redeployed our resources in ways to understand what the problem really was. It's too bad this doesn't fit our SciFi timetable; all we can do is apologize. And PLEASE do not mention any of that "singularity" BS.

I know, I know, -1 flamebait. Go ahead.

*Note I didn't say it was a publicity stunt, just that it was no better than one. Stuart Shieber at Harvard wrote an excellent dismantling of the idea 20 years ago.

No, that doesn't even do it justice. (1)

jonadab (583620) | more than 2 years ago | (#40112217)

> akin to demanding an aircraft maker constructs
> a plane that is indistinguishable from a bird

On the contrary, a plane that's indistinguishable from a bird may be beyond today's technology, but if so it's only beyond our current technology in definable ways. Engineers who were working on such a problem would be able to break it down into subgoals and immediately start making measurable progress.

The Turing Test is more like demanding that aircraft makers design a plane that is larger on the inside than on the outside and can travel faster than the speed of light without using any fuel or reaction mass. *If* it's even theoretically possible, we would have to revise our current fundamental understanding of how things work rather substantially in order to even begin to have any idea at all how to get started working on the problem.

Why has the quest for real strong AI fallen by the wayside? Because we've learned a lot more about computers and what they can easily be made to do. We no longer think of a computer as a "giant electronic brain" that might somehow magically become self-aware if we just give it a database of words and program it to use subject-verb-object word order or some similar ridiculously simplistic approach. We've seen what happens when you send a paragraph of text through an online translation engine from English to Japanese and back to English, and we've come to understand that computers are not, in fact, anywhere near as smart as people.

Computers are great at memorizing and searching and sorting, but they absolutely suck at understanding what any of it means, and the top AI researchers in the world do not have ANY practical ideas about how to change that. If strong AI is possible at all, it requires a scientific breakthrough that will make general relativity look like small potatoes.

Re:No, that doesn't even do it justice. (1)

JDG1980 (2438906) | more than 2 years ago | (#40112611)

The Turing Test is more like demanding that aircraft makers design a plane that is larger on the inside than on the outside and can travel faster than the speed of light without using any fuel or reaction mass. *If* it's even theoretically possible, we would have to revise our current fundamental understanding of how things work rather substantially in order to even begin to have any idea at all how to get started working on the problem.

Unless you believe that the human brain has magical properties, it must be possible to simulate its operation. Your analogy fails.

Where's HAL9000 (3, Informative)

Anonymous Coward | more than 2 years ago | (#40112367)

He's here: https://twitter.com/HAL9000_ [twitter.com]

Indistinguishable from a bird? (1)

Fls'Zen (812215) | more than 2 years ago | (#40112479)

Aircraft makers that create stealth craft have to make them indistinguishable from a bird from the perspective of radar. I recently watched a TED talk that featured an airplane that looked and flew like a bird. Is it too much to ask that AI designed to replicate humans do so effectively?

Symbol Grounding Problem (3, Interesting)

nbender (65292) | more than 2 years ago | (#40112499)

Old AI guy here (natural language processing in the late '80s).

The barrier to achieving strong AI is the Symbol Grounding Problem. In order to understand each other, we humans draw on a huge amount of shared experience which is grounded in the physical world. Trying to model that knowledge is like pulling on the end of a huge ball of string - you keep getting more string the more you pull, and ultimately there is no physical experience to anchor it to. Doug Lenat has been trying to create a semantic net modelling human knowledge since my time in the AI field, with what he now calls OpenCyc (www.opencyc.org). The reason that weak AI has had some success is that its practitioners are able to bound their problems and thus stop pulling on the string at some point.

See http://en.wikipedia.org/wiki/Symbol_grounding [wikipedia.org] .
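(A toy illustration of that ball of string -- the entries below are invented, not Cyc's actual representation. In a bare semantic net every symbol is defined only in terms of other symbols, so chasing definitions never bottoms out in physical experience.)

    # Each definition just points at more symbols.
    net = {
        "zebra":   ["horse", "stripes"],
        "horse":   ["animal", "four-legs"],
        "stripes": ["pattern", "lines"],
        "animal":  ["living-thing"],
    }

    def unpack(symbol, seen=None):
        """Chase definitions; all we ever reach is more ungrounded symbols."""
        seen = set() if seen is None else seen
        for part in net.get(symbol, []):
            if part not in seen:
                seen.add(part)
                unpack(part, seen)
        return seen

    print(sorted(unpack("zebra")))
    # ['animal', 'four-legs', 'horse', 'lines', 'living-thing',
    #  'pattern', 'stripes'] -- symbols all the way down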

OP is a troll (0)

Anonymous Coward | more than 2 years ago | (#40112503)

This question's been asked over and over again, and we're no closer to an answer than when it was asked the last time. Go read some research papers if you're interested in the current state of the field of AI.

Artificial Stupidity (2)

swm (171547) | more than 2 years ago | (#40112785)

Artificial Stupidity
http://www.salon.com/2003/02/26/loebner_part_one/ [salon.com]

Long, funny, and informative article on the history of the Loebner prize.
