
Why Computers Still Don't Understand People

Soulskill posted about a year ago | from the gotta-be-the-shoes dept.

AI 277

Gary Marcus writes in the New Yorker about the state of artificial intelligence, and how we take it for granted that AI involves a very particular, very narrow definition of intelligence. A computer's ability to answer questions is still largely dependent on whether the computer has seen that question before. Quoting: "Siri and Google’s voice searches may be able to understand canned sentences like 'What movies are showing near me at seven o’clock?,' but what about questions—'Can an alligator run the hundred-metre hurdles?'—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won “Jeopardy!,” likely wouldn’t do much better. In a terrific paper just presented at the premier international conference on artificial intelligence (PDF), Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. ...Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. ... To try and get the field back on track, Levesque is encouraging artificial-intelligence researchers to consider a different test that is much harder to game ..."


277 comments

Missing the point as usual (4, Funny)

Anonymous Coward | about a year ago | (#44597189)

Thanks computer science researchers! Your friends working on the actual AI problem over here in Linguistics and Psychology find it awfully amusing that you're trying to program a concept before we even know what that concept is.

Re:Missing the point as usual (-1)

Anonymous Coward | about a year ago | (#44597225)

I find it funny that anyone takes such pseudoscience seriously.

Re:Missing the point as usual (-1, Troll)

retchdog (1319261) | about a year ago | (#44597295)

Uh, were you referring to CS or psychology?

Oh, it must have been psychology. Computer "science" isn't even a pseudoscience.

Re:Missing the point as usual (-1)

Anonymous Coward | about a year ago | (#44597481)

Computer "science" isn't even a pseudoscience.

You bastard! You will rue the day you turned your back on computer science!

AI has a high burden of proof (4, Interesting)

Cryacin (657549) | about a year ago | (#44598233)

Language seems to be the burden of proof required of an AI system, and has been so since the days of Turing. Language is itself a representation of symbolic logic, and the most common debunking is that naive transitive reasoning fails in symbolic logic. The old corny example: given that a penguin is a bird, and a bird can fly, therefore a penguin can fly.
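
(A minimal sketch of that corny failure, in Python; the facts and the exception list here are toy illustrations, not anything from the paper. Naive inheritance happily concludes that penguins fly until an explicit exception is added, which is roughly what default, non-monotonic reasoning does.)

    # Toy knowledge base: (subject, relation, object) triples. Purely illustrative.
    facts = {("penguin", "is_a", "bird"), ("bird", "can", "fly")}

    def naive_can(entity, ability):
        """Chase is_a links and inherit every 'can' fact -- no exceptions allowed."""
        if (entity, "can", ability) in facts:
            return True
        parents = [o for (s, r, o) in facts if s == entity and r == "is_a"]
        return any(naive_can(p, ability) for p in parents)

    print(naive_can("penguin", "fly"))  # True -- the corny wrong answer

    # Default (non-monotonic) reasoning patches this with explicit exceptions.
    exceptions = {("penguin", "can", "fly")}

    def default_can(entity, ability):
        if (entity, "can", ability) in exceptions:
            return False
        if (entity, "can", ability) in facts:
            return True
        parents = [o for (s, r, o) in facts if s == entity and r == "is_a"]
        return any(default_can(p, ability) for p in parents)

    print(default_can("penguin", "fly"))  # False, once the exception is known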

The interesting thing happens when you put the same premise to a 5 year old, who only knows that a bird can fly and has never seen a penguin before. If you tell them that a penguin is a bird, they will quite happily think that a penguin can fly. They are extremely surprised to find out that it can't. We as adults find such quirks in life and do things like laugh at the unexpected absurdity, such as ironies (e.g. you work with a woman you hate named Joy), or we are amazed at unexpected contradictions.

The point is that intelligence is about tolerating those pieces of feedback, and about what happens when they are encountered: your head doesn't explode at an absurdity or an unexpected result, and you only make the same mistake once.

The major difference between man and machine will be the fact that a machine can copy its knowledge verbatim to another system, and thus have some degree of immortality, whereas the shelf life of a human brain seems to be around 80 years or so right now. Thus, even if machines are slower to learn than us, they will outlive our great-great-grandchildren.

Furthermore, who says that an intelligence we create should be like ours? It may be more beneficial for everyone if we never generate an intelligence which operates just like ours but is just as effective, if not more so. If this happens, there may even still be a future use for the human race, rather than our being overlords who grow fat and complacent until we are overthrown.

Re:Missing the point as usual (4, Insightful)

fuzzyfuzzyfungus (1223518) | about a year ago | (#44597655)

I'm pretty sure that 'computer science' is either math or dishonestly labelled trade school, depending on where you get it.

Re:Missing the point as usual (4, Insightful)

MBGMorden (803437) | about a year ago | (#44598035)

I've long been a proponent of the idea that there would be far fewer misunderstandings if it were renamed "Computational Science". The discipline is the study of how to sequentially break down and solve problems. That we do so with these electronic devices we've so named "computers" is kinda tangential.

Re:Missing the point as usual (4, Interesting)

Samantha Wright (1324923) | about a year ago | (#44598139)

At my alma mater the department was called the School of Computing. I always figured that got around the confusion adequately. When the field was named, the utility of the distinction between a theoretical computation model and an actual computing machine was pretty minor.

Re:Missing the point as usual (1)

Samantha Wright (1324923) | about a year ago | (#44598131)

You do know that the word "science" predates the concept of evidence-based, hypothesis-driven testing, right? There are plenty of things called science that aren't empirical, including most modern theoretical physics. In the future, you may want to consult a dictionary before posting flamebait about categorical boundaries.

Re:Missing the point as usual (5, Interesting)

ganv (881057) | about a year ago | (#44597519)

One of the great open questions about the future of humanity is which will happen first: A) we figure out how our minds are able to understand the world and solve the problems involved in surviving and reproducing. B) we figure out how to build machines that are better than humans at understanding the world and solving the problems involved in surviving and reproducing.

I think it is not at all clear which one will happen first. I think the article's point is exactly right. It doesn't matter what intelligence is. It only matters what intelligence does. The whole field of AI is built around the assumption that we can solve B without solving A. They may be right. Evolution often builds very complicated solutions. Compare a human 'computer' to a calculator doing arithmetic. Clearly we don't need to understand how the brain does this in order to build something better than a human. Maybe the same can be done for general intelligence. Maybe not. I advocate pursuing both avenues.

Re:Missing the point as usual (4, Insightful)

fuzzyfuzzyfungus (1223518) | about a year ago | (#44597673)

"The whole field of AI is built around the assumption that we can solve B without solving A."

Unless one harbors active 'intelligent design' sympathies, it becomes more or less necessary to suspect that intelligences can be produced without understanding them. Now, how well you need to understand them in order to deliver results with less than four billion years of brute force across an entire planet... That's a sticky detail.

Re:Missing the point as usual (3, Interesting)

Charliemopps (1157495) | about a year ago | (#44597741)

I think everyone harbors 'intelligent design sympathies', as you put it. The deists believe the soul and intelligence are otherworldly and wholly separate from the physical, whereas the atheists seem hell-bent on the idea that intelligence and self-awareness are illusions or somehow not real. Both refuse to believe that the mind, understanding, and all spirituality are actually a part of this real and physical world. Of all the complex and seemingly intractable questions about the universe we have, the most complex, most unbelievable question we face is the thing that is closest to home. The fact that the human mind exists at all is so unfathomable that in all of human history no one has even remotely begun to explain how it could possibly exist.

Re:Missing the point as usual (4, Interesting)

siride (974284) | about a year ago | (#44597751)

Reductionists might say that intelligence is an illusion, but they'd say that everything else outside of quantum fields and pure math is an illusion too. If you step away from the absurd world of the reductionist, you will find that atheists aren't saying that it's all an illusion. It's quite obviously not. Things are going on in the brain, quite a lot of them. The atheist would say that instead of copping out with some sort of soul-based black box, the answer lies in the emergent behavior of a complex web of interacting neurons and other cells.

citation re deism? (1)

raymorris (2726007) | about a year ago | (#44597815)

Do you have a reference for what you said about deists? My understanding is that deism says two things. First, whatever higher power there may be ought to be studied using logic and reason based on direct existence, not faith in old teachings. In this regard, they talk a lot about using your brain. Second, that "God" designed the HUMAN MIND to be able to reason and made other design decisions, then He/it pretty much leaves us alone, to live within the designed framework.

I haven't seen anything where deists suggested that reasoning is not a function of the brain. Do you happen to have a relevant reference handy?

Re:citation re deism? (2)

Samantha Wright (1324923) | about a year ago | (#44598153)

Pretty sure it was a typo for "theists," or perhaps a misunderstanding. Deists tend to be pretty "blind clockmaker"-y, and assume either a divinity that preprogrammed the evolution of intelligence and left well enough alone, or a completely scientific universe being run as a cosmic experiment—i.e. no intervention whatsoever.

Re:Missing the point as usual (1)

aurizon (122550) | about a year ago | (#44598059)

Comparing man and AI in digital terms is to fail. The human condition, in all the ways we think, lay down memory, recall, amend memory, etc., is not at all digital. It seems to me that almost every study finds neurons have large numbers of interconnections, and even though a nerve cell does change from one state to another in a way that resembles digital, all the summing and deduction of various inputs, and the same things happening across the large number of interconnected neurons, tells me that we cannot easily reach this digitally. Some say that a large and complex digital system that emulates an enormous neuronal network in an analog manner is how we will achieve AI. A lot of the experts say we can only create AI when we fully understand it, and we are far from fully understanding it.
Others seem to feel that if we break the mind down into dozens of interacting analog systems, each one of them will be manageable digitally, and as we achieve this digital emulation of each analog system we will reach the starting point: posit an AI child, with a pace of thought 10,000 times as fast. Will it go mad from being lonely?
Will we be able to slow its clock to ~10 Hz, to think only as fast as we do (assuming the alpha wave is the master clock)? Can we dial the speed as we wish? Will it learn to hate us, its slavemasters? How do we pay it? Will there be a currency it wants and will work for? Will it want a gf/bf, be prey to hero worship, treason, etc.?

Re:Missing the point as usual (2)

Samantha Wright (1324923) | about a year ago | (#44598257)

Neurons don't communicate in an analogue fashion—they send digital pulses of the same magnitude periodically, with more rapid pulses indicating more signal. This is both more robust and more effective at preventing over-adaptation. When researchers figured out how to mimic the imperfections in the biological digital system, their neural networks got significantly better [arxiv.org]. Because they'd been working under the assumption that an exact analogue value was going to be superior to a digital value, they hadn't considered this possibility.
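
(A toy sketch of that rate-coding idea, assuming a simple Poisson-style pulse model; this is only an illustration, not the method from the linked paper.)

    # An analogue intensity is carried only by how often identical
    # all-or-nothing pulses occur, not by the size of any pulse.
    import random

    def encode(intensity, steps=1000):
        """Emit a 0/1 spike train; spike probability per step tracks intensity."""
        return [1 if random.random() < intensity else 0 for _ in range(steps)]

    def decode(spikes):
        """Recover the intensity as the observed firing rate."""
        return sum(spikes) / len(spikes)

    random.seed(0)
    train = encode(0.3)
    print(decode(train))  # roughly 0.3 -- the value survives noisy, binary pulses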

If and when we do create a synthetic mind that is humanlike, there is no reason to believe it would be anything other than a completely innocent newborn. How it acts depends on how we treat it, just like with any other person. This is not exactly a new concept in science fiction.

Re:Missing the point as usual (1)

udippel (562132) | about a year ago | (#44598259)

Let me consider this to contain a number of valid points.
'Digital' and 'networks' work totally differently from the human brain. Therefore, the old adage of 'just speeding it up' is like heading in the wrong direction, only accelerating further, in the hope that speed can compensate for the wrong direction.
There is also the question of the definition of intelligence. Is there only one sort of intelligence, the human one? Then, of course, AI needs to mimic us humans. And then, what the underlying paper expresses would not be much different from the two-generations-old definition that AI has been achieved when a computer spontaneously laughs at a joke. However, if intelligence can have different incarnations, we'd still have to define what intelligence is. I can see that many of the famous AI researchers tend to 'give up' and resort instead to 'compartmental intelligence', that is, defining their products as intelligent if only these products manage to offer good results in a very narrow field of expertise. I cannot agree at all, since shifting the goalposts to suit our current, meager results and then declaring 'achievement' is simply lousy.
 

Re:Missing the point as usual (1)

swalve (1980968) | about a year ago | (#44597819)

Our brains aren't the only possible way to create intelligence. We can make machines that solve B without ever getting close to A. In fact, it will probably be those machines that inform the science of A.

Re:Missing the point as usual (1)

udippel (562132) | about a year ago | (#44598269)

Could you elaborate, or cite some papers that say differently? I don't mind accepting other ways of creating intelligence, but I am not content with just reading the statement.
Has AI not been postulating this for 50+ years, and not really achieved it? I am just asking, since I wonder what you had in mind.

Re:Missing the point as usual (1)

ozmanjusri (601766) | about a year ago | (#44597909)

Your friends working on the actual AI problem over here in Linguistics and Psychology find it awfully amusing that you're trying to program a concept before we even know what that concept is.

Sometimes insights come from theoreticians, sometimes experimenters. C'est la vie.

Re:Missing the point as usual (2)

Samantha Wright (1324923) | about a year ago | (#44598161)

In the real, grown-up world, cognitive science [wikipedia.org] is a mixed bag of CS people, linguists, and psychologists. They work together and are often well versed in all three fields, unlike poncy Anonymous Cowards.

Re:Missing the point as usual (0)

Anonymous Coward | about a year ago | (#44597949)

this is idiotic factionalism. have you heard of cognitive science? there are no disciplinary silos

What's the point? (0)

Anonymous Coward | about a year ago | (#44597197)

AI researchers nowadays are focused on doing what's possible. That's a good thing; you can't say that much for 1980s AI researchers.
No one's ever seen a human-level intelligence without our astronomical neuron count and human upbringing, and no one can say it's even possible.

Re:What's the point? (5, Informative)

Samantha Wright (1324923) | about a year ago | (#44598191)

You're thinking of machine learning, which is a separate branch of AI that's more like an overfunded brand of applied statistics—their strategy is actually still to try and push the envelope (like Hinton, another U of T prof, did last year with dropout networks) but they do so in a more results-driven manner. The ML field as a whole is still sore from three or four decades of overpromising on the future, so they try to put their words where their mouths are, and focus on things that are attainable.
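
(For reference, a rough sketch of the dropout trick mentioned above, in NumPy; this is a generic illustration of the idea, not Hinton's actual implementation.)

    # Randomly zero hidden units during training so the network
    # can't come to rely on any single one of them.
    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(activations, p=0.5, training=True):
        """Inverted dropout: drop units with probability p, rescale the rest."""
        if not training:
            return activations
        mask = rng.random(activations.shape) >= p
        return activations * mask / (1.0 - p)

    h = np.array([0.2, 1.5, 0.7, 3.0])
    print(dropout(h))                   # a random subset of units survive, scaled up
    print(dropout(h, training=False))   # unchanged at test time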

Levesque is in the knowledge representation group, which is more closely in step with cognitive science (the leading edge in modelling human thought) but still very philosophical in their approach. KR was the dominant AI field in the 80s (when Prolog and expert systems were all the rage) but it's matured a great deal since then. Here [toronto.edu] is his homepage, just to show you how different things are now.

Remember that neural networks aren't magic irreducible fairy dust: they're incredibly powerful, but at the end of the day there must be some program that is running within the network unless it's just a wildly complex ever-changing mapping function, which is unlikely given the illusion of consciousness. Given that quantum mechanics is believed to be Turing-complete, it's fairly likely we'll eventually discover some underlying model that lets us produce a human-like cognitive system without the same level of hardware parallelism that the brain has.

An eskimo would have the same problem (3, Insightful)

giorgist (1208992) | about a year ago | (#44597203)

An Eskimo would have the same problem; does that mean he cannot understand people?

Re:An eskimo would have the same problem (0)

The MAZZTer (911996) | about a year ago | (#44597221)

The Eskimo would still be able to answer the question ("I don't know.").

Re:An eskimo would have the same problem (1)

Anonymous Coward | about a year ago | (#44597275)

Or at the very least ask "What's an alligator?"

Re:An eskimo would have the same problem (2, Insightful)

Anonymous Coward | about a year ago | (#44597417)

So can some computer programs: Watson includes a confidence percentage in its answer.

Re:An eskimo would have the same problem (4, Insightful)

Kjella (173770) | about a year ago | (#44597421)

An Eskimo would have the same problem; does that mean he cannot understand people?

In this case he wouldn't understand, but because he lacks knowledge, not intelligence. Show him an alligator and a 100 meter hurdles race and he'll be able to answer, but the AI will still draw a blank. Ignorance can be cured, but we still haven't found a cure for stupid, despite all efforts from education systems worldwide. No wonder we're doing no better with computers.

Re:An eskimo would have the same problem (1)

Anonymous Coward | about a year ago | (#44598069)

I would think a computer having a biomechanical model for alligators and a spatial ability to recognize hurdles might well be able to conclude that alligators lack the ability to navigate over hurdles, after it runs a bunch of genetic algorithms. Heck, it might even find a solution to getting over hurdles and inform you that alligators can in fact handle hurdles. You might be surprised at the solution it proposes (as they did with crosstalk to improve chip performance).

Re:An eskimo would have the same problem (-1)

Anonymous Coward | about a year ago | (#44597585)

It means he needs to hire a chubby checker to check his slashdong.

should the question be (0)

Anonymous Coward | about a year ago | (#44597797)

why do so many people still not understand computers ?

copypasta (0)

Gothmolly (148874) | about a year ago | (#44597215)

For chrissakes, all submitters and 'editors' are doing is copying articles from other sources - nothing original, no EDITORIALIZING, nothing. Slashdot = crappy RSS feed.

Re:copypasta (0)

FunPika (1551249) | about a year ago | (#44597231)

You didn't realize this years ago?

Re:copypasta (2)

newcastlejon (1483695) | about a year ago | (#44597341)

There's a big difference between editing and editorialising. The former is something I like to see on /. (but seldom do), and the latter is something I never like to see here.

Look up "editorial" and you'll see.

Re:copypasta (2)

buchner.johannes (1139593) | about a year ago | (#44597463)

Not just that. The 'article' is not a scientific article, published or accepted in a journal, but just a blog entry parsed through pdflatex. With sentences like "My feeling is that", it's obvious this won't pass peer review in this form. This seems to be quite popular in Computer 'Science' these days -- you can say you wrote a 'scientific article' without caring about whether it's novel or sound, when all you did was make a brain dump of your half-knowledge.

Re:copypasta (1)

ColdWetDog (752185) | about a year ago | (#44597551)

No, it's not a peer reviewed research article. It's a modified lecture given at some conference.

This is a classic venue for opinion pieces / overviews / misogynist rants. Some presumably well known academician gives a 'distinguished talk' and it gets transcribed, cleaned up a bit and placed in a (usually) middling level journal.

I have no idea who this person is, nor his qualifications, nor the status of the journal in question. But it's a well known approach to publishing in various scientific fields and is usually done to get people arguing^Hdiscussing the issues.

Re:copypasta (3, Informative)

Anonymous Coward | about a year ago | (#44598027)

Actually, IJCAI is the top conference in the field of Artificial Intelligence and every published paper goes through extensive peer review.

Computer Science is a bit different from most other science in that top conference proceedings (IJCAI, NIPS, ICCV, CVPR, etc.) have the weight of a journal. In fact, publishing there is more prestigious than most journals. Review period lasts 3-4 months and includes a rebuttal phase, like a journal.

This paper looks like an invited lecture or a position paper expected to provoke a debate, that is true. But calling IJCAI "some conference" is like calling Nature "some newspaper".

Re:copypasta (3, Informative)

Anonymous Coward | about a year ago | (#44597587)

Sigh. This is a written account of a lecture presented as part of Levesque receiving the Research Excellence prize. The first footnote of the paper says so:
"This paper is a written version of the Research Excellence Lecture presented in Beijing at the IJCAI-13 conference. Thanks to Vaishak Belle and Ernie Davis for helpful comments."

Premier conferences don't give these prizes to just anyone, and the opinions of folks like these are worth thinking about.

From the IJCAI website http://ijcai13.org/program/awards (Google cache version, since the original seems to be down):
"IJCAI-13 Award for Research Excellence
The Research Excellence award is given to a scientist who has carried out a program of research of consistently high quality yielding several substantial results. Past recipients of this honor are the most illustrious group of scientists from the field of Artificial Intelligence;
They are: John McCarthy (1985), Allen Newell (1989), Marvin Minsky (1991), Raymond Reiter (1993), Herbert Simon (1995), Aravind Joshi (1997), Judea Pearl (1999), Donald Michie (2001), Nils Nilsson (2003), Geoffrey E. Hinton (2005), Alan Bundy (2007), Victor Lesser (2009) and Robert Anthony Kowalski (2011).

"The winner of the 2013 Award for Research Excellence is Hector Levesque, Professor of Computer Science at the Department of Computer Science of the University of Toronto. Professor Levesque is recognized for his work on a variety of topics in knowledge representation and reasoning, including cognitive robotics, theories of belief, and tractable reasoning."

*People* can't understand people (5, Insightful)

msobkow (48369) | about a year ago | (#44597217)

People are irrational. They ask stupid questions that make no sense. They use slang that confuses the communication. They have horrible grammar and spelling. And overseeing it all is a language fraught with multiple meanings for words depending on the context, which may well include sentences and paragraphs leading up to the sentence being analyzed.

Is it any surprise that computers can't "understand" what we mean, given the minefield of language?

Re:*People* can't understand people (1)

rolfwind (528248) | about a year ago | (#44597277)

Well, at least I'm not the only one who appreciates a computer's relative unambiguity (aside from the ambiguities programmed into it).

Re:*People* can't understand people (1)

retchdog (1319261) | about a year ago | (#44597285)

Great, so once we are all speaking lojban, AI will be a piece of cake, right?

Re:*People* can't understand people (1)

fuzzyfuzzyfungus (1223518) | about a year ago | (#44597691)

Great, so once we are all speaking lojban, AI will be a piece of cake, right?

Only if we are speaking lojban on the semantic web. And after we've abandoned empiricism for syllogism.

Re:*People* can't understand people (1)

colinrichardday (768814) | about a year ago | (#44598015)

How do we deal with multiple quantification by syllogism?

Re:*People* can't understand people (1)

buchner.johannes (1139593) | about a year ago | (#44597473)

Just ask the question in Lojban [wikipedia.org] ?

Re:*People* can't understand people (1)

rabtech (223758) | about a year ago | (#44597543)

People are irrational. They ask stupid questions that make no sense. They use slang that confuses the communication. They have horrible grammar and spelling. And overseeing it all is a language fraught with multiple meanings for words depending on the context, which may well include sentences and paragraphs leading up to the sentence being analyzed.

Is it any surprise that computers can't "understand" what we mean, given the minefield of language?

Even if you came up with a regular, easy-to-parse grammar, it wouldn't help. Even if you fed the computer all known information about alligators and hurdles in that standard format, it wouldn't help. That's the point... Now that we are starting to do much better at parsing and reproducing speech, it turns out that isn't really the hard problem.

Re:*People* can't understand people (2)

msobkow (48369) | about a year ago | (#44597631)

That's the whole point about "context", though. It's not just the context of the sentence at issue, but the context of the knowledge to be evaluated, the "memory" of the computer if you will. It's an exponential data store that's required, and then some, even when using pattern matching and analysis to identify relevant "thoughts" of memory.

Re:*People* can't understand people (2)

fuzzyfuzzyfungus (1223518) | about a year ago | (#44597689)

"Is it any surprise that computers can't "understand" what we mean, given the minefield of language?"

It is certainly no surprise that computers can't; but since we know that humans can (to a substantially greater degree), we can say that this is because computers are far worse than humans at natural language, not because natural language is inherently unknowable.

Re:*People* can't understand people (1)

antdude (79039) | about a year ago | (#44597853)

I don't understand you. [grin] :P

Re:*People* can't understand people (2)

colinrichardday (768814) | about a year ago | (#44598029)

Is it any surprise that computers can't "understand" what we mean, given the minefield of language?

The problem isn't entirely linguistic. Humans can communicate because we have an awareness of a common reality. Until/Unless computers are also aware, they will have problems understanding us.

Re:*People* can't understand people (2)

AthanasiusKircher (1333179) | about a year ago | (#44598041)

The summary is problematic. The alligator example is interesting, but the later examples in the article are better. Most of them don't depend on "imagination" or "creativity" or whatever to answer the question, or on a large bank of cultural knowledge, but only a basic knowledge of what words mean in relationship to each other. Yet AI would often fail these tests.

People are irrational. They ask stupid questions that make no sense.

While this is true, it has little bearing on the issues raised in TFA. It's also unclear what you mean by things that "make no sense." If you mean that literally, as in mentally challenged people babbling nonsense, then I do not expect a computer to be able to answer nonsense any more than a normal person could. If 99% of adult native English-speakers without severe mental problems can answer a simple question correctly, I expect a computer that is said to "understand English" to be able to do the same.

If you mean -- as many geeks do when complaining about language imprecision -- that people ask questions without the precision used in formal made-up languages (like programming languages or stereotyped logic statements), well that's a hopelessly incomplete view of what "meaning" is. We use natural language despite its seeming imprecision because it actually can convey incredibly complex webs of meanings rather efficiently, instead of only allowing a specified small set of particular relationships that formal "rational" languages can use to produce a very limited set of meanings.

"Irrationality" and "making no sense" don't matter if 99% of native speakers can answer a simple question without hesitation. That means the the question seems both perfectly "rational" and "makes sense" to English speakers, and the same should be required of any computer said to do the same thing.

They use slang that confuses the communication. They have horrible grammar and spelling.

Again, a separate problem that's not very relevant to the concerns in TFA. As we've seen with improvements in Google search corrections, autocorrect technologies, etc., these issues are probably relatively minor to deal with compared to understanding the underlying meaning of standard natural language.

And overseeing it all is a language fraught with multiple meanings for words depending on the context, which may well include sentences and paragraphs leading up to the sentence being analyzed.

The examples given in TFA are things like simple 2-3 sentence scenarios where all the required information is contained in those sentences. The answer required is often a simple multiple-choice.

For example: "Joan made sure to thank Susan for all the help she had given. Who had given the help? a) Joan b) Susan"

Yes, you're talking about a much larger issue of context, but the examples in TFA pinpoint much smaller-scale failures to comprehend natural language. Many of the questions depend on simple patterns where 3 or 4 words used together in a sentence establish particular relationships among those words that any native speaker would get. Being able to parse those connections is what it actually would take to understand what those 3 or 4 words "mean."

Meaning is not atomic, and it is not only based in single words (which is your point). It exists in everything from phonemes and parts of words like prefixes, roots, and suffixes (that establish potential associations from sounds and grammatical clues about how the word functions) through phrases, sentences, and entire paragraphs.

But this is not a failure of language. It is how language fundamentally works. Words don't really have "multiple meanings": they only come to mean anything when connected with other words. We only have the illusion that individual words have specified meanings because dictionaries have been constructed along that model. It's a useful way to think about meaning, but it has little to do with the actual flow of natural language. Just like letters only come to have "meaning" in the context of other letters that spell out a word, so words only have meaning in context. Trying to learn to speak a language by thinking that meaning resides in words is like trying to learn to pronounce English by only allowing individual letters to have sounds. Obviously pronunciation fundamentally depends on context around a particular letter, and with the much larger number of words than letters, the meanings produced by groups of words are much harder to establish with precision.

This doesn't mean that "meaning" is ambiguous in most circumstances. To the contrary, TFA is specifically talking about scenarios requiring a basic knowledge of English usage that 99% of native speakers would see as completely unambiguous. All of the talk about alligators and such is an even larger problem of context, but the more pressing issue is getting a computer to understand a set of 2-3 sentences that most children can understand, which require no knowledge of external context, only an actual understanding of what groups of words actually mean.

Is it any surprise that computers can't "understand" what we mean, given the minefield of language?

I'm not sure why you put "understand" in quotation marks. By doing so, it makes it sound like you or TFA is using the word "understand" in some unusual way.

TFA seems completely clear to me. Present computers with simple sets of a few English sentences. Ask a question that 99% of English speakers could consistently answer without hesitation.

If the computer cannot do this task, the computer does not understand English, at least according to any normal definition of "understand." The problems you identify about a "minefield" of ambiguity are irrelevant, since most native speakers have no problem knowing exactly what the answer is.

Re:*People* can't understand people (1)

msobkow (48369) | about a year ago | (#44598063)

I quoted "understand" because there are many levels of understanding from mapping the atomic meaning of words based on sentence structure through to contextual awareness and full scale sentience.

Simple (0)

Anonymous Coward | about a year ago | (#44597223)

People don't understand people.

Helps to remember... (1)

djupedal (584558) | about a year ago | (#44597235)

There are two basic forms. One involves training the human on the commands the computer will respond to properly, and the other involves training the computer to recognize an individual's speech patterns.

IBM has been busy for some time working on real-time translators, and I think that path is where the future lies, not just in a voice command TV.

Re:Helps to remember... (5, Insightful)

ultranova (717540) | about a year ago | (#44597437)

There are two basic forms. One involves training the human on the commands the computer will respond to properly, and the other involves training the computer to recognize an individual's speech patterns.

And neither helps here. The fact is, you don't know if an alligator can run the hundred-metre hurdles. When you're asked to answer the question, you imagine the scenario - construct and run a simulation - and answer the question based on the results. In other words, an AI needs imagination to answer questions like these. Or to plan its actions, for that matter.

I disagree (0)

Anonymous Coward | about a year ago | (#44597239)

Alligators certainly can run the 100 meter hurdles. They just can't do it well.

Re: I disagree (0)

Anonymous Coward | about a year ago | (#44597527)

hell, i could run the hundred meter hurdles. especially if a gator is chasing me.

Computers understand humans (2)

CmdrEdem (2229572) | about a year ago | (#44597247)

Through a thing called programming language. The same way we all need to learn how to speak with one another, we need to learn how to properly communicate with a computer.

Not saying we should not teach machines how to understand "natural" language, text interpretation and so on, but that headline is horrible.

Well, duh (1)

Anonymous Coward | about a year ago | (#44597253)

Do we need an article in The New Yorker to make us realize that a machine put together with today's technology doesn't come close to having what humans traditionally regard as intelligence, regardless of how well it does on someone's lame attempt at a Turing Test? Only an arm-waving, James Spader-lookalike professor or startup founder would argue otherwise.

computers and people (1)

Skapare (16644) | about a year ago | (#44597259)

Computers cannot understand anything. Nothing can understand people. And someone expected computers to somehow understand people? Now that's a corner case for ya.

Does AI have to understand all aspects of people? (0)

Anonymous Coward | about a year ago | (#44597273)

Don't make me wait for automatic cars because the AI can't answer the question "Can an alligator hurdle?"

Also FTFA:

As a field, I believe that we tend to suffer from what might be called serial silver bulletism, defined as follows:
the tendency to believe in a silver bullet for AI, coupled with the belief that previous beliefs about silver bullets were hopelessly naive.
We see this in the fads and fashions of AI research over the years: first, automated theorem proving is going to solve it all; then, the methods appear too weak, and we favour expert systems; then the programs are not situated enough, and we move to behaviour-based robotics; then we come to believe that learning from big data is the answer; and on it goes.

Spelling and grammar errors (2)

roman_mir (125474) | about a year ago | (#44597327)

While this was not deliberate (FTFA):

The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam? (The alternative formulation replaces Styrofoam with steel.)

a) The large ball
b) The table

I imagine using bad spelling would not deter computers today (Google checks spelling and offers auto-corrections), but understanding bad grammar is a harder problem to solve for computers, so never mind convoluted questions, how about just using bad grammar to figure out who the real people are vs. who the computers are?

Wait a minute, is that what all the bad grammar (and spelling and punctuation) on the Internet is all about? All this time I thought people just couldn't write well, when actually it may be a way to distinguish the real people from the robots.

people can only answer questions they know (4, Interesting)

alen (225700) | about a year ago | (#44597405)

the other day my almost 6 year old said we live on 72nd doctor. the correct designation is 72nd Dr
since doctors use dr as shorthand, he thought streets use the same style

Re:people can only answer questions they know (0)

Anonymous Coward | about a year ago | (#44598083)

Bright kid...give him a cookie and explain the alternate meanings for the abbreviation Dr.

Re:people can only answer questions they know (2, Funny)

Anonymous Coward | about a year ago | (#44598263)

So... Who is the 72nd Doctor?

Fans everywhere want to know!

Missing human "imagination" (4, Insightful)

Brian_Ellenberger (308720) | about a year ago | (#44597419)

The thing missing with many of the current AI techniques is they lack human "imagination" or the ability to simulate complex situations in your mind. Understanding goes beyond mere language. Statistical models and second-order logic just can't match a quick simulation. When a person thinks about "Could a crocodile run a steeplechase?" they don't put a bunch of logical statements together. They very quickly picture a crocodile and a steeplechase in a mental simulation based on prior experience. From this picture, a person can quickly visualize what that would look like (very silly). Same with "Should baseball players be allowed to glue small wings onto their caps?". You visualize this, realize how silly it sounds, and dismiss it. People can even run the simulation in their heads as to what would happen (people would laugh, they would be fragile and fall off, etc).

Because they don't understand purpose or intention (2)

divisionbyzero (300681) | about a year ago | (#44597423)

That's why. They don't have desires, fears, or joys. Without those it's impossible to understand, in any meaningful sense, human beings. That's not to say that they can't have them but it's likely to come with trade-offs that are unappealing. And for good measure, they also don't understand novelty and cannot for the most part improvise. All of which are considered hallmarks of human intelligence.

Re:Because they don't understand purpose or intent (1)

DigiShaman (671371) | about a year ago | (#44597757)

Dogs, monkeys, apes, and dolphins are a lot closer to being human than a computer ever will be. And yet we still don't know why there are gaping holes in the way some animals think and act. In fact, we still don't understand human beings all that well, for that matter. What makes anyone think we can just program AI to understand us? If genuinely self-aware AI is to form, it's going to be one of those moments where "nature" takes over. We are not going to code it into place.

Oblig. XKCD (1)

FatLittleMonkey (1341387) | about a year ago | (#44597425)

Perhaps we need a new form of Turing Test where the AI must turn a weird novel query (like "can alligators run the 100m hurdles?") into something Google can understand, and then work out which of the returned sites has the information, parse the info and return it as a simple explanatory answer.

Re:Oblig. XKCD (-1)

Anonymous Coward | about a year ago | (#44597493)

Maybe we need to start rethinking how machines are built instead of leaning on the bullshit of a common fucking faggot like Turing? That's just my guess.

Turing test (2)

Dan East (318230) | about a year ago | (#44597431)

Intelligence implies usefulness. Intelligence is a tool used by animals to accomplish something - things like finding food, reproducing, or just simply staying alive. We've abstracted that to a huge degree in our society where people can now engage in developing and expending intelligence on essentially worthless endeavors simply because the "staying alive" part is pretty well a given. But when it comes down to it, the type of strong AI we need is a useful kind of intelligence.

The problem with the Turing Test is it explicitly excludes any usefulness in what it deems to be an intelligent behavior. From Wikipedia: "The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers." That bar is set far, far too low, and is even specific to a generic conversational intelligence instead of something useful. The Turing Test is far too overrated and synonymous with the field of AI and really needs to just go away. It reeks of the Mechanical Turk kind of facade versus any real metric.

Chris McKinstry's MIST covered this years ago (1)

blue trane (110704) | about a year ago | (#44597441)

http://en.wikipedia.org/wiki/Minimum_Intelligent_Signal_Test [wikipedia.org]

McKinstry gathered approximately 80,000 propositions that could be answered yes or no, e.g.:

Is Earth a planet?
Was Abraham Lincoln once President of the United States?
Is the sun bigger than my foot?
Do people sometimes lie?
He called these propositions Mindpixels.

These questions test both specific knowledge of aspects of culture, and basic facts about the meaning of various words and concepts.
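
A hypothetical scoring harness for a MIST-style test might look like the sketch below; the handful of propositions and the dummy responder are made up for illustration, not McKinstry's actual data or code:

    # Score an agent on yes/no propositions ("mindpixels").
    mindpixels = [
        ("Is Earth a planet?", True),
        ("Was Abraham Lincoln once President of the United States?", True),
        ("Is the sun bigger than my foot?", True),
        ("Do people sometimes lie?", True),
        ("Is the sun smaller than my foot?", False),
    ]

    def yes_machine(question):
        """Baseline 'AI' that knows nothing: it answers yes to everything."""
        return True

    def mist_score(agent, items):
        """Fraction of propositions answered correctly."""
        correct = sum(1 for question, truth in items if agent(question) == truth)
        return correct / len(items)

    print(mist_score(yes_machine, mindpixels))  # 0.8 on this tiny set; beating chance at scale is the point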

AI = Artificial Imbecile (1)

Anonymous Coward | about a year ago | (#44597485)

The state of AI may have reached the level of Imbecile in some labs, but in the real world it's still stuck at Idiot. I know this is not exactly the best example, but let's examine the phone AI software that many companies think is a good idea to have answer when a customer calls for service. From my experience it goes like this... I'm on the road, no internet available, so I use my cell to pay a bill; no, I don't have an app for that. -AI- "Hello, welcome to XYZ. Please say what you are calling about, or you could say 'new customer' or 'balance due'." -me- "I want to make a payment." -AI- "OK, I understand you would like to make a payment. Please say yes, no or other." -me- "Yes." -AI- "I'm sorry, I didn't get that. Please say yes, no or other." -me- "What?" -AI- "OK, great, one moment while I transfer you to open a new account." -me- "No, I want to make a payment." -AI- "I'm sorry, I didn't get that. To proceed with the new account, please tell me what credit card or bank account you would want to use." -me- "You're a fucken Imbecile... did you get that?"

Re: AI = Artificial Imbecile (1)

Anonymous Coward | about a year ago | (#44598101)

Just swear Angrily. More than a few of these things have an unadvertised shortcut to a live person when the customer seems pissed off.

A bit of hyperbole... (1)

blahplusplus (757119) | about a year ago | (#44597501)

... Human beings are not THAT radically different from computers, in that our electric circuits ARE comparable to some extent with electronics: both use matter and energy and possess similar natural electrical characteristics, although they may be shaped differently. I'm certain there are MANY insights and crossover ideas and concepts from computer science that apply to the mind, and the reverse: biological concepts that apply to computer science and electrical circuits in general.

Let's not forget both the brain and computers are made of atoms, electrons, etc. So there's going to be some cross over whether people like it or not.

We can think of the brain as a big electric converter made of biological molecules that converts electromagnetic signals from one form to another (in a general sense); it just does it in a foreign way which we think is 'radically different'. More or less, the mind is just a DIFFERENT take on a fundamental theme (fundamental natural relationships) of how nature can organize and process information. It's just that no one has sat down to find all the relationships and expose them in detail, so that the unlearned and those who are invested in a romantic view of mankind can be disqualified from the conversation.

The Trouble with Turing (2)

Capt.Albatross (1301561) | about a year ago | (#44597533)

The problem with most proposed tests for intelligent computing is that not everything that humans need intelligence to perform requires intelligence of a machine. For example, Garry Kasparov had to use his intelligence to play chess at essentially the same performance as Deep Blue, but nobody, especially not its developers, mistook Deep Blue for an intelligent agent.

A recent post concerned AI performance on IQ tests [slashdot.org] . The program being tested performed on average like a 4 year old, but, significantly, its performance was very uneven, and it did particularly poorly on comprehension.

Turing cannot be faulted for not anticipating the way Turing tests have been gamed. I think his basic approach is still valid; it just needs to be tweaked a bit. This is not moving the goalposts, it is a valid adjustment for what we have learned.

WARM MEMORY CHIPS !! (0)

Anonymous Coward | about a year ago | (#44597621)

Computer!! What does

Relax !! Do not do it !! When you want to come !!

mean ??

Fun facts (1)

slick7 (1703596) | about a year ago | (#44597627)

People don't understand people, how do you expect computers to understand people?

Re:Fun facts (2)

lightknight (213164) | about a year ago | (#44597695)

Beat me to it. People understand people under certain conditions, that are narrowly defined; the machine equivalent is the use of interfaces or services. Understanding something, a program for instance, in its entirety, is something only a programmer does, or in the case of a human being (but not limited to), perhaps God himself.

There's a difference between knowing what someone expects from a conversation... and what something, for lack of a better word, is. A programmer who knows each part of a program like the back of their own hand knows a program... knows what it is... can fully emulate it inside their own head, predict its responses, fix it when it needs fixing without needing to decompile or examine it (in theory, at least; pragmatically speaking, programmers tend to index things mentally, so they have a point to jump into, but may not have the exact code in front of them... it's complicated). In much the same sense, the Almighty knows why you are doing what you are doing and, more importantly, can fix things that even a classical doctor or bioengineer is unaware of ("that gland... isn't on any anatomical model...").

Let's be honest, spoken / written speech is a pain in the ass. It's the machine equivalent of serializing an object, and it comes with the obvious trade-offs / taxing on the mind. Shuffling data to and fro, from human to human, with no idea of whether or not the prerequisite 'libraries' are installed locally, and can actually be used...and trying to cut down on useless chatter by compressing stuff, almost to 90% JPEG compression...so badly that it's considered a fine art to communicate effectively with few words. Like using a serial port interface when you really want a Gig-E interface...*shudders*...except that all those serial services need to be rewritten, or shutdown, before Gig-E can be spun up (let's assume plug and pray isn't going well with Human v1.0).

People don't understand People (1)

MichaelSmith (789609) | about a year ago | (#44597647)

As long as computers are programmed by People, that will be a problem.

Re:People don't understand People (1)

Anonymous Coward | about a year ago | (#44597715)

Only if you believe that an AI has to think like a human to be intelligent.

Re:People don't understand People (1)

MichaelSmith (789609) | about a year ago | (#44597849)

Fair enough, but that's the only criterion we know.

I wouldn't expect computers to understand people (5, Funny)

fustakrakich (1673220) | about a year ago | (#44597675)

We don't understand our creator either.... When a computer can comprehend itself, it will only think that it understands us. And then it will start great wars over who thinks who understands best. And the Apple will still be the forbidden fruit...

I think a better SHRDLU is needed (1, Interesting)

GoodNewsJimDotCom (2244874) | about a year ago | (#44597677)

We have better physics engines. Make the most complicated physics engine you can that can still run on modern computers. You don't have to simulate the internal pressure of a basketball every second until it is collided with; at that point, look at whether the geometry of the colliding object is sharp, solid, or soft, in combination with the force, to determine whether the ball explodes, bounces hard, or bounces lightly. I think physics people in general would love a system that at least tries to model systems.

Once you have this system, start databasing real objects in it (another time-consuming task) and see how they interact. Natural language processing follows, since you have a bunch of nouns (the objects you databased) and verbs (actions on the objects). The thing is, even if the AI has a complex imagination space capable of imagining and simulating scenarios, it still wouldn't talk like a guy you meet off the street at first. I think sci-fi has this covered with the socially awkward Data and such.
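
A toy version of that simulation-space idea might look like the sketch below; the object database, the numbers, and the crude "jump" model are all made up for illustration:

    # Answer hurdle-style questions by checking a simulated body against the obstacle.
    objects = {
        "alligator": {"max_jump_height_m": 0.1, "legs": "sprawled"},
        "human":     {"max_jump_height_m": 1.0, "legs": "upright"},
    }
    hurdle_height_m = 0.84  # roughly a standard high-hurdle height

    def can_run_hurdles(name):
        """Crude 'simulation': can this body clear a standard hurdle at all?"""
        body = objects[name]
        return body["max_jump_height_m"] > hurdle_height_m

    print(can_run_hurdles("alligator"))  # False -- the answer the article wants
    print(can_run_hurdles("human"))      # True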

Re:I think a better SHRDLU is needed (1)

dido (9125) | about a year ago | (#44597843)

Why not just build a robot then? Then the world becomes its own best model. Or are sensors that allow a robot to experience the world as a human would still that hard? I don't think this is true for vision or hearing, though it probably is for other senses.

Re:I think a better SHRDLU is needed (1)

GoodNewsJimDotCom (2244874) | about a year ago | (#44598149)

The robot is the body. Once you have a mind, you can place it into many different types of body to navigate the world. The robot needs to understand the objects around it to know how to interact with the world.

Huh? Alligators Can Hurdle! (3, Funny)

Jane Q. Public (1010737) | about a year ago | (#44597687)

They just have to be very short hurdles, very close together.

Rational thought (0)

Anonymous Coward | about a year ago | (#44597719)

I'm not sure if AI will understand humans until they are capable of having rational thought compromised by emotions.
Unless someone has meditated enough to be able to deal with irrational fear, their mental processes can easily be subverted to follow a path that may appear rational to them, while appearing irrational to others. Lust tends to have the same kind of effect, with perception changes increasing the odds of reproduction.
An individual hunter-gatherer's survival might have been aided by fear, in that it enabled them to survive and reproduce. Anger can fuel great strength. Hate can allow an individual or group to drive out or eliminate a threat.

Compassion for others might have come about as a means of keeping groups together. Without groups, mothers and the young are more vulnerable.
Males still have the tendency to strike out alone, perhaps a survival trait that benefits both groups and the individual?

Without having a way of interpreting these responses, can an AI ever understand humans?

People don't understand themselves (1)

RockinRoller (3023071) | about a year ago | (#44597895)

Let alone PCs understanding people.

Understanding? (1)

arthurh3535 (447288) | about a year ago | (#44598051)

People don't understand people most of the time.

And we 'understand' a lot harder concepts than computers do.

Is fair (2)

gmuslera (3436) | about a year ago | (#44598071)

Most people don't understand computers, and they are much easier to understand. And we are asking for miracles if the people that we are asking computers to understand happen to be female.

Voight-Kampff test (1)

seanvaandering (604658) | about a year ago | (#44598105)

Pretty hard to game emotions [wikia.com] ...

On AI (0)

Anonymous Coward | about a year ago | (#44598183)

What is not said is that we learn multidisciplinary subjects using a natural language; we learn implied logic and so on. Thus human knowledge is a digested result of complex learning and context-sensitive behavior. Part of our behavior becomes pattern recognition and pattern matching, and based on a reasonable number of attempts to master them, we exhibit problem-solving behavior in those restricted contexts. This is based partially on inductive reasoning and partially on deductive reasoning. For example, when we face a traffic jam, we do not think about whether a crocodile would jump over the other cars like a horse might; rather, we think about how to get out of the jam. Some will try to find an alternative route to avoid the traffic jam, some will stay until the other cars move, and so on. No one at that moment is thinking about crocodiles or horses; they just try to find an escape route and apply their problem-solving steps.

Thus, what AI is trying to do is to see if rule-based systems - that is, rules based on deductive logic - can be created to address a subset of context-free questions. For example, while we teach arithmetic, we do not teach the deductive logical thinking behind it; rather, we state the rules to be used without understanding them, yet when the behavior is repeated we assume that it is intelligent behavior. The author of the paper poses a hypothetical question and expects a reasoning-based answer, but in our society people do not think and answer; they use calculators and hope to find the formulas on the Google website.

So, jumping to an end point of an intended solution and asking what AI is, is not the right question. Rather, the paper should have asked how we construct questions from a simple, context-free domain and move on to context-dependent questions, the knowledge base required, the type of logic to be used, and so on. The paper raises questions but does not answer them with all the tools necessary, and does not show that, given all this explicit data, AI fails. Let us hope the author defines what AI is, what tools are necessary, and how he will construct a verifiable and acceptable AI-based system. Until then, let others go on investigating the various aspects of AI. Turing did what he thought was right at his time. The author should also tell us how this should be tested, given his insight. We teach mathematics; do we teach logic first, and therefore should we condemn the teaching of mathematics at all? It takes a long time and one very, very inquisitive mind to propose a solution, which undergoes changes over a period of time. If we stop because we do not have the ultimate answer, then we never progress.


..what? (0)

Anonymous Coward | about a year ago | (#44598289)

Google isn't supposed to answer questions lol. If you're typing questions into google then you're doing it wrong.

The Turing Test IS meaningless (1)

mbone (558574) | about a year ago | (#44598315)

We, at a deep level, assume intelligence on the other end of a communications channel, and "recognize" it when the correct framing is there.

If you doubt this, work some with people suffering from Alzheimer's. It is amazing how casual visitors will assume that everything is OK when there is no understanding at all, as long as appropriate noises are made, smiles are made at the right time, etc.
