When Will My Computer Understand Me?

timothy posted about a year ago | from the it-already-does dept.

AI 143

aarondubrow writes "For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software, with mixed results. Enabled by supercomputers at the Texas Advanced Computing Center, University of Texas researchers are using new methods to more accurately represent language so computers can interpret it. Recently, they were awarded a grant from DARPA to combine distributional representation of word meanings with Markov logic networks to better capture the human understanding of language."
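The "distributional representation" idea in the summary — that a word's meaning can be captured by the contexts it occurs in — can be sketched in a few lines. This is a toy illustration only (my own, not the UT/TACC system, and with an invented four-sentence corpus): build co-occurrence vectors and compare words by cosine similarity.

```python
# Toy sketch of distributional semantics: words that appear in similar
# contexts get similar vectors. Corpus and window size are invented for
# illustration; real systems use billions of words.
from collections import Counter
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

def context_vector(word, window=2):
    """Count words appearing within `window` positions of `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, t in enumerate(tokens):
            if t == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(w for w in tokens[lo:hi] if w != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

cat, dog, cheese = (context_vector(w) for w in ("cat", "dog", "cheese"))
print(cosine(cat, dog))     # high: cat and dog occur in similar contexts
print(cosine(cat, cheese))  # lower: different contexts
```

Scaled up to a web-sized corpus — which is where the supercomputers come in — the same idea places words with similar usage near each other, which is the "distributional" half of the grant; the Markov logic networks supply the inference half.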

143 comments

Why? (1, Offtopic)

AG the other (1169501) | about a year ago | (#43941929)

Should a computer understand us when we can't understand each other?

Re:Why? (1)

Stumbles (602007) | about a year ago | (#43941961)

Exactly.

It would be like a monkey fucking a football.

Re:Why? (4, Funny)

fuzzyfuzzyfungus (1223518) | about a year ago | (#43941969)

Actually, that would be considerably easier and cheaper to implement.

Re:Why? (2)

MatthewCCNA (1405885) | about a year ago | (#43942397)

It would make for better TV too.

Re:Why? (0)

Anonymous Coward | about a year ago | (#43942741)

It's already on Animal Planet.

Re:Why? (1)

The_Rook (136658) | about a year ago | (#43943007)

it's actually on youtube.

http://www.youtube.com/watch?v=6SgjQw0k-0w

Microsoft Hired People To Make Positive Comments (-1)

Anonymous Coward | about a year ago | (#43943981)

Representatives of Microsoft may be hanging out on the social news site voting up positive comments about the Xbox One, voting down negative comments and adding pro-Xbox comments of their own, Misty Silver says.

While at Microsoft for a meeting, Misty Silver saw and overheard some employees on Reddit. She looked at one of the employee’s screens:

“I noticed he was mass-downvoting a ton of posts and comments, and he kept switching to other tabs to make posts and comments of his own. I couldn’t make out exactly what he was posting, but I presumed he was doing RM (reputation management) and asked my boss about it later. According to my boss, MS have[sic] just brought in a huge sweep of SMM managers to handle reputation management for the Xbox One,” Silver reported.

“Reputation management” is the term social media marketers use to “pose as happy customers” on social media sites. They upvote/downvote and make comments.

http://au.businessinsider.com/microsoft-positive-reddit-comments-2013-6 [businessinsider.com]

Re:Why? (2)

riverat1 (1048260) | about a year ago | (#43942147)

I was going to say, your computer trying to understand you is probably like a man trying to understand a woman. Not likely to happen any time soon.

Re:Why? (1)

fyngyrz (762201) | about a year ago | (#43943193)

No, this is just about turning a set of sonic symbols into the equivalent text symbol(s). It's not about understanding in the sense you mean. It's not AI. It's a multi-d form of pattern recognition with contextual cues.

There's no AI yet. First of all, we don't know what I is. If it comes soon, it'll be an accident. Which is perfectly reasonable in terms of "could happen", but probably not likely.

Re:Why? (2)

riverat1 (1048260) | about a year ago | (#43944131)

Of course I know that, I was going for funny. I think it'll take getting computers a lot closer to the complexity of the human brain (and maybe something totally different than the current digital computers) before AI really starts working.

Re:Why? (1)

Anonymous Coward | about a year ago | (#43942189)

Should a computer understand us when we can't understand each other?

Why should a computer understand us when most of us don't understand a computer?

Re:Why? (0)

Anonymous Coward | about a year ago | (#43942289)

Someone mod Dave up.

Re:Why? (2)

Culture20 (968837) | about a year ago | (#43942339)

Dave's not here man!
You thought I was going to make a HAL quote, didn't you?

Re:Why? (1)

Anonymous Coward | about a year ago | (#43942999)

Dave's not here man!

You thought I was going to make a HAL quote, didn't you?

Does Howdy-Doody have wooden balls?

Probably that'll go to -1.

When Will My Computer Understand Me? (0)

Anonymous Coward | about a year ago | (#43942259)

Nobody understands you.

Misread the title... (1)

idunham (2852899) | about a year ago | (#43943921)

Saw this article and the one about PRISM, and thought for a moment that it said:
"When Will My Government Understand Me?"

And no, Offtopic is not what this is.

Re:Misread the title... (1)

idunham (2852899) | about a year ago | (#43944005)

Argh. "this"== OP ("Why should a computer understand me when we can't understand each other?")

No (0)

Anonymous Coward | about a year ago | (#43941957)

Your computer is not your waifu.

Re:No (0)

Anonymous Coward | about a year ago | (#43942135)

But I'm pretty sure I'm kowaii enough for my bento box while I masturbate to animated movies featuring women and men getting raped by monsters with tentacles.

Japan! Kowaii!!! Kekekekeke ^.^

Re:No (0)

Anonymous Coward | about a year ago | (#43942327)

Things to ponder:

Is the parent misspelling kawaii?
Is kowai not a much better word to use here?

Deep.

Maybe.. (4, Insightful)

houbou (1097327) | about a year ago | (#43942001)

Instead of trying to build computers that can understand us, we should be building computers that can learn based on stimuli. If a computer could somehow see and hear, at the very least, it could capture this information and then, over time, develop algorithms to make sense of these things. You know, the code it would generate could then be used ... Anyway, it sounds crazy, but to me it makes more sense that way. After all, we didn't just 'communicate' instantly; we learned over time.

Re:Maybe.. (2)

Dr. Tom (23206) | about a year ago | (#43942069)

Where do you think the probabilities for the Markov nets come from? They are learned from examples.

Re:Maybe.. (2)

stms (1132653) | about a year ago | (#43942149)

That is exactly what neural networks are attempting to do. It's just that the first thing we're teaching these neural networks to understand is us, specifically language/writing. Which, when you think about it, is the most logical place to start. Humans have been organizing information into writing for millennia, and computers are already hooked up to the biggest archive in history (the internet). There's a lot of useful information for computers to start learning with, and it's probably the easiest way for us to learn how best to make them learn.

Re:Maybe.. (1)

houbou (1097327) | about a year ago | (#43942177)

I think it should be more generic: instead of trying to understand us, it should try to learn by experience. At first, it's mostly about what they can see, hear, and obviously 'read', which is where the internet comes in with its vast amount of data. But in the end, communication isn't just communication, it's an experience. Maybe that's the way to do it for machines: build something that can gain from experience, let it experience events and things, let it dissect that information, and from there let it learn to express itself. Again, if that makes any sense.

Re:Maybe.. (1)

Tagged_84 (1144281) | about a year ago | (#43942511)

Again, this is what a hierarchical hidden Markov model does; it's the closest simulation of our own neuronal network. It learns by experience just like we do, just at a much faster pace.
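The "learns by experience" point can be made concrete even with an ordinary (non-hierarchical) Markov chain. The sketch below is my own toy illustration, not the model from the article: the transition probabilities are not hand-coded but estimated from observed sequences.

```python
# Toy "learning from experience" in a Markov model: estimate transition
# probabilities from observed sequences (invented weather runs).
from collections import defaultdict

observed = ["sunny sunny rainy", "sunny rainy rainy", "rainy sunny sunny"]

# Count observed transitions between adjacent states.
counts = defaultdict(lambda: defaultdict(int))
for seq in observed:
    states = seq.split()
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1

# Normalize counts into per-state probability distributions.
probs = {
    a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
    for a, nexts in counts.items()
}
print(probs["sunny"])  # e.g. {'sunny': 0.5, 'rainy': 0.5}
```

A hierarchical HMM layers such chains, with hidden states emitting sub-sequences, but the estimation-from-data principle is the same.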

Re:Maybe.. (1)

similar_name (1164087) | about a year ago | (#43942971)

Like this? [youtube.com]

Re:Maybe.. (1)

stms (1132653) | about a year ago | (#43943695)

Sorry, I didn't make this very clear. What I was saying was that neural networks are designed to learn just like humans, so they learn just as generally as humans do. Language is simply the most logical place to start teaching robots, for numerous reasons.

Re:Maybe.. (0)

Anonymous Coward | about a year ago | (#43944175)

I get what you're saying (even though no one else seems to), but it's rather difficult to explain. Despite what stms says, text is not the start. We want something a lot more generic than that: something that can actually learn the text without being taught "this is 'a' ", "this is 'b' ", etc. I do have an idea on how to do this, but it works off certain assumptions about reality and is too abstract to explain in a short post on this site. ... But I'll give you a few details anyway.

For starters, the computer would have to be taught everything like a baby. Some teams of scientists are doing this, but they aren't starting as basic as we want. In fact, they are still making the assumption that the world is 3D (and thus programming the computer's "brain" around that assumption), which limits the computer to only visualizing 3D concepts — pretty limiting when we start discussing string theory and 12D. Still, the idea of having to teach a computer from bare principles like a child is a somewhat daunting task. For one thing, you have to teach it correctly. For another, you aren't sure how it's going to take it and process it, because everything has become so complex.

More importantly - humans have a motivation for doing something. Depending on how you program this AI, it may do nothing but acquire information. Asking it a question does no good because it hasn't been programmed to respond. Furthermore, programming it to randomly try out a task isn't any good because then it's always "spitting static" so to speak.

Re:Maybe.. (1)

Stumbles (602007) | about a year ago | (#43942347)

There are lots of people with neural networks and about all they have learned in life is how to wipe their ass. But I guess it would be OK for a computer to learn that task. It will come in handy when I'm old and wearing adult diapers.

Re:Maybe.. (3, Interesting)

iggymanz (596061) | about a year ago | (#43943229)

I've watched the AI folk fart around with those things for over 25 years; they've nothing to show.

Even my preferred hobby of symbolic AI has gotten mostly nowhere in the last 30 years.

Let's just make certain animals smarter and call it a day. What could go wrong?

Yes! (0)

Anonymous Coward | about a year ago | (#43942293)

A computer scientist decades ago (I can't remember his name) once said that a user should be able to sit down at a computer, start typing at the command line, and it would figure out what needs to be done.

I have said that typing is so old-fashioned. And what do I hear? "So you think programming computers is just typing!?"

Which is NOT what I said. The fact that we are still typing code in 2013 just seems so backwards to me. And I think it's a sign of how computer science has stagnated.

I mean really, what big breakthroughs have there been in CS in the last 20 years? Or in human-computer interaction? We got faster chips (thanks, EEs!), cheaper chips (thanks, MBAs!), and commodity software (thanks, F/OSS!) — but where's the innovation in software!? There's none.

Sure, there have been some very small incremental things using eye movement, touch, and even thought. BUT here we are in 2013, using mice and typing. Typing code.

Back in the early 90s, NeXT had programming tools where one could drag and drop to create programs. The closest thing today, in 2013, is Microsoft Visual Studio. That's right: Microsoft, people!

Computer-human interaction is stuck in the 1970s.

There's no excuse.

Processing power and graphics have taken off. We now have touch screens and algorithms that can understand writing — pen writing. And yet we're interacting with computers like it was 1979.

Re:Yes! (3, Insightful)

Anonymous Coward | about a year ago | (#43942565)

Sure, because

Computer, insert line... int line counter plus equals copy to tables bracket tables dot primary, comma tables uh, arrow thingy... last... comma sequelconne... no no, not that, erase last... ess que ell connection comma date helper bracket current date time bracket brack... uh, close bracket comma get cutoff bracket close bracket close bracket, semicolon.

Sounds so much easier than a keyboard and autocomplete.

Re:Maybe.. (0)

Anonymous Coward | about a year ago | (#43943069)

Sounds like they already do that... camera, microphone, etc.

Re:Maybe.. (1)

DigiShaman (671371) | about a year ago | (#43943173)

Even if they did understand us at some level, would a computer care? I'm seriously asking, because the closest thing to a human brain is that of a monkey or ape, yet we act and interact with the world in completely different ways. Even our desires and expectations are different. In fact, compared to a computer with advanced AI, we would have better luck trying to talk with dolphins. Whatever becomes of AI, it's not going to be HAL 9000. I'm pretty confident of that.

Re:Maybe.. (0)

Anonymous Coward | about a year ago | (#43943903)

They are trying to teach computers to understand the wrong languages. They need to start with a grammatically simple spoken language and focus on understanding the basics. Over time the computer can be taught, and learn, more.

Even in everyday conversations, people have to ask clarifying questions or simply fail to understand what another person is asking. People often use one word when they mean another, forget what things are called, or simply cannot express what they are thinking. The average person can be pretty stupid sometimes. We have to let computers be stupid so we, or they, can learn how to be not stupid. To that end, programmers have to stop thinking like programmers, and think like people.

Watch kids or animals and how they learn language. Specifically, researchers should study blind people, who do not rely on visual cues, to understand how they process language.

Voice is a crappy input mechanism (3, Insightful)

alen (225700) | about a year ago | (#43942009)

It was on Star Trek only because TV and movies are dialogue-driven media. But in reality, voice limits input.

Take the Siri sports example
Ask for your team scores
Get scores
Open app for detailed sports news

Or just open the app and get the scores and news in one step. Same with any other data. Modern GUIs can present a lot more data faster than using voice to ask for it.

Re:Voice is a crappy input mechanism (5, Interesting)

fuzzyfuzzyfungus (1223518) | about a year ago | (#43942067)

But let's say, um, hypothetically and all, that a... ah... friend happened to have recordings of a few hundred million people's phone calls and needed a giant computer to be able to interpret them....

Re:Voice is a crappy input mechanism (2)

sideslash (1865434) | about a year ago | (#43942159)

Hey, smart pants, I want you to understand two things: it's an absolutely necessary tool to fight terrorism, and it didn't happen, so just forget about it.

On a different note, we are going to severely punish whoever leaked that PowerPoint presentation -- which for him/her is highly classified, but for you (once again) doesn't actually exist.

Re:Voice is a crappy input mechanism (0)

Anonymous Coward | about a year ago | (#43942233)

Is the Mars colony ready yet?

Re:Voice is a crappy input mechanism (1)

fyngyrz (762201) | about a year ago | (#43943111)

Is the Mars colony ready yet?

No, but there's that thing [freestateproject.org] going on in New Hampshire...

Re:Voice is a crappy input mechanism (0)

Anonymous Coward | about a year ago | (#43942631)

But let's say, um, hypothetically and all, that a... ah... friend happened to have recordings of a few hundred million people's phone calls and needed a giant computer to be able to interpret them....

The voice recognition part of that scenario is the carrier's executives recognizing the NSA's "give me all those fucking records, now!" command.

Re:Voice is a crappy input mechanism (4, Insightful)

R3d M3rcury (871886) | about a year ago | (#43942191)

Of course, that depends on what's going on.

(While wearing my bluetooth headset and working on my car)
"Siri, How'd the Patriots do?"
"They beat the Jets 52-10."
"Woohoo!"

Or stop working on my car, dig out my cellphone, and either launch an app for sports scores (which I have to have on my phone) or launch Safari and search (i.e., type) "Patriots Jets" and hope that Google is clever enough to figure out what I want and put it in the search results.

I agree that if I want to know the details of the game--number of butt fumbles, interceptions, and what-not--I'm going for the App. But just to get quick answers, voice is far more convenient.

Re:Voice is a crappy input mechanism (4, Insightful)

The Good Reverend (84440) | about a year ago | (#43942255)

Totally depends what you're doing. I can tell Siri "Remind me to call my mom when I get home", and she does it. If I were to input this without voice, it would require me to open up menus to the reminder app, tell the system who I'd like to call, that I'd like a location-based reminder, and what that location is (though I'm not sure iOS can do this without Siri). Even if there were a macro for it, it wouldn't be any faster than asking Siri outright by voice.

There are absolutely things that are easier to do by hand, but voice certainly has advantages.

Re:Voice is a crappy input mechanism (0)

Anonymous Coward | about a year ago | (#43943271)

That, and video is a horrible output mechanism. I can go through tons more text in the same amount of time.

As far as input goes, ever look at source code? It seems as though that is the best input mechanism for computers today to get them to do what we want them to do.

Re:Voice is a crappy input mechanism (5, Insightful)

swillden (191260) | about a year ago | (#43943471)

But in reality voice limits input

Only if you have to talk to it like you're giving input to a computer.

Imagine instead that you're talking to a person, and not just any person, but a person who has the world's knowledge at his fingertips and knows you as well as a highly competent personal assistant. Rather than asking for your team scores, you'd say "Giants?" and you'd get back the most interesting points (to you) about the game. Follow that with "anything else?" and you'd get a rundown on the rest of the sports, focusing on the parts that most interest you.

Voice input with contextual awareness, understanding of the world, and personalization will blow away anything else in terms of input speed, accuracy and effectiveness.

Modern GUI's can present a lot more data faster than using voice to ask for the data

You're conflating two issues here. One is input, the other is output. Nothing is likely to ever be as efficient as voice for input. I'm a pretty fast typist and not a particularly fast speaker, but I talk a lot faster than I type, even on a nice full-sized keyboard. Output is a different issue. Text and imagery have much higher information bandwidth than voice. However, you can't always look at a screen, so being able to use voice output at those times is still very valuable.

Even now, I find my phone's voice input features to be extremely useful. Earlier today I was a little late picking up my son from karate. While driving, I told my phone "call origin martial arts". Now, I don't have an address book entry for Origin, in fact I've never called them before. But my phone googled it, determining that the intended "Origin Martial Arts" is the one near my home, and dialed the phone number for me. That's just the most recent example, but I use voice queries with my phone a half-dozen times per day because it's faster and easier than typing or because I'm doing something that doesn't permit me to manipulate the phone a lot.

Voice is the ultimate input mechanism for most humans. Right now it's pretty good (especially if you use Google's version of it; Siri is kind of lame), and it's going to get much, much better.

Re:Voice is a crappy input mechanism (1)

White Flame (1074973) | about a year ago | (#43944097)

The problem has nothing to do with voice. Even typing in free-form questions — or, even worse, trying to tell your computer to do something with just an English (or other natural-language) command — is still way off.

voice recognition will need to be a lot better (2)

Joe_Dragon (2206452) | about a year ago | (#43942045)

voice recognition will need to be a lot better

From The Department of Redundancy Department (0)

Anonymous Coward | about a year ago | (#43942313)

From The Department of Redundancy Department

Re:voice recognition will need to be a lot better (1)

Gertlex (722812) | about a year ago | (#43942477)

Considering how bad a lot of customer service phone bots are at understanding the word, "yes"... this!

Re:voice recognition will need to be a lot better (1)

peragrin (659227) | about a year ago | (#43943161)

My favorite is to say something properly and then cough, gag, etc., and see what pops up in response. Sometimes it can't figure it out, but sometimes a fart will dial your work for you.

turn it around (1)

Anonymous Coward | about a year ago | (#43942093)

Instead of translating a human natural language into an interpretation in binary space, why not construct a conlang that sits in the middle? No expressions with double interpretations like in natural language, but also no command-line sentences that mimic for-loops and the like.

Take the best of both worlds.

Never happen (0)

Anonymous Coward | about a year ago | (#43942157)

Language is way too complex, especially when you factor in the number of different accents out there. Some heavily accented people sound barely intelligible, or like they've got throat cancer, and I wish I was joking, but I'm not.

CIA's AI seem to understand me pretty well... (0)

Anonymous Coward | about a year ago | (#43942197)

What's wrong DARPA, did they not share this 40 year old tech??? Or are we meant to think those 'replicants' in battle scenarios are stupid???

Keep the BS coming... you're lucky propaganda is legal to distribute to US citizens these days; mind you, that didn't stop you for the last 40 years or more.

Your computer will understand you... (5, Funny)

msobkow (48369) | about a year ago | (#43942203)

... as soon as men understand women.

Re:Your computer will understand you... (2)

ElectricTurtle (1171201) | about a year ago | (#43942309)

This old chestnut? Really? My dad used to peddle this bullshit to me when I was kid, and I didn't buy it then either.

I understand my mother, my wife, my daughter, my female coworkers and friends as well as I understand all male analogues throughout humanity. Those men and women who are somewhat limited in their capacity to understand people shouldn't a) project those limits onto other men and women and b) perpetuate the bullshit that it's some inherent insurmountable gap between monolithic halves of humanity. Gender is not a monolith, and treating it as such leads to discriminatory indictments lobbed carelessly in both directions (I'm looking at you, feminists).

Re:Your computer will understand you... (0)

narcc (412956) | about a year ago | (#43942553)

This old chestnut? Really? My dad used to peddle this bullshit to me when I was kid, and I didn't buy it then either.

How long has that joke been going over your head?

Gender is not a monolith, and treating it as such leads to discriminatory indictments lobbed carelessly in both directions (I'm looking at you, feminists).

Oh, you're one of those people. Why am I not surprised?

Re:Your computer will understand you... (1)

ElectricTurtle (1171201) | about a year ago | (#43942935)

Oh wow, nothing but vague dismissal and no substance. Good job.

Re:Your computer will understand you... (1)

Anonymous Coward | about a year ago | (#43943085)

There's no substance in the GP's post, but you haven't done any better in your original post. All you've done is express your opinion.

Re:Your computer will understand you... (1)

msobkow (48369) | about a year ago | (#43943765)

I doubt very much that you do.

People don't understand other people's thoughts. At best they have a mental map that roughly corresponds to the gist of what they're saying and which triggers a patterned response thought in their heads.

While what I said was very tongue-in-cheek, it's also true. Even couples who love each other spend a large part of their time essentially shrugging their shoulders and thinking "Whatever" while going along with the situation or demands in order to avoid an unnecessary fight or argument.

People are not logical in their communications. Their communications are fragmented and riddled with assumptions about culture, phrasing, and slang. Even when they speak a "common" language, such as English, people from different countries often have difficulty with casual communication because the details of the language as spoken in their homelands are so different.

Getting a voice recognition system to deal with accents is far from trivial, but even that is trivial compared to getting a system to grasp concepts from around the world.

Admit it: sometimes you don't even understand yourself, and wonder what triggered that random/perverse thought that just flashed by.

Re:Your computer will understand you... (2)

femtobyte (710429) | about a year ago | (#43944083)

I doubt very much that you do.

What does the general lack of understanding-the-other have to do specifically with women? The grandparent poster claimed to understand the women in his life as well as their "male analogues," not to have any superhuman telepathic ability. Yes, understanding other people is hard --- but not on account of their particular genitalia.

Re:Your computer will understand you... (0)

Anonymous Coward | about a year ago | (#43943207)

They bleed once a month and still manage to live. When pregnant, they carry the baby to full term for 9 months prior to giving birth to something the size of a small watermelon. A year or so later, they want another child. What is there to understand?! Women are just fucking bat-shit crazy psychopaths!

Re:Your computer will understand you... (1)

fabio67 (1372097) | about a year ago | (#43944133)

If I want someone to understand me, I'll buy a dog.

Backwards to the Future (4, Interesting)

Colonel Korn (1258968) | about a year ago | (#43942209)

I'm struck by how much more accurate and responsive Dragon NaturallySpeaking was in 1999 on my Pentium II than Siri is on my iPhone 5 and Apple's cloud servers today. Maybe it's a microphone problem, but in that case why was the $4.99 tiny microphone from RadioShack in 1999 better than the microphone in my iPhone 5 today?

Re:Backwards to the Future (1)

just_common_sense (2485226) | about a year ago | (#43943103)

It's amazing how age can affect a person's voice. ;-)

Re:Backwards to the Future (0)

Anonymous Coward | about a year ago | (#43943611)

The problem is a little company called "Vlingo", which was purchased by the NaturallySpeaking company and led them down a delusional path of "if we just record enough people and analyze it, we'll be able to interpret *everything*". The result is that with their huge voice database, recorded by a bunch of crappy digital microphones and cell phones, they've actually *distorted* and smeared the database of human speech for analysis.

Of course, they've cashed in their stock options and run for the hills since they got bought...

Re:Backwards to the Future (1)

Tastecicles (1153671) | about a year ago | (#43943705)

I use Dragon 10 (same era?), and am continually amazed by how accurately it transcribes my voice (Midlands English, of all dialects!). I use it on a regular basis to dictate documents and to voice-write* recorded audio.

*also known as "parroting", this is simply using DNS or other speech recognition software trained to your voice, a decent mike for recording (the best ones are the headset ones that settle the microphone close to the mouth) and headphones so you can hear the original audio. You repeat the audio as it comes (you can set the speed to whatever you like, slow it down for fast speakers, speed it up for slow speakers, jog back, whatever) and the software transcribes what you say without you having to retrain it using a limited sample of someone else's voice.

Re:Backwards to the Future (1)

Anonymous Coward | about a year ago | (#43944261)

I've heard tell that the folks who work on the Android voice recognition software (VRS) tried to use Dragon's software as the base for Android's VRS. Dragon is *really* *really* good when it's used with a good-to-great microphone in an environment with little-to-no background noise. When you use Dragon with the mics that you find in the typical cell phone, in the typical environments that cell phones are used, Dragon's software performs very poorly.

Try it yourself- get a cell-phone headset, put Dragon on a laptop, sit beside a busy street and dictate. :)

My computer already understands me (2)

Jason Earl (1894) | about a year ago | (#43942219)

M-x doctor

Re:My computer already understands me (1)

hcs_$reboot (1536101) | about a year ago | (#43942395)

hint [emacswiki.org]

Re:My computer already understands me (1)

Anonymous Coward | about a year ago | (#43943919)

If you needed a hint to get this joke, you should be looking for the "M-x pediatrician" command.

Re:My computer already understands me (1)

hcs_$reboot (1536101) | about a year ago | (#43944181)

Well, you should sometimes "M-x quit" and... see the world

Re:My computer already understands me (1)

Anonymous Coward | about a year ago | (#43942907)

Yeah, but when will my computer have a text editor?

When you start making sense (3, Funny)

Kjella (173770) | about a year ago | (#43942251)

Other people don't understand WTF you're talking about either, they're just better at faking it.

Language does not exist in a vacuum (3, Interesting)

mugnyte (203225) | about a year ago | (#43942279)

Each time I've researched NLP solutions, the full sensory experience is ultimately found to play a role in full context and meaning. This begins in a very tight locale and expands outward, or hops around locations/times as part of context.

Instead, when most solutions attempt to pick a "general corpus" of a language, they pick such a general version of the language that contextual associations are difficult to follow in any conversation. Even with the most ubiquitous vocabulary, such as in national broadcast news, there are assumptions that point all the way back to simplistic models of our experiences via sight/hearing, taste/smell, touch/movement and planning/expectation. Even in our best attempts, nothing like metaphor or allusion is followed well, and only the most robotic, formal language is understood. This interaction is certainly nothing "natural".

I don't believe NLP problems will be (as easily) solved until we begin to solve the "general stimulus" for input, storage, searching and recall across the senses that humans have - their true "natural" habitat that language is describing. So that when apple goes from "round" to "red" to "about 4in" to "computer" to "beatles" to "not yet in season here" to "sometimes bitter" to "my favorite of grandma's pies", etc - and onward, like potential quantum states until the rest of the conversation collapses most of them - we may be able to get a computer to really understand natural language. This isn't possible in just the manipulation of pieces of text and pointers.
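The "apple" example above — many candidate senses collapsing under conversational context — can be caricatured in a few lines. This is a hypothetical, Lesk-style sketch with invented sense profiles, not a claim about how any real system works:

```python
# Toy context-driven sense selection: each candidate sense of "apple"
# has a profile of typical context words, and the surrounding sentence
# "collapses" the choice to the sense with the most overlap.
# Sense profiles are invented for illustration.
SENSES = {
    "fruit":   {"pie", "bitter", "red", "season", "grandma", "eat"},
    "company": {"computer", "iphone", "mac", "software"},
    "music":   {"beatles", "records", "label", "band"},
}

def disambiguate(context_sentence):
    """Pick the sense whose profile overlaps the context the most."""
    tokens = set(context_sentence.lower().split())
    return max(SENSES, key=lambda s: len(SENSES[s] & tokens))

print(disambiguate("grandma made a bitter apple pie"))   # fruit
print(disambiguate("apple shipped new mac software"))    # company
```

Real distributional systems replace the hand-written sense sets with statistics over huge corpora, but the shortcoming mugnyte points out remains: the profiles come from text about the world, not from sensing the world itself.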

Re:Language does not exist in a vaccuum (1)

mugnyte (203225) | about a year ago | (#43942377)

Having re-read the article, this "dictionary without a dictionary" is a frozen-in-time corpus, which won't be able to *converse* with people because it's built from written text, which differs dramatically from speech. Now, if this body of statistical word associations were tied to the language of a single town, and everyone's spoken conversations in that town for the past 10 years, it might be easier for those particular people to use this tool, but that would still be far from using "natural" language.

When will it understand you? (3, Interesting)

Culture20 (968837) | about a year ago | (#43942291)

When computer scientist guys understand what it means to understand. Go read some epistemology books. You'll understand.

Re:When will it understand you? (1)

Anonymous Coward | about a year ago | (#43943939)

I read some, and now I really don't understand ... but at least now I know dozens of abstruse ways to say so.

Please repeat the question (1)

Anonymous Coward | about a year ago | (#43942301)

When will my computer understand me?

I am sorry, but I do not know when Mike's uterus will unhand you.

Re:Please repeat the question (1)

Tastecicles (1153671) | about a year ago | (#43943663)

Siri, is that you?

Thou shalt not make a machine (1, Redundant)

Spy Handler (822350) | about a year ago | (#43942325)

in the likeness of a man's mind. -Orange Catholic Bible

A man's mind? (1)

PPH (736903) | about a year ago | (#43943917)

A server full of porn. Mission accomplished.

ask your wife (-1)

Anonymous Coward | about a year ago | (#43942353)

...when will she?

I can answer that, Alex! (3, Insightful)

Okian Warrior (537106) | about a year ago | (#43942371)

When will your computer understand you? Not for a while.

Speech recognition is a part of AI only to the extent that the computer understands what you're saying. Sure, programs like Siri or ELIZA can put words together, but only so long as we can anticipate the form and context of the question. Siri only knows about the things it has been programmed to do, which is (unfortunately) far less than what we expect an intelligence to do.

AI has languished for about 60 years now, mostly because it is not a science. There is no formal definition of intelligence, and no roadmap for what to study. As a result, the field studies everything-and-the-kitchen-sink and says: "this is AI!".

Contrast with, for example, Complexity [wikipedia.org] : a straightforward definition drives a rich field of study, producing many interesting results.

In this particular misguided example, they are using Markov logic networks, even though the human brain does not make the Markov assumption [wikipedia.org] (*). We have no definition for intelligence, and the model they work on is demonstrably different from the only real-world example we know of. This may be interesting mathematical research, but it isn't about AI.
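For readers unfamiliar with the term: the Markov assumption means the next state depends only on the current state, not on anything earlier. A toy first-order (bigram) text model makes the limitation concrete (the training sentence is invented for illustration):

```python
import random
from collections import defaultdict

text = "the dog chased the cat and the cat chased the mouse".split()

# First-order Markov model: only P(next word | current word) is stored.
transitions = defaultdict(list)
for cur, nxt in zip(text, text[1:]):
    transitions[cur].append(nxt)

def generate(start, n, seed=0):
    """Random walk over the bigram table, starting from `start`."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        choices = transitions.get(out[-1])
        if not choices:
            break  # dead end: word never seen with a successor
        out.append(rng.choice(choices))
    return out

# After "the", the model sees {dog, cat, cat, mouse}. Whether the dog or
# the cat was mentioned two words ago is invisible to it -- that is the
# Markov assumption the comment above objects to, and what priming
# effects in humans violate.
print(" ".join(generate("the", 8)))
```

Markov logic networks are richer than this bigram chain (they combine first-order logic with probabilistic inference), but the underlying independence assumption is the point of contention.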

Not to worry - most AI research isn't really related to AI.

This is why your computer doesn't understand you, and won't for quite some time.

(*) Check out Priming [wikipedia.org] and note that psychologists have measured priming effects three days (!) after the initial stimulus.

Re:I can answer that, Alex! (1)

Black Parrot (19622) | about a year ago | (#43942427)

AI has languished for about 60 years now, mostly because it is not a science. There is no formal definition of intelligence, and no roadmap for what to study. As a result, the field studies everything-and-the-kitchen-sink and says: "this is AI!".

You're assuming that AI is supposed to mean something like HAL 9000. The overwhelming majority of AI researchers are just trying to figure out good ways to solve much smaller problems. A tiny minority are trying to model some behavioral or cognitive phenomenon. Only cranks and con artists are trying to make something like HAL 9000.

Some things AI researchers have been doing are being adopted for commerce and industry. And that appears to be accelerating.

Re:I can answer that, Alex! (4, Interesting)

narcc (412956) | about a year ago | (#43942641)

Yes, computationalism is long dead. Now, can we stop using the term AI? Keeping the term around serves only to further confuse the general public and decision-makers both public and private. I'd go as far as to say that the continued misuse of the term is precisely what has kept the cranks and con artists in business!

Power vs algorithm (2)

hcs_$reboot (1536101) | about a year ago | (#43942387)

Is the "lack" of power of current computers an excuse for not being able to make a "clever" computer? In other words, is the main problem computing power, or the design of the algorithms that run on the computer (power vs. method)? Hard to say until someone actually builds that clever computer, but the recent "history" of electronic devices leads me to think the problem is the method (the algorithm).

Re:Power vs algorithm (0)

Anonymous Coward | about a year ago | (#43943255)

The most powerful supercomputers currently in existence are probably more powerful (in sheer hardware terms) than a human brain. Some of these machines are, in fact, being used to study cognitive science and AI. So yeah, the algorithms are the bottleneck at this point.

The interesting thing is, as hardware power grows, you can do the same amount with a less cleverly designed algorithm. You gradually approach the point of essentially being able to bruteforce human-level intelligent behavior.

Re:Power vs algorithm (2)

hcs_$reboot (1536101) | about a year ago | (#43944029)

Interesting, indeed. The brain's power comes mainly from a huge, truly parallel 3D network; emulating that with current technologies (simultaneous memory accesses, etc.) makes raw computer power a somewhat relative measure...

As for "bruteforcing" the brain's behavior, that's probably not so simple. The Travelling Salesman problem, for instance, while very hard to solve efficiently, at least has a brute-force solution that is easy to implement (even if it would take a long time - M years - to complete with even a rather small number of cities...). The brain? Do we even know how to model the "problem"? Only then could we brute-force it... That makes the TSP look rather easy in comparison...
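The brute-force TSP solution the comment alludes to really is only a few lines; the catch is purely that it scales as (n-1)!. A sketch, with made-up city coordinates:

```python
from itertools import permutations
from math import dist, inf

# Made-up city coordinates for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (3, 1)}

def tour_length(order):
    # Closed tour: the last leg returns to the starting city.
    return sum(dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

def brute_force_tsp(start="A"):
    """Try all (n-1)! orderings of the remaining cities; keep the shortest."""
    rest = [c for c in cities if c != start]
    best, best_len = None, inf
    for perm in permutations(rest):
        order = [start, *perm]
        length = tour_length(order)
        if length < best_len:
            best, best_len = order, length
    return best, best_len

tour, length = brute_force_tsp()
```

Five cities means 24 tours to check; fifty cities means ~6×10^62. The comment's point stands: for the brain we cannot even write down the equivalent of `cities` and `tour_length`, so there is nothing to enumerate.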

Dear computer (1)

tehlinux (896034) | about a year ago | (#43942437)

When I say nothing is bothering me, it means something is actually bothering me.

Never. (0)

Anonymous Coward | about a year ago | (#43942441)

On two occasions I have been asked, "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

--Charles Babbage, Passages from the Life of a Philosopher

Your computer will never understand you, until you first learn how to communicate with it.

It may learn to correctly interpret your English language instructions in a manner approaching that of another human. But understanding it will not be.

Perhaps... (0)

Anonymous Coward | about a year ago | (#43942525)

Perhaps more cunning linguists are needed?

When Will My Computer Understand Me? (0)

Anonymous Coward | about a year ago | (#43942739)

When Will My Computer Understand Me?

The day after your wife does...

My computer already understands me. (0)

Anonymous Coward | about a year ago | (#43942743)

Rather than to wait for Italians to start conversing in English, just learn Italian. It's the same with computers.

Wrong approach (1)

jalvarez13 (1321457) | about a year ago | (#43943323)

I'll repeat what I said in a related thread:

"Larry Page's advisor at Stanford, Terry Winograd, wrote a book with Fernando Flores in 1987 titled Understanding Computers and Cognition.
It is a profound critique of the mental representation approach, based on biological and philosophical considerations. A must read for anybody interested in the AI field."

Language is not precise (1)

Darkness404 (1287218) | about a year ago | (#43943719)

Language is not precise, and computers like precision. The same words can mean entirely different things depending on context, where the speaker is from, how they say the words, etc. Furthermore, language evolves at a very rapid rate; new words are created daily. We're used to language and the vagueness it implies, but that translates very poorly to computer logic, and it will never be perfect because it relies on variables that the computer will never know.

Needs strong AI, i.e. not any time soon (1)

gweihir (88907) | about a year ago | (#43943823)

Currently, there are not even any convincing theories of how strong AI could be implemented. That indicates it is >50 years in the future, but it could also be >1000 years away, or never. There may be fundamental physical limits at play here. All the people promising this based on NLP databases, Markov models, etc. are lying and usually know it.

The day your computer can have an imagination (1)

GoodNewsJimDotCom (2244874) | about a year ago | (#43943853)

I theorized about AI back in 2002: I figure natural language is straightforward if you describe it in a 3D imagination. My old page [goodnewsjim.com]. I'm not really tempted to take on AI as a solo project myself, though, as it would be over a lifetime of coding, and there is no profit in it until you have it completed. What's the point in being intelligent, hard-working and broke?

Relevant (0)

Anonymous Coward | about a year ago | (#43944193)

http://www.youtube.com/watch?v=Ak2Kc_KO_3A

Language is the problem (1)

RedHackTea (2779623) | about a year ago | (#43944317)

Switch to a universal language that is easier to understand (for both computers and humans). Simple solution: have everyone learn it. You're trying to put a rectangular prism into a round hole. Until we have a rectangular hole, use a cylinder...