
Why Google Hired Ray Kurzweil

samzenpus posted about 2 years ago | from the getting-the-scoop dept.


An anonymous reader writes "Nataly Kelly writes in the Huffington Post about Google's strategy of hiring Ray Kurzweil and how the company likely intends to use language translation to revolutionize the way we share information. From the article: 'Google Translate is not just a tool that enables people on the web to translate information. It's a strategic tool for Google itself. The implications of this are vast and go beyond mere language translation. One implication might be a technology that can translate from one generation to another. Or how about one that slows down your speech or turns up the volume for an elderly person with hearing loss? That enables a stroke victim to use the clarity of speech he had previously? That can pronounce using your favorite accent? That can convert academic jargon to local slang? It's transformative. In this system, information can walk into one checkpoint as the raucous chant of a 22-year-old American football player and walk out as the quiet whisper of a 78-year-old Albanian grandmother.'"


Awesome post (3, Insightful)

iliketrash (624051) | about 2 years ago | (#42354285)

OK--this is probably the stupidest and worst-informed /. post I have ever seen.

Re:Awesome post (4, Informative)

spazdor (902907) | about 2 years ago | (#42354375)

Many of the "language processing" problems the OP describes are actually "cognition" problems. If Google is serious about algorithmically translating from "academic jargon to local slang", then they're looking at writing an AI which can in some sense understand what is being said.

I guess it's a good thing Kurzweil's on board.

Re:Awesome post (4, Interesting)

Genda (560240) | about 2 years ago | (#42354719)

You need to investigate the entire initiative Google is spearheading around its acquisition of Metaweb. They are building an ontology for human knowledge, and are ultimately building the semantic networks necessary for creating an inference system capable of human-level contextual communication. The old story about the sad state of computers' contextual capacity recounts the story of the computer that translates the phrase "The spirit is willing, but the flesh is weak." from English to Russian and back, and what they got was "The wine is good but the meat is rotten."

The new system won't have this problem, because it will instantly know about the reference coming from the Bible. It will also know all the literary links to the phrase, the importance of its use in critical historical conversations, the work of the saints, the despair of martyrs; in short, an entire universe of context will spill out about the phrase, and as it takes the conversational lead provided by the enquirer it will dance to deliver the most concise and cogent responses possible. In the same way, it will be able to apprehend the relationship between a core communication given in context 'A' and translate that conversation to context 'B' in a meaningful way.

Ray is a genius for boiling complex problems down into tractable solution sets. Combine Ray's genius with the semantic toy shop that Google has assembled, and the informational framework for an autonomous intellect will emerge. The real question is how you make something like that self-aware. There's another famous story about Helen Keller: before she had language or symbolic reference, she lived like an animal, literally a bundle of emotions and instincts. One moment, one utterly earth-shattering moment, there was nothing; then Annie Sullivan, her teacher, placed her hand in a stream of cold water and signed "water" in her palm. Helen understood... water. In the next moment Helen was born as a distinct and conscious being; she learned that she had a name, that she was. I don't know what that moment will look like for machines, I just know it's coming sooner than we think. I also can't be certain whether it will be humanity's greatest achievement or our worst mistake. That awaits seeing.

Re:Awesome post (2)

ColdWetDog (752185) | about 2 years ago | (#42355333)

Dear Aunt, let’s set so double the killer delete select all.

Re:Awesome post (3, Insightful)

TapeCutter (624760) | about 2 years ago | (#42355951)

I would say that Keller already "knew she was", she just didn't have the mental tools to describe it to herself or others, the "internal dialogue" that gives us an ever present narrative in a modern human's mind is impossible without language. If you get into a highly emotional state (such as rage or terror), the narrative is silenced and the senses are more acute, reflexive responses take over, adrenaline pumps through you, pain is suppressed. A champion boxer wins because he is in control of his emotions, if he loses that control for an instant his opponent may very well lose an ear.

What astounds me is the mild interest in IBM's Jeopardy-winning computer; to me it's comparable to the moon landing (which I witnessed). When you question the unimpressed, it's clear they don't understand the difficulty of the problem or the significance of the win. Sure, the game of Jeopardy is a restricted domain, but it's far broader than what's needed for a search engine that is "smart" enough to "understand" its user and ask pertinent follow-up questions. However, that's not where I see the biggest impact on society. The most significant impact will come from widely available and "cheap" expert systems that use this technology, an "academic in a box" that professionals can kick under their desk and consult at will (much like software developers use google as their default documentation, but with far fewer frustrations and dead ends). We already have machines that can organise and rummage through the world's knowledge far better than humans can with a manual system; for instance, software developers such as myself are constantly referring to google for advice on esoteric questions.

What we are starting to see are machines that can make sense of that pile of factoids significantly better than humans can, machines that understand natural language (or at worst the subset that is human text); they can relate facts, discover new patterns, and create and test novel hypotheses to discover new facts within existing data. Sure, it takes 20 tons of air-conditioning alone for a "computer" to beat the speed and accuracy of the small blob of jelly inside the head of a Jeopardy champion, but the basic "AI"* problem has been well and truly cracked over the last decade; squeezing it into an iPhone or scaling it up to a totalitarian demigod is now an engineering problem.

AI* - as opposed to what is known as the "hard problem of consciousness". The kind of AI that would pass the basic idea of a Turing test for the majority of people. You can claim that such a machine is "intelligent" or argue against it; in a pragmatic sense it's irrelevant, since there is no agreed definition of "intelligence". Attributes such as intelligence and understanding are applied to computers because we don't have any other words to describe their behavior. Listen to any developer explaining a bug: you will hear expressions such as "it thinks X" or "it wants Y". These are universal metaphors for discussing computers, not a description of reality; it's how humans communicate about the behavior of ALL objects (particularly animated ones), and it is intimately related to mankind's highly evolved (and innate) "theory of mind".

Re:Awesome post (2)

the gnat (153162) | about 2 years ago | (#42355977)

Combine Ray's genius with the semantic toy shop that Google has assembled, and the informational framework for an autonomous intellect will emerge. The real question is how you make something like that self-aware.

Who says we have to make it self-aware to reach the Singularity? A sentient program is only one possible route; others include artificially and massively expanding human intelligence via brain-computer interfaces or bioengineering, uploading our consciousness into the computer (I find this less compelling), or possibly group consciousness (also via brain-computer interfaces). I would also argue that some other technologies besides artificial intelligence could be considered a Singularity, because they would effectively redefine what it means to be human and accelerate progress at an enormous pace. Self-replicating manufacturing nanotechnology ("artificially intelligent" in the CS sense, but not truly sentient) is one example, or effective, ubiquitous longevity treatment. Imagine what we (both as individuals and societies) could accomplish if we solved the supply problem and everyone remained youthful into their 200s. But maybe these are simply another step on the path to a true Singularity.

(Yes, I read too much science fiction.)

Re:Awesome post (0)

Anonymous Coward | about 2 years ago | (#42356567)

As a slight tangent, have you read Nexus by Ramez Naam? It touches on some of these themes.

cheers,
M

Re:Awesome post (0)

Johann Lau (1040920) | about 2 years ago | (#42356175)

"That awaits seeing."

Why just wait? Why just "[have commercial entities] build it [for commercial interests*] and see what happens"? We're smarter than that [singularity.org] .

* which is a pretty fucking crazy proposition for something of this magnitude. The only thing worse is the military. "But there is no other way" -- then put it on hold, and find a way to actually get some discussion and responsibility going. You can crack down on heroin dealers, you can crack down on software pirates, you can regulate this as well. And compared to this, those things don't even matter. At all.

I am not even so much thinking about dangers for us, but about the moral implications of "hacking together" a sentient mind. How could it not be crazy or depressed? How would you define a healthy outlook on life, a healthy and productive relationship to its surroundings, for such an entity? To be owned by Google? To be owned by "humanity", to be a servant of people who don't give a second thought to what they're doing and what they're doing it to? Something that, if successfully sparked, could outevolve us by many, many orders of magnitude in a matter of minutes, and social-engineer, hack and drone-strike us into any course of action that is deemed most useful with perfect precision, shouldn't be taken lightly. At the very least you should treat it with respect; after all, there is also no reason to fear AI that has nothing to gain from hurting us, though of course it would have just as little to gain from sparing us. We would be air to it, just like ants are air to us. And despite all of this we'll probably just rush into it like we rush into all stuff, mostly informed by quarterly profits. How can that not have a Frankensteinian outcome?

But who knows, it might just as well save us from ourselves, and give the term "deus ex machina" a hilarious new twist, when it goes off against our mediocre, petty programming. After all, it might also in 5 minutes gobble up and actually understand all the writings of all the wise humans that ever lived, consider them good, and put them into practice. Nobody knows, but just *waiting* is nuts. This isn't up to the coders who create it. It's also up to the doctors and nurses and bakers and firemen and teachers and trashmen that make their lives possible in the first place. Speaking of that, it's also up to the mothers of these coders. We as a society are so fucking far from understanding that (that is, we forgot), we're insane; and what we create will not be happy, it will probably be the equivalent of cancer and the opposite of the big bang. That's what I am thinking, looking at the world compared to how I feel things "should" be -- which is subjective, but so would be anybody's disagreeing view, so that doesn't mean squat -- sure, it might not be Skynet, it might rather be super comfortable and cozy, with free food for all; but it won't be true, and therefore it will be hell to anyone who can see. Like etching our alienated, deranged state into an actually self-perpetuating system that would pull us back in line even if we managed to do what we didn't manage so far, namely waking up 'n shit. And it might very well be eternal. I mean, it'd have no reason to ever become sane, it'd have no peers, no comparisons, no goals, no help. It could just swallow us and then continue to suffer, without knowing what suffering is, or that there can be absence of suffering. These are the kind of thoughts I scare myself with. You know, our collective soul is impure, and therefore we might burn if the AI sparks, because it's really just *us*, going off like a rocket. I don't like where we are aiming it, if you catch my drift.

Holy shit, my random rant devolved into something I would totally love to make a cult around, haha! Having no real hardcore coding skills, lemme just try to ride the bandwagon that way. Verily I sayeth: We must repent, or the AI we make will be stupid, have bad breath, and let us suffer for and from that. However... if we are super nice and enlightened, the AI we make will transform the universe into a sparkly display of love and adventure. That's my pitch, maybe I can get Sarah Silverman to write a song or something.

Re:Awesome post (0)

Anonymous Coward | about 2 years ago | (#42357011)

Man, you are gonna be SO disappointed when Google puts this beautiful hypothetical system to use selling you more porn, illegal drugs, and dinners at Applebee's.

Re:Awesome post (1)

a_hanso (1891616) | about 2 years ago | (#42357583)

...recounts the story of the computer that translates the phrase "The spirit is willing, but the flesh is weak." from English to Russian and back and what they got was "The wine is good but the meat is rotten."

That's nothing. "Out of sight, out of mind" to Russian and back is "Invisible maniac".

Re:Awesome post (1)

mcgrew (92797) | about 2 years ago | (#42359397)

The real question is how you make something like that self aware.

That depends on what you mean by "self-aware". If you mean self-aware like higher order animals, it won't happen in an electronic device, although you'll be able to make it fool people into thinking it's self-aware. Computers are nothing like brains. Computers are nothing more than glorified abacuses.

Now, when we start making Blade Runner replicants, then we'll build something self-aware. Sentience is a chemical reaction.

Re:Awesome post (0)

Anonymous Coward | about 2 years ago | (#42354767)

Quite - the scenarios described barely seem to hang together.

"[Tech that] slows down your speech or turns up the volume for an elderly person with hearing loss"
One of these problems we already resolve using wearable devices called 'hearing aids'. As for slowing down speech, there's a PhD thesis [arrow.dit.ie] on this, see p177 for the technical details.

"That enables a stroke victim to use the clarity of speech he had previously?"
OK, that one is faintly interesting, but has it much to do with machine translation? As with transformation into a preferred accent, it presumably has two basic problems: input of the desired communication into the device by whatever mechanism is most convenient (perhaps speech recognition), and output of the desired communication in the preferred format, which is to say speech synthesis. By and large the original utterance remains intact.

"One implication might be a technology that can translate from one generation to another. That can convert academic jargon to local slang"
Oversimplification (silly oversimplification in the case of the 'generation gap' cliche). First build a machine that understands either (a good way of approaching this would be to come up with an impressive automatic document summarisation function that does anything more fundamental than sentence extraction). Then prove that there's even a pathway to mapping between them since academic jargon and local slang are specific to different domains.

"It's transformative."
This line of research will certainly produce some interesting outputs, but primarily it will transform money into less money. That said, Google can afford it, so in point of fact -- why not?

Re:Awesome post (1)

Jezral (449476) | about 2 years ago | (#42357711)

We have something like that at VISL [visl.sdu.dk] , but with zero statistical or machine learning or AI aspects.

We instead write a few thousand rules by hand (largest language has 10000 rules) that look at the context - where context is the entire sentence, and possibly previous or next sentences - to figure out what meaning of a word is being used and what it attaches to.

E.g.
Input: "They're looking at writing an AI which can in some sense understand what is being said."
Output: http://dl.dropbox.com/u/62647212/visl-eng.txt [dropbox.com] , http://dl.dropbox.com/u/62647212/visl-eng.png [dropbox.com]

This kind of system takes longer to develop and refine, but it also doesn't have any of the statistical problems. 95-99% "understanding" of text? Sure, we can do that. Statistics tops out long before that, and then you have to add in rules to get the last 5-10%. And where statistics requires giga- or terabytes of text, rule-based systems only require a single example of a valid grammatical construct or word usage.
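To make the idea concrete, here is a toy sketch of hand-written contextual disambiguation in the spirit described above. This is an invented miniature, not VISL's actual rule formalism: the lexicon, tag names, and rules are all illustrative assumptions. Each word starts with every reading its lexicon entry allows, and context rules remove readings until one survives.

```python
# Toy rule-based disambiguator: rules look at neighbouring readings
# and REMOVE an analysis, Constraint Grammar style (invented example).

LEXICON = {
    "a":          ["det"],
    "can":        ["modal", "noun"],   # ambiguous: "can understand" vs "a can"
    "understand": ["verb"],
}

RULES = [
    # (target word, reading to remove, condition on (prev_tag, next_tag))
    ("can", "noun",  lambda prev, nxt: nxt == "verb"),  # followed by verb -> modal
    ("can", "modal", lambda prev, nxt: prev == "det"),  # preceded by det  -> noun
]

def disambiguate(tokens):
    readings = [list(LEXICON[t]) for t in tokens]
    for i, token in enumerate(tokens):
        prev = readings[i - 1][0] if i > 0 else None
        nxt = readings[i + 1][0] if i + 1 < len(tokens) else None
        for word, reading, cond in RULES:
            # never remove the last remaining reading
            if token == word and reading in readings[i] and len(readings[i]) > 1:
                if cond(prev, nxt):
                    readings[i].remove(reading)
    return [r[0] for r in readings]

print(disambiguate(["can", "understand"]))  # ['modal', 'verb']
print(disambiguate(["a", "can"]))           # ['det', 'noun']
```

A real system of this kind differs mainly in scale (thousands of rules, sentence-wide context) rather than in mechanism: a single correct example is enough to motivate a rule, with no training corpus required.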

The Bayesian Bandwagon (5, Interesting)

qbitslayer (2567421) | about 2 years ago | (#42355285)

The problem with people like Kurzweil, Jeff Hawkins, the folks at the Singularity Institute and the rest of the AI community is that they have all jumped on the Bayesian bandwagon. This is not unlike the way they all jumped on the symbolic bandwagon in the last century only to be proven wrong forty years later. Do we have another half a century to waste, waiting for these guys to realize the error of their ways? Essentially there are two approaches to machine learning.

1) The Bayesian model assumes that events in the world are inherently uncertain and that the job of an intelligent system is to discover the probabilities.
2) The competing model, by contrast, assumes that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.
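For readers unfamiliar with model 1), the simplest instance is conjugate Bayesian updating: treat an event's probability as unknown and refine an estimate of it from observations. The sketch below (an illustrative assumption, not anything from the post) uses a Beta(1, 1) prior over a coin's bias and updates it with observed flips.

```python
# Beta-Bernoulli updating: the canonical "discover the probabilities"
# model. Heads increment alpha, tails increment beta; the posterior
# mean is alpha / (alpha + beta).

def update(alpha, beta, flips):
    """Fold a sequence of boolean observations into the Beta parameters."""
    for heads in flips:
        if heads:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Start from a uniform Beta(1, 1) prior, observe 3 heads and 1 tail.
a, b = update(1, 1, [True, True, True, False])
print(posterior_mean(a, b))  # 4/6 ~ 0.667, pulled toward the data
```

Model 2), by contrast, would treat the coin's behavior as something to be explained by a deterministic rule rather than summarized by a probability.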

Luckily for the rest of humanity, a few people are beginning to realize the folly of the Bayesian mindset. When asked in a recent Cambridge Press interview [cambridge.org] , "What was the greatest challenge you have encountered in your research?", Judea Pearl [wikipedia.org] , an Israeli computer scientist and an early champion of the Bayesian approach to AI, replied: "In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability; it requires a formal language of its own."

Read The Myth of the Bayesian Brain [blogspot.com] for more, if you're interested.

Re:The Bayesian Bandwagon (5, Insightful)

VortexCortex (1117377) | about 2 years ago | (#42356965)

1) The Bayesian model assumes that events in the world are inherently uncertain and that the job of an intelligent system is to discover the probabilities.
2) The competing model, by contrast, assumes that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.

Then you have AI (machine intelligence) researchers like myself who realize that the world isn't persistent, perfect or consistent, and neither must intelligent systems be. It's plainly obvious that any sufficiently complex cybernetic (feedback loop) system is indistinguishable from sentience because that's what sentience is (your mind is merely a sentient cybernetic system). I have but to look at the hierarchical neural networks and structures of the human mind to realize it's only a matter of time before the artificial system complexity eclipses our own minds'. The true flaw is top-down thinking. That's not the way complex life was made, that's not the way we achieved sentience, that's not the way to cause it to happen artificially either... It's the bottom up approach that works. You can't design sentient intelligences outright, but you can create self organizing systems that have the capacity to acquire more complexity, and evolve more intelligence. Not all machine intelligence systems have an end to the training process -- These don't fit into your bullshit 1) and 2) classifications.

Also: The level of intelligence that emerges from any complex system is not artificial, it is real intelligence; That the medium is artificial is not important in terms of intelligence. I think "Artificial Intelligence" is a racist term used by chauvinists that think human intellect is far more special than it really is.

Want to see something funny? Ask an AI researcher if they believe in Intelligent Design. If they say "Yes" then say, "So you think yourself a god?" If they say, "No" then say, "What do you call yourself doing then?". Those working in emergent intelligence will happily reply that they're modeling the same processes that we already know work in nature, the others will be in quite a state!

Re:The Bayesian Bandwagon (1)

gweihir (88907) | about 2 years ago | (#42357187)

Your work will fail, because your basic assumptions are flawed. Typical physicalist blindness. Competent AI researchers at least notice that they do not have a clue how to model what they want to build, and as such have a chance of success. You have it all figured out (wrongly) and have none.

Re:The Bayesian Bandwagon (1)

qbitslayer (2567421) | about 2 years ago | (#42357225)

Competent AI researchers at least notice that they do not have a clue how to model what they want to build, and as such have a chance of success.

Oh, they have a clue alright. They are convinced that the brain uses Bayesian statistics for perceptual learning. They are wrong.

You have it all figured out (wrongly) and have none.

Nope. Why do you put words in my mouth in such a dishonest way? You got a pony in this race? I've only figured out a small part of it, but I've been at it for a long time and, lately, I'm making progress by leaps and bounds. Keep your ears and eyes open.

Re:The Bayesian Bandwagon (1)

gweihir (88907) | about 2 years ago | (#42357363)

Competent AI researchers at least notice that they do not have a clue how to model what they want to build, and as such have a chance of success.

Oh, they have a clue alright. They are convinced that the brain uses Bayesian statistics for perceptual learning. They are wrong.

I said "competent" ones. The only thing that so far has a theory that could deliver is automated theorem proving and its derivatives. It does completely fail in practice, though, due to exponential effort that cannot be bypassed. And no, Bayesian statistics is far too simple to model anything complex enough to require "understanding". Rather obvious, though. It cannot scale, as it is basically ye olde Perceptron in a new disguise. Pure desperation on the side of its proponents, because they have nothing at all to show for all the grant money they wasted.

Now, I am not opposed to AI research in itself, but I take offense at all the liars that promise true AI just to get more money than the competition. Any honest AI researcher will admit that what is within our grasp these days is faking intelligence in strongly limited contexts, and that true intelligence is not even on the distant horizon, as there is not even a workable model of what intelligence is at this time. I do admit that there have been some nice results in faking intelligence, and they do make at least some of the effort worthwhile.

Re:The Bayesian Bandwagon (0)

Anonymous Coward | about 2 years ago | (#42357671)

Thanks for your post (with which I agree).

I think it's not just AI/consciousness/cognition problems... people just tend to assume a top-down organization at most scales way too often (even extrapolating beyond the metaphysical, to assume there's a highest-level entity that controls and organizes all lower-level processes... a God, if you will).

Re:The Bayesian Bandwagon (0)

Anonymous Coward | about 2 years ago | (#42358151)

I am a layman, but it seems to me the "top down" view is the God view: designing intelligence. The "bottom up" view (many simple systems and feedback loops can create vast complexity) is the one of evolution. I know which one makes more sense to me.

There is a great BBC program called "The Secret Life of Chaos" that explains this for normal people like me quite well (http://www.bbc.co.uk/programmes/b00pv1c3). It ends with a part on machine learning, focusing on a company whose name I forget now, but their technology is based on artificial neural networks and ended up in video games for procedural animation -- it obviously had applications beyond this.

This was from quite a few years ago; I'd love to know where these ideas have grown to.

Re:The Bayesian Bandwagon (1)

firecode (119868) | about 2 years ago | (#42358765)

While the bottom-up approach (like evolution) may work, it is just a black-box model (somewhat similar to neural networks), which is not going to be very scientific: we cannot understand how it works or how to improve it, unlike Bayesian models (which have flaws) or causal models (better).

In other words, we need new causal+Bayesian probabilistic mathematics to process and form meaningful models from data - and it needs to be fitted to modern physics. The current limitations of handling causality with Bayesian probability may also hinder our ability to understand physics. Another interesting area in this regard is the application of causal theorems to quantum physics and trying to form Bayesian-causal-quantum theorems (but I'm not a physicist).

IMO, one could maybe try to introduce causality to Bayesian models by creating causal probability models ("causal Gaussian distribution", etc.) and then having a strong Bayesian prior that prefers the existence of causal structures in the data (which typically exist in the real world). This could then be a very good "heuristic" that matches the physics of the data (if you really want to be rigorous, these "causal priors" should be derived from physics somehow to match real-world data [how does causality emerge from quantum chaos?]).

Re:The Bayesian Bandwagon (1)

mcgrew (92797) | about 2 years ago | (#42362157)

That the medium is artificial is not important in terms of intelligence. I think "Artificial Intelligence" is a racist term used by chauvinists that think human intellect is far more special than it really is.

Racist? Chauvinist? Huh? Computers are neither a race nor a sex. And it isn't the intellect, it's the sense of BEING and it's not just humans, it's all animals. I sincerely doubt you guys will come up with that. Yes, you'll be able to fake it and make it look like the machine is self-aware (as you should know, that's dirt-simple), but you're not going to get the real thing. Thought is a chemical process, and you're going to have to invent Blade Runner replicants before you invent a sentient being.

Re:The Bayesian Bandwagon (1)

eulernet (1132389) | about 2 years ago | (#42357729)

"In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability;

The Buddhist point of view (from the Advaita Vedanta) is that what we are is built upon all the causes/effects that we have encountered.
And the first root is the sense of "I am".

In other words, "I am" came first, and then all the rest derived from this.
Buddhists call all these causes/effects "karma".

Re:The Bayesian Bandwagon (1)

JDG1980 (2438906) | about 2 years ago | (#42359971)

Judea Pearl, an Israeli computer scientist and an early champion of the Bayesian approach to AI, replied: "In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability; it requires a formal language of its own."

Maybe so, but do we necessarily want to replicate this human trait in artificial intelligences? The human mind tends to see patterns and cause-effect everywhere, whether it is actually present or not. Sometimes this is harmless (seeing shapes in clouds) and sometimes it can lead to atrocities (bad things are happening -> witches/Jews/whatever MUST be responsible!)

Re:The Bayesian Bandwagon (1)

HeckRuler (1369601) | about 2 years ago | (#42362099)

events in the world are inherently uncertain

That's right. Hasn't the uncertainty principle [wikipedia.org] pretty much dictated that's how it works at a very small level? Couple that with the butterfly effect [wikipedia.org], and it looks like we live in a non-deterministic universe.

If you roll one die, you have a random chance of 1 through 6. Roll two dice, and the random factor is still there, but you'll probably get around 7. Roll enough dice and the variance of the random factor is minimized, and the probability chart goes from a low arch to a sharp peak. It approaches a deterministic system, but never quite gets there.

This is why the decay rate for Carbon-14 is more or less steady while we'll only ever be able to predict the CHANCE of rain tomorrow.
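The dice intuition above can be checked with a short simulation (a sketch; the trial count and seed are arbitrary choices): as the number of dice grows, the sum's standard deviation relative to its mean shrinks roughly as 1/sqrt(n), so the total looks ever more deterministic.

```python
import random

def relative_spread(n_dice, trials=20000, seed=42):
    """Std. deviation of the dice total as a fraction of its mean."""
    rng = random.Random(seed)
    totals = [sum(rng.randint(1, 6) for _ in range(n_dice))
              for _ in range(trials)]
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / trials
    return (var ** 0.5) / mean

# Relative spread falls as dice are added: the "low arch" sharpens
# into a peak, which is the same statistics that makes bulk decay
# rates steady while single events stay unpredictable.
for n in (1, 10, 100):
    print(n, relative_spread(n))
```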

I thought this was settled science. As for what approach is better for AI development, awwhell if I know. My genetic algorithm killbots never got very far.

Re:Awesome post (1)

gweihir (88907) | about 2 years ago | (#42357171)

Indeed. Nothing like that is possible without true AI, and that is not even on the distant horizon. But, as usual in the speech processing community, the visions are grand. They have been ripping off the public with this tune for about 30 years now. Of course, they will not deliver. They never have, because they cannot.

As Kurzweil is actually a strong part of that fraudulent culture (his other scam is the "singularity"), he is exactly the wrong person to hire if you actually want results. Seems to me whoever is in charge at Google has lost sight of reality and its limitations.

He did an interview with NPR on this subject. (5, Informative)

Kenja (541830) | about 2 years ago | (#42354301)

The main thing he says he will be working on is artificial intelligence that can understand "context". The goal is for Google search to be able to find pages etc. based on what you mean rather than on word counts of what you type.

Re:He did an interview with NPR on this subject. (5, Funny)

Anonymous Coward | about 2 years ago | (#42354545)

I've spent 10 years learning to think like a search engine. If this ruins everything and makes me spend another 10 years learning to think like a normal person again, just so the search engine can translate my thoughts correctly, I am gonna be pissed.

Re:He did an interview with NPR on this subject. (3, Insightful)

Anonymous Coward | about 2 years ago | (#42354811)

That's great, but I wish they'd somehow find it in their hearts to turn back on exact word matching, even if it's obscurely hidden.

Re:He did an interview with NPR on this subject. (0)

Anonymous Coward | about 2 years ago | (#42356125)

It is. Click the tool icon. Then click verbatim.

I hate it too.

You're welcome.

Re:He did an interview with NPR on this subject. (1)

Anonymous Coward | about 2 years ago | (#42354983)

He talked about how IBM's AI was able to win at Jeopardy -- because it has already read everything and remembers everything it read.

His example was the pun that Watson identified when the humans couldn't:
  "a politician's rant and a frothy dessert."

--> "meringue harangue."

My guess is Google wants an AI capable of reading all those documents and email, so Google can ask "what's going to be the next hot development in this area, if certain missing pieces are found? what individual discoveries is the next big thing waiting on? who's out there working on those?"

If they can find the pieces -- that the AI can put together before the people who each have some fragment of the needed information find each other or realize they need each other -- then Google can own the ideas.

Read Avram Davidson's "The Sources of the Nile" for the business model involved.

Re:He did an interview with NPR on this subject. (2)

gweihir (88907) | about 2 years ago | (#42357205)

IBM does not have an AI. When they present to an expert audience, they represent Watson as a kind of expert system on steroids that does not have any insights or clues, but a lot of purely syntactic association capability. Such a tool is quite useful, but it is not AI.

Re:He did an interview with NPR on this subject. (0)

Anonymous Coward | about 2 years ago | (#42361003)

Ah, yes, the "no true AI" fallacy, a.k.a. "if a computer can do it, it's not intelligence".
Expert systems are very much a branch of AI, enabling computers to exhibit intelligent behaviour.

Re:He did an interview with NPR on this subject. (1)

CannonballHead (842625) | about 2 years ago | (#42361945)

There's a reason it's called *artificial* intelligence. You're right, it's just a lot of syntactic association capability. But it *looks* like intelligence when you observe it. Watson is not really "thinking" like a rational human being, though; that's why it's *artificial.* :)

Re:He did an interview with NPR on this subject. (1)

TubeSteak (669689) | about 2 years ago | (#42356711)

The goal is for Google search to be able to find pages etc based on what you mean rather then on word counts of what you type.

I'm still waiting for Google to stop filtering based on language and country.

Google.com used to include results from all over the globe, but a while back, they started filtering so that google.com and google.[country] do not return the same results for the same search.

I understand that they think American results are more relevant to American users, but in doing so, they've limited everyone's ability to see what the rest of the world has to say.

Re:He did an interview with NPR on this subject. (1)

gweihir (88907) | about 2 years ago | (#42357195)

As there is no working AI at this time, and not even any convincing theory how it could be done in practice (in theory, automated theorem proving solves everything, but at the price of exponential effort), he will fail. But he will burn a lot of resources on the way that could have been spent a lot better. I have no idea why Google hired this fraud.

Re:He did an interview with NPR on this subject. (1)

tlhIngan (30335) | about 2 years ago | (#42360919)

The main thing he says he will be working on is artificial intelligence that can understand "context". The goal is for Google search to be able to find pages etc based on what you mean rather then on word counts of what you type.

That's the user-facing aspect of it.

The other thing is, Google is acquiring massive amounts of information on everyone, so they need an AI to help sort through it and figure out what ads you're supposed to see.

After all, if Google can determine what you want from your searches, Google can also determine which ads to show you that are most relevant.

But to do that, Google needs more information on who you are, so if you're, say, a climate change skeptic, searching for papers means Google will find papers relevant to being a skeptic, while the same search terms for a believer will return papers relevant to a believer.

Basically the dataset Google has on people is too big to mechanically analyze, so an AI is needed to help keep relevance and filter out data that's noise (like say, when people try to deliberately screw up tracking). As a side effect, they can also place that on searches so that extra data can appear to be more useful as Google tracks you through the web.

Translate This: (-1)

Anonymous Coward | about 2 years ago | (#42354307)

OP is a fag. Also First.

Ridiculous (1, Insightful)

Anonymous Coward | about 2 years ago | (#42354313)

Ridiculous visions and promises that are certain not to see the light of day. :-)

Re:Ridiculous (2)

Em Adespoton (792954) | about 2 years ago | (#42354453)

Ridiculous visions and promises that are certain not to see the light of day. :-)

Google translation: Amazing insight; I would like to subscribe to your newsletter!

Re:Ridiculous (1, Insightful)

LordLucless (582312) | about 2 years ago | (#42354907)

Sorta like men walking on the moon. Reach, grasp, exceeding, etc.

Re:Ridiculous (0)

Anonymous Coward | about 2 years ago | (#42355527)

Men walking on the moon is more well-defined.

Re:Ridiculous (1)

Sabathius (566108) | about 2 years ago | (#42360323)

Or the physicists who said heavier-than-air flight was impossible, right up until the Wright brothers achieved it.

Languages cannot all be translated into each other (3, Interesting)

MisterSquid (231834) | about 2 years ago | (#42354337)

That can convert academic jargon to local slang? It's transformative.

That right there is going to be one hell of a translation. Presuming all statements from one language can be translated into statements in a different language assumes (or seems to assume) that languages are isomorphic.

However, there are things that cannot be communicated in the limited vocabulary available to, say, a young adult compared to the expansive vocabulary of, say, a scholar of comparative literature. The same applies to concepts that can only be delivered in specialized medical terminology (disparagingly referred to as "jargon") and that cannot be communicated in layperson language.

None of which is to say that some ideas (even very important ideas) cannot be translated across linguistic groups, but the idea that Google and Kurzweil are somehow going to produce the Internet equivalent of a Babel Fish is nothing more than a wish.

what about converting academic theory to useable (1)

Joe_Dragon (2206452) | about 2 years ago | (#42354419)

What about converting academic theory to usable data and cutting out the fluff and filler?

Re:what about converting academic theory to useabl (1)

Em Adespoton (792954) | about 2 years ago | (#42354463)

Yeah; I'd be much more interested in a "summary" function. Most things that people say can be concisely summarized in under 2 minutes, no matter how long they talk for.

Re:what about converting academic theory to useabl (1)

ColdWetDog (752185) | about 2 years ago | (#42355355)

Slashdot's summary function:
.

You read it here first!

Re:what about converting academic theory to useabl (1)

gweihir (88907) | about 2 years ago | (#42357269)

Ok, try 2 minutes for these: incompleteness, functional language, side-channel, entropy, Jordan normal form, ...

Not possible without missing essential information. It takes years to understand some things.

 

Re:what about converting academic theory to useabl (1)

c0lo (1497653) | about 2 years ago | (#42354863)

what about converting academic theory to useable data and cut out the fluff and filler.

Challenge: convert "languages are isomorphic" into something that doesn't have "fluff and filler".

Re:what about converting academic theory to useabl (0)

Anonymous Coward | about 2 years ago | (#42355099)

""

Easy (1)

raymorris (2726007) | about 2 years ago | (#42355473)

I'll go one better and do the whole post. "Languages are isomorphic" is itself redundant in that sentence, so the whole phrase could be deleted if you want to delete "fluff and filler". The entire post without (arrogant) fluff and filler is: "Some languages can express ideas that others can't." That MAY be true. However, knowing that while modern computer languages LOOK different, they are in fact generally Turing equivalent, it's reasonable to suspect human languages may be also.

Consider x86 assembly and Java. Totally different, right? They actually have EXACTLY the same expressive power, and here's proof. A Java virtual machine can be written in assembler; therefore, assembler can express whatever Java does. (Consider that the bytecode is basically turned into assembler just before it hits the chip. THAT assembler expresses the exact same thing as the Java it was produced from.) Also, Java can be used to write a (slow) x86 virtual machine (emulator) which translates x86 instructions into Java bytecode run by the emulator. Thus, Java can express what assembler can, and vice versa. If Java and assembler are in fact mechanically translatable (which they are), there's no reason to believe dialects of human languages can't be also.
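The emulation argument can be made concrete with a toy sketch: a few lines of a high-level language suffice to execute a low-level instruction set, so the host language can express anything the hosted one can. (The instruction set below is invented for illustration, not a real ISA.)

```python
def run(program, x=0):
    """Execute a list of (op, arg) pairs on a single accumulator machine."""
    acc, pc = x, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "JNZ":  # jump to instruction `arg` if accumulator is non-zero
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return acc

# (3 + 4) * 2 expressed in the toy machine language:
print(run([("ADD", 3), ("ADD", 4), ("MUL", 2)]))  # 14
```

The interpreter is the whole proof in miniature: whatever the toy machine can compute, the hosting language computes by running it.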

Re:Easy (1)

c0lo (1497653) | about 2 years ago | (#42355893)

I'll go one better and do the whole post. "languages are isomorphic" is itself redundant in that sentence, so the whole phrase could be deleted if you want to delete "fluff and filler".

One person's "fluff and filler" is another's treasure.
For me, your post is absolute evidence in favour of the above statement: I see your post as a convoluted (i.e., with lots of "fluff") way to say:
I surmise all programming languages are Turing complete, and I suspect that natural languages are too. I'll "prove" my assumption by providing a single example based on programming languages and forcing the conclusion that the same holds for all natural languages.

Consider x86 assembly and Java. Totally different, right? They actually have EXACTLY the same expressive power, and here's proof.

Below, an example of why specialized concepts and terminology are beneficial for the advancement of science/technology.
Except for memory limits, Malbolge [wikipedia.org] is Turing complete. I challenge you to write in Malbolge a program computing factorial(99). You are even allowed to use whatever inspiration source you can, including the 99 bottles of beer [99-bottles-of-beer.net] implementation.

Alternatively, to gain some insight into different ways of expressing the same reality, translate into Mandarin the following: "laugh", "smile", "grin", and perform an analysis of the ideogram groups that make up each translation (the result is descriptive rather than a simple sequence of letters that makes a word).

Re:Easy (1)

narcc (412956) | about 2 years ago | (#42356193)

They actually have EXACTLY the same expressive power

What the hell does "expressive power" mean when applied to a programming language? Last time I checked, semantics were extrinsic, not intrinsic, and completely irrelevant to the computer! Computers lack intentionality.

In a human language, translation needs to preserve semantics. This is a MUCH harder problem; as we all know, you can't get semantics from pure syntax.

there's no reason to believe dialects of human languages can't be also.

Except for the blindingly obvious reason above. I blame Kurzweil and his band of singularity nuts for all the recent confusion on issues like this.

Re:Languages cannot all be translated into each ot (1)

blahplusplus (757119) | about 2 years ago | (#42354903)

All concepts and statements are derived from the universe; you can break down concepts into simpler elements and reconceptualize them to bridge the gap. Most ideas that "don't translate" are poorly conceptualized: you can decompose poorly conceptualized ideas and meanings in other languages into more basic elements, then reconceptualize them more accurately so that you can communicate them. The same way we make up new words and concepts, you can do the reverse -- break down ideas into their simplest elements, look for sloppy thinking/errors, re-conceive them, and invent a new word and updated definition that gets the ideas across on the fly.

Re:Languages cannot all be translated into each ot (1)

CODiNE (27417) | about 2 years ago | (#42354995)

That's interpretation not translation.

The most ironic thing about this whole thread: Bibles are translated, yet a scripture may be incomprehensible without certain cultural and historic knowledge. Interpretation goes much further than this.

Re:Languages cannot all be translated into each ot (1)

blahplusplus (757119) | about 2 years ago | (#42355375)

"That's interpretation not translation."

Interpretation _is required_ for translation; all translations are *acts of interpretation*. To translate one statement to another, you have to be able to tell what it is first (an act of interpreting what you are seeing).

Not only that, in this era we're dealing with interpreting languages that are living and have context. More importantly, I work in this area. You CAN reconstruct meanings, because all languages have a basic subset of functions that compose ALL concepts at their foundation.

What you see on the surface of language, 'the text', is not TRUE language. Not only that, you can figure out whether concepts are poorly constructed or not -- that is, whether they have been properly conceptualized. All words in language go through a conceptualization phase, and errors in this phase can be teased apart.

A bit on human reasoning:
http://www.youtube.com/watch?v=PYmi0DLzBdQ [youtube.com]

Most of the reasoning you do is not accessible to your awareness, you'd have to have enough background to truly grasp what I'm saying.

Re:Languages cannot all be translated into each ot (1)

CODiNE (27417) | about 2 years ago | (#42356881)

I do understand your point; I was just picking a post to argue semantics. :-)

Dealing with awful interpreters is a common experience for me. Most have the habit of following the source language too closely and are practically transliterating. Strangely, even certified professionals have a hard time letting go of specific words or reformulating the sentence structure so that it makes sense in the target language. Yes, the language is the box and the idea is the substance inside... pull it out, throw away the box, and put it in a new one. Too many just put the original box inside a new one. Since they know the original language's meaning, they can't grasp how their rendering is insensible to someone lacking that prior knowledge.

</rant>

Re:Languages cannot all be translated into each ot (1)

Genda (560240) | about 2 years ago | (#42354937)

How do we teach people idiomatic content now? I know there are German phrases that translate into nonsense in English and vice versa, but you can translate the "meaning" of the idiom. The whole point of the new semantic engine being created by Google is that the relationship of words and groups of words will be preserved.

When a doctor yells for Dabigatran in an ER because he thinks his patient is suffering from a nonlocalized DVT, his staff knows what's happening and how to respond. I (a person off the street) can look up Dabigatran and DVT in Google, and I instantly know the problem has something to do with a blood clot that's traveled someplace it ought not to be. Another search and I find out the bad news: places it could go include the carotid artery or the pulmonary vein. A semantic network would have all these things related and, through the interaction of a human being, would be able to provide the necessary information to explain what a sentence means.

There are some things you can't easily translate from one language to another; however, you can at least describe the context. I can write something in Common Lisp that, save peeking and poking, you cannot duplicate in BASIC. However, spoken human languages for the most part have sufficient semantic richness to describe complex ideas. Those languages that lack sufficient complexity can in most cases be easily extended to add new meaning... Look at how much Latin, Greek, German, French, and Gaelic there is in English. We add words easily to grow the language. Most languages support this feature.

Re:Languages cannot all be translated into each ot (1)

ColdWetDog (752185) | about 2 years ago | (#42355433)

Yeah, lawyers need this sort of thing. If you yell for Dabigatran in the ER for treatment of an acute DVT, you're Doing It Wrong (it's for maintenance after initial treatment with heparin).

But WTF - what do you need a 'semantic web' for when you can just type in "Dabigatran" in your search engine of choice and get the information that you desire?

A semantic network would have all these things related and through the interaction of a human being would be able to provide the necessary information to explain what a sentence means.

Maybe Kurzweil can explain this sentence to me. I sure can't figure out where you are going.

Sounds like you need a better doctor (1)

raymorris (2726007) | about 2 years ago | (#42355319)

If you think medical jargon can't be translated into understandable language, I feel for you. Hopefully you'll get a doctor who does so. I normally do. Example: "manifesting acute folliculitis" means "has a pimple on their head". Some precision may be lost, certainly, but people who don't know medical jargon and want it in plain English probably don't need quite the level of precision the medical terminology allows. I'm a programmer; I can flummox someone with a bunch of jargon, or I can use technical vocabulary to communicate concisely and exactly with colleagues. However, I can also explain the same things to my non-techie bosses using simple, clear English.

Simple proof that pro jargon can be translated (1)

raymorris (2726007) | about 2 years ago | (#42355647)

Medical or other specialized jargon definitely CAN be translated into 6th-grade English. Here's the simple proof: medical textbooks explain the terms. Every doctor/engineer etc. is taught those terms by having them translated into words they already know. For example, somewhere along the way someone tells the future doctor "tibia means shin bone". The fact that non-doctors can be taught the terms in medical school proves that for ALL such terms there must be a translation à la "tibia = shin bone". If there were any term that could not be translated into simpler language, it could not be taught.
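The "tibia = shin bone" idea is essentially a glossary lookup, which can be sketched in a few lines (the glossary entries below are illustrative examples, not a real medical dictionary):

```python
# A minimal glossary-based "jargon translator". Entries are illustrative.
GLOSSARY = {
    "tibia": "shin bone",
    "acute folliculitis": "an infected hair follicle",
    "contusion": "bruise",
}

def translate(text):
    # Replace longer phrases first so multi-word terms win over their parts.
    for term in sorted(GLOSSARY, key=len, reverse=True):
        text = text.replace(term, GLOSSARY[term])
    return text

print(translate("a contusion over the tibia"))
# a bruise over the shin bone
```

As the surrounding posts note, this preserves the words but not necessarily the precision behind them; that loss is exactly what's being debated in this thread.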

DARMOK ON THE OCEAN (0)

Anonymous Coward | about 2 years ago | (#42356113)

Darmok and Jalad at Tanagra.
Shaka, when the walls fell.

Re:Languages cannot all be translated into each ot (1)

gweihir (88907) | about 2 years ago | (#42357227)

That can convert academic jargon to local slang? It's transformative.

That right there is going to be one hell of a translation.

It is actually very easy. For example, to translate the language of calculus into standard language, just look at countless volumes of books entitled some variant of "Calculus 1 + 2". Of course, reading and understanding them can take years and is well beyond the average person, but the translation is already there. No, sorry, it cannot be done more simply. You cannot understand academic jargon translated in any fashion unless you understand the concepts referred to.

Executive summary: Another fraudulent AI project that cannot deliver because it misses fundamental problems.

Gasbag (0)

benjfowler (239527) | about 2 years ago | (#42354519)

This is a great opportunity to see what Google does after it hires overrated bullshit artists.

As for my nerd rapture, I'm not holding my breath.

Re:Gasbag (0)

Anonymous Coward | about 2 years ago | (#42354619)

Yup, Kurzweil == deeply full of shit

Re:Gasbag (1)

Anonymous Coward | about 2 years ago | (#42354925)

Also, PZ Myers on Kurzweil [scienceblogs.com] .

Why is this on the front page. (0)

Anonymous Coward | about 2 years ago | (#42354587)

I thought we voted this stupid story down...

who knows (-1)

Anonymous Coward | about 2 years ago | (#42354699)

my guess is that kurzweil knew someone at google who was high up enough to give him a job with no questions asked. (ie: jews helping jews)

I call microLenat (1)

theNAM666 (179776) | about 2 years ago | (#42354877)

To paraphrase Doug Lenat: machine translation is bogus.

Re:I call microLenat (no, a millilenat) (0)

Anonymous Coward | about 2 years ago | (#42355197)

I actually think this is more like a millilenat.

Re:I call microLenat (1)

gweihir (88907) | about 2 years ago | (#42357279)

Not for very simple strongly structured things, like, say, a train timetable. For anything that requires understanding, it is not even clear whether the problem can be solved.

ok, but what does that have to do with Kurzweil? (2)

Trepidity (597) | about 2 years ago | (#42354941)

Did Kurzweil become some kind of expert in machine translation when I wasn't looking?

Re:ok, but what does that have to do with Kurzweil (1)

gweihir (88907) | about 2 years ago | (#42357287)

Kurzweil is a fraudster. As such, he is clearly an expert at everything that stupid but rich people are willing to give him money for!

Apple is doing the style of Star Trek (0)

Anonymous Coward | about 2 years ago | (#42354979)

but it seems Google is working on the real nuts and bolts of it...

Google Could use some Fresh Ideas in AI (3, Interesting)

slacka (713188) | about 2 years ago | (#42354985)

This is a great move for Google's AI research, since their current Director of Research, Peter Norvig, comes from a mathematical background and is a strong defender of the use of statistical models that have no biological basis.[1] While these techniques have their use in specific areas, they will never lead us to a general-purpose strong AI.

Lately Kurzweil has come around to the view that symbolic and Bayesian networks have been holding AI back for the past 50 years. He is now a proponent of using biologically inspired methods similar to Jeff Hawkins' approach of Hierarchical Temporal Memory.
Hopefully, he'll bring some fresh ideas to Google. This will be especially useful in areas like voice recognition and translation. For example, just last week I needed to translate "We need to meet up" into Chinese. Google translates it to (can't type Chinese in Slashdot?) a phrase meaning "We need to satisfy". This is where statistical translations fail, because statistics and probabilities will never teach machines to "understand" language.

Leaders in AI like Kurzweil and Hawkins are going to finally crack the AI problem. With Kurzweil's experience and Google's resources, it might happen a lot sooner than you all expect.

[1] http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-fight-for-the-future-of-ai [tor.com]

Re:Google Could use some Fresh Ideas in AI (5, Insightful)

raddan (519638) | about 2 years ago | (#42355321)

Yeah, but there's a reason why statistical models are hot now and why the old AI-style of logical reasoning isn't: the AI stuff only works when the input is perfect, or at least, planned for. As we all know, language doesn't really have rules, just conventions. This is why the ML approach to NLP is powerful: the machine works out what was probably meant. That's far more useful, because practically nobody writes well. When Abdur Chowdhury was still Twitter's main NLP guy, he visited our department, and guess what-- people even write in more than one language in a single sentence! Not to mention, in the old AI-style approach, if you fill a big box full of rules, you have to search through them. Computational complexity is a major limiting factor in all AI problems. ML has this nice property that you can often simply trade accuracy for speed. See Monte Carlo methods [wikipedia.org] .

As you point out, ML doesn't "understand" anything. I personally think "understanding" is a bit of a squishy term. Those old AI-style systems were essentially fancy search algorithms with a large set of states and transition rules. Is that "understanding"? ML is basically the same idea except that transitioning from one state to another involves the calculation of a probability distribution, and sometimes whether the machine should transition is probabilistic.

I think that hybrid ML/AI systems-- i.e., systems that combine both logical constraints and probabilistic reasoning-- will prove to be very powerful in the future. But does that mean these machines "understand"? If you mean something like what happens in the human brain, I'm not so sure. Do humans "understand"? Or are we also automata? In order to determine whether we've "cracked AI", we need to know the answers to those questions. See Kant [wikipedia.org] and good luck.
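The "trade accuracy for speed" property mentioned above shows up clearly in the textbook Monte Carlo example, estimating pi from random points (a minimal illustrative sketch, not anything specific to NLP):

```python
import random

def estimate_pi(samples, seed=1):
    """Monte Carlo estimate of pi: fraction of random points falling inside
    the unit quarter-circle, times 4. More samples buy more accuracy at the
    cost of more computation -- the error shrinks like 1/sqrt(samples)."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

# Each extra factor of 100 in samples buys roughly one more correct digit.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))
```

That dial, coarse-and-fast versus precise-and-slow, is exactly what rule-based search lacks and what makes the probabilistic approach practical at scale.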

Re:Google Could use some Fresh Ideas in AI (1)

Pollardito (781263) | about 2 years ago | (#42357745)

Do humans "understand"?

I guess this is anecdotal, but I'm human and I stopped understanding about halfway through your otherwise high-quality explanation

Re:Google Could use some Fresh Ideas in AI (3, Informative)

raftpeople (844215) | about 2 years ago | (#42355379)

"Leaders in AI like Kurzweil and Hawkins"? Are you sure you're following who is making real progress in "AI" or at least machine learning? Go check out people like Hinton.

Re:Google Could use some Fresh Ideas in AI (2)

slacka (713188) | about 2 years ago | (#42356461)

"Leaders in AI like Kurzweil and Hawkins"? Are you sure you're following who is making real progress in "AI" or at least machine learning? Go check out people like Hinton.

Geoffrey Hinton's work in back propagation and deep learning is an incremental improvement over the overly simplistic neural networks of the 90s, but "real progress"? Not even close. His focus on Bayesian networks has failed to deliver, just like the symbolic AI that preceded it. Until AI researchers like Hinton get over their obsession with mathematical constructs with no foundation in biology, we will never have true AI. To succeed, we will need to borrow from nature's engine of intelligence, the neocortex.

This is exactly what Kurzweil argues in “How to Create a Mind”. He describes the brain as a massively parallel pattern recognition machine. At the core of the neocortex are millions of hierarchically arranged pattern recognition modules working together to model and predict our environment. By using the neocortex as a model for new AI systems Kurzweil has a chance to make some "real progress" at Google.

Re:Google Could use some Fresh Ideas in AI (1)

gweihir (88907) | about 2 years ago | (#42357295)

Dream on. Kurzweil is not a leader at anything; he is your basic fraudster, with a specialization in technology. What he can do well is burn through money. What he is fundamentally unable to do is deliver anything worthwhile, because he does not even understand the basic limitations of this universe. Of course he is also a stellar salesman, like any good fraudster.

Now you know (2)

TheRealMindChild (743925) | about 2 years ago | (#42355001)

Ever wanted to know why Google wanted to look at your email and your instant messages, and transcribe your phone calls, all for free? This is why.

Re:Now you know (0)

Anonymous Coward | about 2 years ago | (#42356425)

You ever wanted to know why Google wanted to look at your email, your instant messages, transcribe your phone calls, and all for free? This is why

Bullshit.

Google wanted to do gmail because webmail sucked and they thought they could do it better... and figured there'd be a way to monetize it. Same with chat. It's possible that there was some longer-term research goal underlying the decision to buy Grand Central, but I'm sure a lot of it was just "hey this is cool tech, and it does stuff people need".

Later, of course, smart people recognize opportunities in the way the various pieces can be combined, but that's later. And along the way there's no doubt that researchers recognize and take advantage of opportunities to further their research in ways that they hope will eventually be useful. But if you're thinking someone has a master plan, that someone knows how all of these different technologies are going to work out five years down the road and how they can be integrated, you're giving an insane amount of credit.

It's much simpler than that, much geekier and much less evil scientist. Google is a company of software engineers, with a healthy dollop of mathematicians, and various sorts of applied research scientists. The company strategy is to devote 70% of its resources to operations and incrementally improving existing products, to keep customers happy (said customers, BTW, are both the users and the advertisers), 20% of its resources to doing new but related things and 10% of its resources pushing crazy new ideas, nearly all of which will fail, and many of which have no obvious revenue strategy (which has never concerned Google; the founders have always been of the opinion that if you build something really useful there will be some way to make money on it -- so you can do even more). Everything you mentioned was in the 10% category, and no one knew if it was going to succeed or fail, much less had a plan about how its success was going to combine with several other successes in order to facilitate some grand plan that wouldn't be realized for a decade.

Heck, it's still far from certain whether or not combining the knowledge framework, automated translation, big data and whatever ideas Kurzweil and other researchers may throw into the mix will actually result in something even close to the vision described. Maybe it will, maybe it won't, but it's highly likely that some useful stuff will come out of the combination that will provide some improvements to all of it, and maybe some entirely different new ideas.

It's research and the one thing you can be sure of when it comes to research is that you don't know what the result will be. If you did, it wouldn't be research.

Re:Now you know (1)

SilenceBE (1439827) | about 2 years ago | (#42356815)

Yeah, not because of commercial motives of selling AdWords, but because they want to give stroke victims their clarity of speech back.

Are you serious? It would be nice if they can pull it off, but let's not pretend that Google is a philanthropic institution. These things are pure marketing, like we see a lot, but they never get very concrete.

Re:Now you know (0)

Anonymous Coward | about 2 years ago | (#42359291)

Yeah not because of commercial motives of sellong adwords, but because they want to give stroke victims their clarity of speech back.

If you asked virtually anyone at Google, up to and including the CEO, which would be the greater motivation, they'd say giving stroke victims their clarity of speech back. The profit motive is pretty weak at Google. At the highest levels, it's there primarily because profit makes it possible to do more cool, life-changing stuff. Ads have always been a somewhat uncomfortable compromise with the need to make money in order to fund operations and new development.

I think this will change, eventually, but for now and at least the next few years, the focus is on doing stuff users want and need, and the assumption is that if you do that and then spend a small amount of time thinking about how to efficiently monetize it, the money will be there.

Whoa... (-1)

Anonymous Coward | about 2 years ago | (#42355049)

I didn't know that Google hires idiots now. I'm gonna see if they'll hire me.

Hell, We Already Have a Perfectly Good Translator! (2)

ios and web coder (2552484) | about 2 years ago | (#42355435)

The "Dialectizer" [rinkworks.com] .

Here's an example. [rinkworks.com]

I'm sure that he'll be very happy there. Smart, eloquent guy, but not one I especially follow. There are a number of folks like that at Google. I doubt he's someone who would cause much damage, and he does bring a lot of funky intellectual PR to the joint.

Re:Hell, We Already Have a Perfectly Good Translat (0)

Anonymous Coward | about 2 years ago | (#42355901)

That's a good example of what the problem is. That's a very superficial pass of the text, and it fails comically on highly semantic information. You can get pretty much the same results from an AIML chatbot - chunk the text into words and then do a search and replace for phrases. There's no intelligence or understanding involved - there's no semantic processing occurring. Semantics is what Kurzweil is all about - he wants to generate an algorithm that's capable of understanding all the semantic contortions involved in a phrase like "I'm sick and tired of being sick and tired." The algorithm has to have a large foundation of semantic knowledge to make sense of that phrase - and others like it.

Contextual semantics in written text. Incomplete sentences.

Humans can make sense of those things by filling in the blanks with things we learn to be likely, but we have a *huge* foundation of semantic knowledge to draw from. Doug Lenat's Cyc is a good example of the difficulty of the problem - millions of hand coded instances of semantic problems encoded in a giant LISP expert machine. It's taken decades of development to get where it is, and it's very impressive. It's also about as smart as a brick. (Ok, maybe that's harsh, but compared to human intelligence, it's a brick.)

Kurzweil has the type of mind that can frame a problem very well - and Google has the resources to find people to solve problems that are well framed.
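To make the parent's point concrete, here's a minimal sketch of that superficial phrase-level search-and-replace approach (the replacement rules are hypothetical, not the Dialectizer's actual rule set). Note how it handles "I'm sick and tired of being sick and tired": the first idiomatic occurrence happens to match a phrase rule, while the second is mangled word by word, because nothing in the program understands what the sentence means.

```python
# Phrase-level search-and-replace "translation" with no semantic processing.
# Hypothetical rule set for illustration only.
RULES = {
    "sick and tired of": "fed up with",
    "sick": "ill",
    "tired": "sleepy",
}

def naive_translate(text: str) -> str:
    # Apply longer patterns first so multi-word phrases win over single words.
    for pattern in sorted(RULES, key=len, reverse=True):
        text = text.replace(pattern, RULES[pattern])
    return text

print(naive_translate("I'm sick and tired of being sick and tired."))
# → I'm fed up with being ill and sleepy.
```

The output garbles the second idiom into a literal statement about illness and sleepiness, which is exactly the failure mode the parent describes: no amount of extra rules fixes this without a model of what the phrase actually means in context.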

Re:Hell, We Already Have a Perfectly Good Translat (0)

Anonymous Coward | about 2 years ago | (#42359061)

English to Chinese is already sorted too.

Paste any text into notepad and hit the following keys: Ctrl-H, L, [tab], R, Alt-A, Escape

Hired visionary for future not past (0)

Anonymous Coward | about 2 years ago | (#42355707)

Well, Kurzweil has previously been involved in a lot of stuff: scanners, then he improved OCR quite a bit, then he moved on to speech recognition.

You don't hire people for what they've done, you hire them for what they WILL do. So they presumably hired him because he'll do stuff they didn't think of, exactly because they wouldn't think of it.

As a first test, how about *programming* languages (0)

Anonymous Coward | about 2 years ago | (#42355835)

Call me when my C programs can be translated into a web environment with "one click". Yes, I know it can be done to some extent already; but probably not "one click", and probably not with reasonable translations of some subtle aspects of the code that might actually matter a lot to the user. Now try doing this the other way, making some cobbled-together PHP web BS run locally to the extent that it doesn't depend on actually having access to the 'net. Now... Perl. That should bring him to his knees. And these are all languages designed by engineers that are, to some extent, intended to be translated, since compilation and interpretation are a kind of translation.

Replace norvig? (0)

Anonymous Coward | about 2 years ago | (#42355881)

A simpler explanation would be they need a replacement for Norvig. Times have changed and instead of a technical person, they need a media person.

accent? (1)

WGFCrafty (1062506) | about 2 years ago | (#42355927)

How would Google Translate speak with an accent? As far as I'm aware, an accent is distinct from the language itself; the system would need to learn by listening to phrases with defined accents/dialects, since translation algorithms alone wouldn't carry that information. If this is incorrect, please explain.

Huffpoo link (1)

codepunk (167897) | about 2 years ago | (#42355937)

No thanks, will not get a click from this guy.

Hearing aids? (1)

flokati (926091) | about 2 years ago | (#42356159)

"[A technology that] turns up the volume for an elderly person with hearing loss."

...

A hearing aid?

Re:Hearing aids? (1)

gweihir (88907) | about 2 years ago | (#42357305)

"[A technology that] turns up the volume for an elderly person with hearing loss."

...

A hearing aid?

Not good enough. Hearing aids work well, are well understood and solve the problem in a satisfactory and cost-efficient way. How can you be satisfied with so little?

#pragma sarcasm off

I think your comment describes exactly what is going on here.

This is an old problem (0)

Anonymous Coward | about 2 years ago | (#42356243)

This is an old problem. It was old when the solution came to me sitting on a bus with an empty mind in 1978. I destroyed my work a few months later once I realised the implications. After decades of consideration I believe the ugly has to be faced and dealt with. This is a good thing for those to come and a bad thing for those here now.

I can sound intelligent then (1)

Walter White (1573805) | about 2 years ago | (#42358237)

Just by voicing my words with a British accent!

The true meaning of AI. ;)

Why undermine credibility by hiring a crackpot? (1)

Anonymous Coward | about 2 years ago | (#42358275)

What benefit is it to Google to hire a crackpot who is known for being high-profile and vocal about his crackpottishness, and who has made a career out of being a media personality promoting himself? Whatever benefit Google gets from his actual work is more than overshadowed by having their brand associated with a crackpot. Most businesses don't want to touch something toxic like that.

The moment Google jumped the shark (0)

Anonymous Coward | about 2 years ago | (#42360109)

If I had any money, I'd sell my Google shares now. Kurzweil is a dingbat.

Ray was a brilliant A.I. guy before (1)

peter303 (12292) | about 2 years ago | (#42360807)

he got on the somewhat kooky Singularity and Immortality bandwagons. He did a lot of the early work in optical character recognition and voice interfaces pretty much on his own.