
Automatic Translation Without Dictionaries

Soulskill posted about a year ago | from the baby-steps-to-the-universal-translator dept.

Technology 115

New submitter physicsphairy writes "Tomas Mikolov and others at Google have developed a simple means of translating between languages using a large corpus of sample texts. Rather than being defined by humans, words are characterized based on their relation to other words. For example, in any language, a word like 'cat' will have a particular relationship to words like 'small,' 'furry,' 'pet,' etc. The set of relationships of words in a language can be described as a vector space, and words from one language can be translated into words in another language by identifying the mapping between their two vector spaces. The technique works even for very dissimilar languages, and is presently being used to refine and identify mistakes in existing translation dictionaries."
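For the curious, the mapping the summary describes can be realized as a simple linear transform between the two embedding spaces, fitted on a small seed set of known word pairs and then applied to everything else. A minimal sketch, assuming pre-trained monolingual word vectors; all dimensions and data below are made up for illustration:

import numpy as np

# Sketch of the vector-space mapping idea: given embeddings for a seed set
# of word pairs (source vector x_i, target vector z_i), fit a matrix W
# minimizing sum_i ||x_i W - z_i||^2, then translate a new word by mapping
# its vector into the target space and taking the nearest neighbour there.

rng = np.random.default_rng(0)
dim_src, dim_tgt, n_pairs = 50, 40, 40

X = rng.standard_normal((n_pairs, dim_src))  # source-language word vectors
Z = rng.standard_normal((n_pairs, dim_tgt))  # their translations' vectors

W, *_ = np.linalg.lstsq(X, Z, rcond=None)    # least-squares fit of X @ W ~ Z

def translate(x_vec, target_vectors):
    """Return the index of the target word nearest to the mapped vector."""
    mapped = x_vec @ W
    sims = target_vectors @ mapped / (
        np.linalg.norm(target_vectors, axis=1) * np.linalg.norm(mapped))
    return int(np.argmax(sims))

print(translate(X[0], Z))  # prints 0: the mapped vector lands on its pair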


115 comments


My hovercraft is full of eels! (5, Funny)

Anonymous Coward | about a year ago | (#44981553)

My nipples explode with delight!

Re:My hovercraft is full of eels! (1)

Brad1138 (590148) | about a year ago | (#44981927)

LOL, I was just watching my Monty Python DVDs last week and saw that episode. Very Funny.

Re:My hovercraft is full of eels! (1)

Alsee (515537) | about a year ago | (#44982791)

This method is mé complè thoroughly cromulent! I always use Google Translate to exécuter all my messages through Slashdot français à then English. You can not tell me mê at all.


And what's the algorithm complexity? (1)

d33tah (2722297) | about a year ago | (#44981563)

Well, that sounds quite cool, but it also makes me wonder how the algorithm tells wrong associations from good ones. These things can easily go up to n^2 complexity.

Re:And what's the algorithm complexity? (1)

d33tah (2722297) | about a year ago | (#44981569)

(I meant O(n^2) memory complexity.)
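For what it's worth, the quadratic worry usually comes from naive co-occurrence counting: a vocabulary of V words implies a V x V table. A toy editorial sketch (not from the paper, which learns fixed-width vectors of dimension d, making memory O(V*d) rather than O(V^2)):

from collections import Counter

# Naive co-occurrence counting: a dense table over a vocabulary of V words
# needs V*V cells, hence the O(n^2) memory worry. A sparse counter stores
# only the pairs actually observed.

def cooccurrence(tokens, window=2):
    counts = Counter()
    for i, w in enumerate(tokens):
        for c in tokens[i + 1 : i + 1 + window]:
            counts[(w, c)] += 1
    return counts

print(cooccurrence("the cat sat on the mat".split()))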

Re:And what's the algorithm complexity? (-1, Troll)

AlphaWoIf_HK (3042365) | about a year ago | (#44981659)

By my cock, your ass is now a cum faucet!

Re: And what's the algorithm complexity? (0)

Anonymous Coward | about a year ago | (#44981831)

You win the internets

Re:And what's the algorithm complexity? (0)

Anonymous Coward | about a year ago | (#44981699)

Nested loops are always a great idea. Puts you on the dark side of the O chart. Above linear space/time.

You act so big, but tell me this: since you're an awesome algorithm developer, can you tell me for sure you never used NESTED loops? Because once you nest a loop you are in above-linear time/space territory. Which is BAD.

Linearithmic (1)

tepples (727027) | about a year ago | (#44982431)

But there's a space between linear and quadratic called linearithmic, or O(n log n). Merge sort uses nested loops and lies in this space.
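To make that concrete, here is a bottom-up merge sort: the loops are nested, but the outer loop runs only O(log n) times while each full pass does O(n) work, so the total is O(n log n), not O(n^2). An editorial illustration of the parent's point:

# Bottom-up merge sort: nested loops, yet linearithmic overall.

def merge_sort(a):
    a = list(a)
    width = 1
    while width < len(a):                       # runs O(log n) times
        for lo in range(0, len(a), 2 * width):  # whole pass is O(n) work
            mid = min(lo + width, len(a))
            hi = min(lo + 2 * width, len(a))
            left, right = a[lo:mid], a[mid:hi]
            i = j = 0
            for k in range(lo, hi):             # merge two sorted runs
                if j >= len(right) or (i < len(left) and left[i] <= right[j]):
                    a[k] = left[i]; i += 1
                else:
                    a[k] = right[j]; j += 1
        width *= 2
    return a

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]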

Re:And what's the algorithm complexity? (4, Funny)

SuricouRaven (1897204) | about a year ago | (#44982187)

Statistical translation is always going to have issues like that, but it can perhaps reach the 'good enough' point to hold a conversation with.

I can easily see it getting confused by formal vs informal use. If it goes on association, eventually it's going to get 'lawyer' and 'extortionist' confused.

Re:And what's the algorithm complexity? (4, Funny)

Anonymous Coward | about a year ago | (#44982359)

I too get lawyer and extortionist confused.

Re: And what's the algorithm complexity? (0)

Anonymous Coward | about a year ago | (#44984011)

I'm pretty sure Google Translate has used this for a while. It used to make amusing mistakes of this sort, translating Bush into Blair for instance, or inserting "God save the Queen" into a translation of the Irish national anthem.

Re:And what's the algorithm complexity? (1)

FatdogHaiku (978357) | about a year ago | (#44982523)

Awl hour go rhythms spume pizza!

Pun + Her attitude arbitrary pleases me too. (1)

mynamestolen (2566945) | about a year ago | (#44981583)

Neither the article nor the PDF contains the word "pun", so we're still a little way off. But hopefully we'll get better than this attempt from Google Translate: = "She turned me off with her bossy manner." Google Translate gets the OPPOSITE meaning, saying "Her attitude arbitrary pleases me too."

Re:Pun + Her attitude arbitrary pleases me too. (2)

mynamestolen (2566945) | about a year ago | (#44981589)

hmmm?? slashdot doesn't easily accommodate unicode.

Re:Pun + Her attitude arbitrary pleases me too. (2)

Kjella (173770) | about a year ago | (#44982405)

Welcome to /. where we still party like it's 1999. We'll have colonies on Mars before this site gets unicode support.

Re:Pun + Her attitude arbitrary pleases me too. (2)

tepples (727027) | about a year ago | (#44982469)

Slashdot has a fairly strict code point whitelist because there were problems in the past with trolls using directionality override characters to break Slashdot's layout and big blocks of foreign characters to make not-ASCII ASCII art.

make that the cat wise! (0)

Anonymous Coward | about a year ago | (#44981585)

And would that really work? Make that the cat wise!

Re:make that the cat wise! (0)

Anonymous Coward | about a year ago | (#44981613)

You must be Dutch, because that hit like a rod on a pig.

Re: make that the cat wise! (2, Funny)

Anonymous Coward | about a year ago | (#44981771)

Yes exactly. For sayings google translate works not so good now. But perhaps with this technique it will be to plums in the future.

Re: make that the cat wise! (1)

Anne Thwacks (531696) | about a year ago | (#44983955)

Since the TV subtitles have so many errors that they are impossible for humans to understand, I can't see this working in my lifetime. I suspect the cat is a weasel, rather than wise.

how would (1)

ozduo (2043408) | about a year ago | (#44981601)

'tight pussy" be translated?

Re:how would (5, Funny)

Anonymous Coward | about a year ago | (#44981669)

how would "tight pussy" be translated?

"Tight pussy" would be translated automatically, and without dictionaries. This is answered right in the headline.

Re:how would (4, Funny)

Jane Q. Public (1010737) | about a year ago | (#44981973)

"tight pussy" be translated?

"The cat has drunk a saucer of wine."

Re:how would (2)

SuricouRaven (1897204) | about a year ago | (#44982199)

Depends on source corpus. If they trained it using one of the usual formal collections of publications, it would only have built up associations based on the slang-free usage and so would translate it as 'Tight cat.' If they have instead fed it a broader selection, perhaps culled from a web spider, it may pick up the other meaning.

Re:how would (1)

narcc (412956) | about a year ago | (#44984133)

it may pick up the other meaning

In a manner of speaking. The actual meaning of the words is completely irrelevant.

Re:how would (1)

smallfries (601545) | about a year ago | (#44984391)

Dat cat was good to roll with, yo?

Sounds good, but we need a robust plug (-1)

caseih (160668) | about a year ago | (#44981609)

And despite the Anonymous Coward calling me an apple fanboy for saying it, the Micro USB connector just isn't good enough for cell phone use as a charger. At first I thought it was a great idea, but it gets full of lint (and in some conditions dust and dirt) easily and doesn't always make a good connection when the springs in the cords start to get weak.

Re:Sounds good, but we need a robust plug (4, Funny)

Finallyjoined!!! (1158431) | about a year ago | (#44981743)

it gets full of lint

What's it got in its pocketses?

Re:Sounds good, but we need a robust plug (2)

caseih (160668) | about a year ago | (#44981759)

Agg. firefox put me on the wrong story... bye bye karma

Re:Sounds good, but we need a robust plug (3, Insightful)

icebike (68054) | about a year ago | (#44981799)

Firefox had nothing to do with it.
It was PEBCAK, pure and simple.

Re:Sounds good, but we need a robust plug (2)

plover (150551) | about a year ago | (#44985605)

With this story being about automated translations getting it very wrong, there was a 95% chance people would have thought you were just making a joke about Apple doing language translations!

If you had posted a follow-up like "That's what Apple translate gets when I wrote 'Orchards of apple trees have fans to spray microscopic poison dust on all trees'", it would have been perfectly believable.

'Cat'?! (-1)

Anonymous Coward | about a year ago | (#44981629)

... a particular relationship to words like 'small,' 'furry,' 'pet,' etc.

My first thought was 'vagina'.

Darmok and Jalad at Tanagra (4, Interesting)

Vanders (110092) | about a year ago | (#44981675)

Finally, the team point out that since the technique makes few assumptions about the languages themselves, it can be used on argots that are entirely unrelated.

Once again, Star Trek is ahead of the curve.

Re:Darmok and Jalad at Tanagra (2)

Samantha Wright (1324923) | about a year ago | (#44982879)

Incidentally, real life caught up [lbgale.com] —fortunately there's not much worth translating with such a low-bandwidth form of communication.

Re:Darmok and Jalad at Tanagra (0)

Anonymous Coward | about a year ago | (#44984919)

More important is the need to grasp the idea of intra-language translation as compared with inter-language translation. That is, considering possible ways of reexplaining something in the same language prior to a translation of the type described here. Meanings could be thought of as limit points of nets of explanations to create a topologised space of concepts, and then rather than just linear transforms, one could consider those which are nice to such a topology. Not a good thing for the actual computation, but could be useful in the underlying theory.

Re:Darmok and Jalad at Tanagra (1)

epine (68316) | about a year ago | (#44983155)

Once again, Star Trek is ahead of the curve.

If you don't count noticing that the gear cogs of the Antikythera mechanism could be made ever smaller and smaller by ongoing advances by Swiss craftsmen 1600 years later, then Star Trek was indeed ahead of its time in guessing that a large phone might become a small phone with batteries (the Baghdad Battery [wikipedia.org] dates to roughly the same age as the Antikythera) and a radio (1887), carried by some exotic flux such as neutrinos (as named by Fermi in 1933).

Hmmm... (1)

freshlimesoda (2497490) | about a year ago | (#44981715)

Makes me think about hash functions and flash storage and data interoperability..... future..

Hofstadter? Isn't this AI, not translation? (5, Interesting)

Etcetera (14711) | about a year ago | (#44981733)

Reminds me a lot of the Fluid Concepts and Creative Analogies [amazon.com] work that Hofstadter led back in the day.

I don't see this directly working for translation into non-lexicographically swappable languages (e.g., English -> Japanese) very well, because even if you have the idea space mapped out, you'd still have to build up the proper grammar, and you'll need rules for that.

That being said.... Holy cow, you have the idea space mapped out! That's a big chunk of Natural Language Processing and an important step in AI development. ... Understanding a sentence emergently in terms of fuzzy concepts that are an internal and internally created symbol of what's "going on", not just using a dictionary and CYC [wikipedia.org] -like rules to figure it out, seems like a useful building block, but maybe I'm wrong.

Very cool stuff. Makes me want to go back and finish that CS degree after all.

Re:Hofstadter? Isn't this AI, not translation? (2)

mozumder (178398) | about a year ago | (#44981763)

I'm trying to figure out what the "space" is in the first place.

What do the axes of the graph represent?

It would be funny if "meaning" could be quantified into meaningless numbers. That would piss off anyone who believes there's a meaning to life. Haha.

Re:Hofstadter? Isn't this AI, not translation? (0)

Anonymous Coward | about a year ago | (#44981989)

It would be funny if "meaning" could be quantified into meaningless numbers. That would piss off anyone that believes there's a meaning to life. haha.

42 is NOT a meaningless number.

Re:Hofstadter? Isn't this AI, not translation? (4, Interesting)

phantomfive (622387) | about a year ago | (#44982089)

I don't see this directly working for translation into non-lexicographically swappable languages (e.g., English -> Japanese) very well, because even if you have the idea space mapped out, you'd still have to build up the proper grammar, and you'll need rules for that.

According to the paper, this translation technique is only for translating words and short phrases. But it seems to work well for languages as far apart as English and Vietnamese.

Re:Hofstadter? Isn't this AI, not translation? (1)

infinitelink (963279) | about a year ago | (#44983639)

[...] Holy cow, you have the idea space mapped out! That's a big chunk of Natural Language Processing and an important step in AI development. ... Understanding a sentence emergently in terms of fuzzy concepts that are an internal and internally created symbol of what's "going on", not just using a dictionary and CYC [wikipedia.org]-like rules to figure it out, seems [...]

Seems like not enough, given the symbol-grounding problem.

Re:Hofstadter? Isn't this AI, not translation? (1)

narcc (412956) | about a year ago | (#44984271)

Understanding a sentence emergently in terms of fuzzy concepts that are an internal and internally created symbol of what's "going on", not just using a dictionary and CYC [wikipedia.org]-like rules to figure it out, seems like a useful building block

Yeah, that's not what's happening at all.

Load of bollocks (1)

Skiron (735617) | about a year ago | (#44981793)

OK, I am just having a fag. I bet that will bugger it up.

Re:Load of bollocks (1)

denzacar (181829) | about a year ago | (#44981939)

Cats still dig fags?

Re:Load of bollocks (0)

Anonymous Coward | about a year ago | (#44982569)

Yep they're totally tubular.

Interesting approach (0)

Anonymous Coward | about a year ago | (#44981795)

Now assign each word/phrase a certainty/confidence value, and apply your new algorithm only to words/phrases for which a literal (i.e. dictionary) translation has a low degree of confidence.

Much appreciated,
Multilingual speakers
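The suggestion above is easy to picture in code. A hedged sketch, where the dictionary entries, confidence scores, and vector_translate() are all hypothetical placeholders:

# Hybrid lookup: trust the literal dictionary when it is confident, and
# fall back to the vector-space mapping only for low-confidence entries.

CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff

def hybrid_translate(word, dictionary, vector_translate):
    entry = dictionary.get(word)          # entry = (translation, confidence)
    if entry and entry[1] >= CONFIDENCE_THRESHOLD:
        return entry[0]                   # high-confidence literal translation
    return vector_translate(word)         # otherwise use the learned mapping

demo_dict = {"cat": ("chat", 0.99), "bank": ("banque", 0.4)}
print(hybrid_translate("cat", demo_dict, lambda w: f"<vector:{w}>"))   # chat
print(hybrid_translate("bank", demo_dict, lambda w: f"<vector:{w}>"))  # <vector:bank>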

Summary wrong (again) (1, Flamebait)

icebike (68054) | about a year ago | (#44981825)

Simply because you embed your dictionary in something you choose to call a vector doesn't make it any less of a dictionary.

It's still a dictionary, and also a thesaurus. Come to think of it, a thesaurus is simply a meaning-vectored dictionary.
What's old is new again.
Mathematicians, late to the party, still trying to drink all the punch.

Re:Summary wrong (again) (0)

Anonymous Coward | about a year ago | (#44981997)

Also, crediting them with moving us past human translation overlooks the fact that machine translation has been happening for decades. Yay to the authors for a step forward; boo to the submitter for repeating the every-cool-new-thing-changes-the-world fallacy.

Re:Summary wrong (again) (4, Insightful)

hey! (33014) | about a year ago | (#44982131)

Simply because you embed your dictionary in something you choose to call a vector doesn't make it any less of a dictionary.

True, but calling a dictionary a vector space doesn't make it so. For example, how "close" are the definitions of "happiness" and "joy"? In a dictionary, the only concept of "closeness" is the lexical ordering of the words themselves, and in that sense "happiness" and "joy" are quite far apart (as far apart as words beginning h-a are from words beginning j-o). But in some kind of adjacency matrix which shows how often these words appear in some relation to other words, they might be quite close in vector space; "guilt" and "shame" might likewise be closer to each other than either is to "happiness", and each of the four words ("happiness", "joy", "guilt", "shame") would be closer to any of the others than it would be to "crankshaft"; and probably closer to "crankshaft" (a noun) than to "chewy" (an adjective).

Anyhow, if you'd read the paper, at least as far as the abstract, you'd see that this is about *generating* likely dictionary entries for unknown words using analysis of some corpus of texts.
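That notion of vector-space closeness boils down to comparing angles between word vectors. A toy sketch with invented numbers (real embeddings come from corpus statistics, not hand-typed values):

import numpy as np

# In a dictionary, "distance" is alphabetical order; in an embedding space
# it is the angle between vectors, measured here by cosine similarity.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

vecs = {
    "happiness":  np.array([0.90, 0.80, 0.10]),
    "joy":        np.array([0.85, 0.75, 0.15]),
    "crankshaft": np.array([0.05, 0.10, 0.95]),
}

print(cosine(vecs["happiness"], vecs["joy"]))         # high: near-synonyms
print(cosine(vecs["happiness"], vecs["crankshaft"]))  # low: unrelated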

Re:Summary wrong (again) (1)

abies (607076) | about a year ago | (#44986015)

I think that joy is quite close to chewy (through bubblegum and caramel, for example). Of course, I believe some people may get more joy from playing with a well-oiled crankshaft, but that's a personal preference ;)

2003 called and wants its news back. (0)

Anonymous Coward | about a year ago | (#44981851)

Um... while it is awesome and works, the translator has been doing this for about a decade. Heck, the original Babel Fish (used by Yahoo, Overture, AltaVista) was the first with such concepts, pre-2000.

Cat (0)

Anonymous Coward | about a year ago | (#44981853)

Cat, associated with:

Big
Steel
Heavy
Wheels
Track
Blade

Re:Cat (3, Insightful)

blue trane (110704) | about a year ago | (#44982107)

jazz musician

Re:Cat (2)

dkleinsc (563838) | about a year ago | (#44982809)

Rimmer, Lister

Re:Cat (0)

Anonymous Coward | about a year ago | (#44982587)

bash
pipe
echo

Cat, associated with:

Big
Steel
Heavy
Wheels
Track
Blade

Dolphinese Will Now Be Understood (4, Funny)

MacroSlopp (1662147) | about a year ago | (#44981867)

With this technology we should be able to understand Dolphin-talk.
It should also allow us to detect future ape rebellions before they happen.

Re:Dolphinese Will Now Be Understood (1)

Anonymous Coward | about a year ago | (#44981933)

Thanks for all the fish!

Re:Dolphinese Will Now Be Understood (2)

Vanders (110092) | about a year ago | (#44982017)

This has to be done. [youtube.com]

Re:Dolphinese Will Now Be Understood (1)

SuricouRaven (1897204) | about a year ago | (#44982227)

It's already been partially decoded.

Most of the calls are individual identifiers unique to the individual. Makes a lot of sense. A dolphin pod is essentially free-floating a lot of the time in an ocean with no navigational markers and little indication of direction. They need some way to track each other to keep the group from getting split up.

Humangrunt Will Now Be Understood (0)

Anonymous Coward | about a year ago | (#44982619)

It's already been partially decoded.

Some of the calls are individually identifiable, unique to the individual. The rest doesn't make much sense. A human pod is essentially land bound a lot of the time but spread apart with no inherent electromagnetic navigation organs and little of import to say. Still, they babble on to track each other after splitting up or silently stalk each other online.

Re:Dolphinese Will Now Be Understood (1)

schlachter (862210) | about a year ago | (#44982759)

WRT your first sentence: I don't think this is funny at all. It's an amazing opportunity.

the spirit is willing but the flesh is weak (0)

Anonymous Coward | about a year ago | (#44981943)

Was the know-nothing reporter using/regurgitating something that was mistranslated?

You WILL have to convert the old word to one from the new language; THAT takes a dictionary operation.

The problem is that an old word may translate into many different possible new words or phrases, and the difficulty is WHICH new verbiage is correct.

What's being talked about IS combining the basic translation lookup (dictionary) with some extra association information (the context of adjacent words) to try to pick the RIGHT translation and resulting new words.

So it's pretty farging stupid declaring this is 'dictionaryless'.

The word-association link info can be a thousand-fold increase in the information such a translation database would need to maintain (and is largely what real translator efforts have been doing for the past 50 years).

Re:the spirit is willing but the flesh is weak (3, Interesting)

icebike (68054) | about a year ago | (#44982083)

Yes, the pretty vectors (nothing but lists of words) still have to be assembled by humans for the most part. Maybe not EVERY association, but enough of them that you can build relationships and associations indirectly, and achieve a roundabout translation, even if you end up having to go through 2 or 3 related languages to get there.

After a few words of context are translated you can perhaps deduce the rest. But the idea that you can do so without a dictionary is ridiculous. And putting your dictionary into digital form and calling it a vector doesn't change the fact that you still have a dictionary associating an English word with a French word and a Mandarin word.

Isn't that pretty much how Google Translate works? (1)

Anonymous Coward | about a year ago | (#44981959)

Tomas Mikolov and others at Google have developed a simple means of translating between languages using a large corpus of sample texts. Rather than being defined by humans, words are characterized based on their relation to other words.

Like how Google Translate has noticed that Danish domain names end in "dk" and therefore translates "dk" to "com", with "uk", "gb" and "en" as some of the other suggestions?

Sometimes "a simple means" can be too simple.

Old idea, new implementation? (5, Interesting)

Theovon (109752) | about a year ago | (#44982003)

When I was in grad school, studying linguistics, computational linguistics, and automatic speech recognition, I recall it mentioned more than once the idea of using latent semantic analysis and such to do this kind of translation. So am I correct in assuming that this hasn't been done well in the past, and Google finally made it work well because they have larger corpora of translated texts?
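For readers who haven't met latent semantic analysis: it factors a term-document count matrix with a truncated SVD so that words with similar usage patterns get nearby low-dimensional vectors. A minimal sketch with a made-up toy matrix:

import numpy as np

# Rows are words, columns are documents; entries are occurrence counts.
A = np.array([
    [2, 0, 1, 0],   # "cat"
    [1, 0, 2, 0],   # "pet"
    [0, 3, 0, 1],   # "engine"
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
word_vectors = U[:, :k] * s[:k]   # rank-k word representations

# "cat" and "pet" end up close together; "engine" lands elsewhere.
print(word_vectors)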

Re:Old idea, new implementation? (1)

schlachter (862210) | about a year ago | (#44982763)

Yeah, it's about all these different corpora coming online and being available to a single group, especially because in order to train, they need a one-to-one translation of a single doc. A government doc that's in both Spanish and English is great fodder for the algorithm.

Re:Old idea, new implementation? (0)

Anonymous Coward | about a year ago | (#44983251)

Funny, in philosophy we call it post-modernism. Heidegger was good at it.

Re:Old idea, new implementation? (0)

Anonymous Coward | about a year ago | (#44983581)

I was under the impression that LSA was patented by Bell Labs. Perhaps the patent has expired?

It is surprising that the (Google) authors don't reference LSA. LSA was the first thing I thought of when I read the Slashdot description. Or is this another case of computer scientists reinventing the wheel without being aware of work done in other disciplines, often many years earlier?

I also wonder how well this works with Hungarian, which has no verb "to have", and English, which does. Or any other pair of languages coming from different language families, e.g., Indo-European (English or Czech) and Finno-Ugric (Hungarian or Finnish). There seem to be some similarity assumptions about vector spaces that are not necessarily true across language families (among other things).

Re:Old idea, new implementation? (1)

k.a.f. (168896) | about a year ago | (#44984505)

When I was in grad school, studying linguistics, computational linguistics, and automatic speech recognition, I recall it mentioned more than once the idea of using latent semantic analysis and such to do this kind of translation. So am I correct in assuming that this hasn't been done well in the past, and Google finally made it work well because they have larger corpora of translated texts?

You are utterly correct. The idea of machine translation by looking up each word in a dictionary and shuffling the results around was big in the 1950s, but hasn't been since then. It became all too clear very early that this isn't the way to produce texts that a native speaker would ever say (or even comprehend). The barrier to doing this kind of context-dependent analysis was that the hardware wasn't there for a long time, and later the huge parallel corpora that are needed to make it work were missing. (Just think how many millions of words a child hears until it learns to speak fluently and effortlessly!) Now that both are there, of course Google is among the most successful implementors.

Old news (4, Informative)

richwiss (876232) | about a year ago | (#44982021)

This is old news, going back to 1975. Yawn. http://en.wikipedia.org/wiki/Vector_space_model [wikipedia.org]

Re:Old news (1)

smallfries (601545) | about a year ago | (#44984399)

That is something different: each document is a vector and each word is a dimension.

StarTrek Universal Translator (0)

Anonymous Coward | about a year ago | (#44982023)

Sounds very similar to Star Trek's universal translator. You only have to say about a dozen words to map the language, right?

Re:StarTrek Universal Translator (1)

SuricouRaven (1897204) | about a year ago | (#44982235)

Unless the plot calls for a breakdown of communication, in which case the language will be too 'complex' for the universal translator.

Re:StarTrek Universal Translator (1)

jedidiah (1196) | about a year ago | (#44982357)

Damn you Darmok!

Darmok and Jihad at Viagra (1)

tepples (727027) | about a year ago | (#44982505)

The allusion-heavy Tamarian language [tvtropes.org] has real-world analogs, such as Tropese [tvtropes.org] and the tendency for users of sites closely linked to 4chan to talk in memes.

Re:Darmok and Jihad at Viagra (0)

Anonymous Coward | about a year ago | (#44982953)

Za Warudo, when the Snacks is back.

When and where matters (1)

gmuslera (3436) | about a year ago | (#44982371)

The meanings of words, and their translations, vary with time and location. Inferring meanings from texts from 20 years ago, or from another country, state, or even region inside a state, even if the language is the "same", could be risky. There have been a lot of marketing problems thanks to this kind of bad translation [socialnomics.net].

Like so many of these algorithms (3, Interesting)

holophrastic (221104) | about a year ago | (#44982427)

They do a great job of improving the precision of what used to be mediocre. And then, as a direct result, they not only make the errors worse, they make the errors undetectable.

CAT: small, furry, pet.
BIG CAT: big, furry, pet.

Um. Both are orange. One's a tabby. One's a tiger.

It's not good enough that your translation system has 99% accuracy where the old one had 90%. What matters is that the old one's 10% error rate sounded like an error (e.g. tiger becomes monster), whereas your new one's 1% passes the Turing test and can't be discerned by an intelligent listener (e.g. tiger becomes tabby).

"My friend owns a monster." -- You friend owns what? I don't think you meant a monster. -- "eh, you know, a very big dangerous jungle cat" -- oh, like a lion -- "not a lion, it has stripes" -- oh, a tiger.

"My friend owns a tabby." -- Ok.

Re:Like so many of these algorithms (2)

flimflammer (956759) | about a year ago | (#44982717)

"My friend owns a monster." -- You friend owns what? I don't think you meant a monster. -- "eh, you know, a very big dangerous jungle cat" -- oh, like a lion -- "not a lion, it has stripes" -- oh, a tiger.

Do you frequently converse with machine translators that elaborate the meaning of their mistranslations? Would be interested in knowing which one is capable of that. See, when I use them, it's what-you-see-is-what-you-get, and I have to pick at the original source text with a dictionary to learn that monster actually means tiger. That they can nonchalantly narrow the meaning down for you in a Star Trek-esque computer conversation is leaps and bounds ahead of what I'm used to!

Sarcasm aside for a moment, you're actually complaining that machine translators may eventually get so convincing that you might not even notice the errors anymore? Really? Sign me up for that scenario. Nothing should replace native translators anyway for precision work.

Re:Like so many of these algorithms (1)

holophrastic (221104) | about a year ago | (#44982959)

That's almost my complaint. It's not that I won't notice the errors. It's that I won't notice the errors when they are spoken. I'll notice the errors when I get bitten by a tiger after reading a sign that says "beware of cat".

It's important for miscommunication to be identified during the communication protocol.

Re:Like so many of these algorithms (0)

Anonymous Coward | about a year ago | (#44983963)

Actually, I thought this was exactly how Google Translate taught itself languages. I remember testing it when it was new by having it translate some article on cnn.com into my native Swedish. Since the Swedish translation of "President Bush met with blah, blah, blah" was "King Bush met with...", it seemed clear to me that the algorithm had automagically formed some internal representation for "head of state" but had yet to form the "subclasses" King and President of that "superclass".

Re:Like so many of these algorithms (0)

Anonymous Coward | about a year ago | (#44984175)

This looks like a place where Dr. Mueller's PSIMETRICA would be extremely useful. I believe the original work was used in relation to short-term memory. Using the PSIMETRICA word dissimilarity calculation and applying it to a sentence, it could be used to predict which words are not applicable.

Re:Like so many of these algorithms (0)

Anonymous Coward | about a year ago | (#44985727)

I'm not too worried. Just translate back into the source language and see if it still means the same thing. The translator might translate "tiger" into "monster", but it's unlikely to then translate "monster" back into "tiger".

If you know 2 languages well, you can write in one language, translate to the language that you don't know and then translate that to the other language that you know. If the input and output mean the same thing, probably the translation in the middle was OK. If they don't mean the same thing, then rephrase the input until they do. If you can't get a good result like that, swap the two languages that you know and see if the translation software can do things better in that direction. You can still do the thing you'd have done if you only knew one language. Repeat for each sentence in the document that you are translating.

If you know 3 languages, this will all work even better. You'll get 3 chances at getting a good translation instead of 2, and you'll get 3 different verifications of the translation. This is all even better if one of the 3 languages is completely unlike the other 2.
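That round-trip check is straightforward to automate. A sketch, where translate() stands in for any MT system and the similarity test is a crude token overlap; both are illustrative assumptions, not a real API:

# Translate forward, then back, and flag sentences whose back-translation
# drifts too far from the original.

def token_overlap(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def round_trip_ok(sentence, translate, src, dst, threshold=0.6):
    forward = translate(sentence, src, dst)
    back = translate(forward, dst, src)
    return token_overlap(sentence, back) >= threshold

# Usage with a dummy identity "translator" in place of a real system:
print(round_trip_ok("beware of the tiger", lambda s, a, b: s, "en", "fr"))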

Still needs dictionaries (2)

raju1kabir (251972) | about a year ago | (#44982597)

Anyone who regularly uses Google Translate has seen the problems that come with this approach.

It "translates" analogous terms in ways that make no sense. Translate "Amsterdam" from Dutch to English and it often gives you "London". Same with kilometres / miles, and other things that significantly change the meaning of the text.

With some hand-crafted guidance, the outcome can be much less useful than the more rough-sounding word-by-word machine translations from days of yore.

Re:Still needs dictionaries (1)

sourcerror (1718066) | about a year ago | (#44984725)

On the other hand it's much better at translating idioms or expressions where the component words have a lot of different meanings.

Re:Still needs dictionaries (0)

Anonymous Coward | about a year ago | (#44984761)

Google Translate isn't using "this approach" at all.

Translate is built on parallel texts, hence the Amsterdam -> London thing. A typical source of parallel texts is corporate documents. The Dutch documents say mostly the same things as the English documents, but the Dutch headquarters is in Amsterdam, while the English one is in London, so the parallel text analysis concludes "London" is English for "Amsterdam".

This system isn't about parallel texts, it will introduce a whole new type of errors.
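The AC's point about parallel texts is easy to demonstrate. A crude editorial sketch (not Google Translate's actual pipeline): a frequency-based aligner picks up whatever systematically co-varies across sentence pairs, including city names swapped between localized documents.

from collections import Counter

# Toy word aligner over Dutch/English sentence pairs.
pairs = [
    ("hoofdkantoor in Amsterdam", "headquarters in London"),
    ("kantoor in Amsterdam", "office in London"),
]

counts = Counter()
for nl, en in pairs:
    for w_nl in nl.split():
        for w_en in en.split():
            counts[(w_nl, w_en)] += 1

# "Amsterdam" co-occurs with "London" in every pair, so a frequency-based
# lexicon would propose it as the "translation".
print(counts[("Amsterdam", "London")])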

Synonyms (1)

manu0601 (2221348) | about a year ago | (#44982651)

I wonder how they handle synonyms, which may be much more prevalent in one language than in another.

If the destination language is poorer in synonyms than the source language, this is straightforward, and the automatic translation will just miss subtle points that cannot be translated without a periphrasis. In the opposite case, moving from a synonym-poor language to a synonym-rich language, the computer needs to choose the right word, and doing so requires some understanding of the context.

And the problem exists beyond synonyms, with sentence structures. Let us take the English sentence "We will give territories". In French it could become "Nous cèderons des territoires" (We will give some territories) or "Nous cèderons les territoires" (We will give the territories). Which should be chosen? It depends on the context, something the computer may have a hard time grasping.

Re:Synonyms (2)

Panoptes (1041206) | about a year ago | (#44982819)

Synonyms are only the tip of the iceberg: there are so many other problem areas. Collocations (words that 'go together'): we can say a 'tall boy', but not a 'high boy'; 'a large beer', but not 'a big beer'. Connotations (attitudes, feelings and emotions that a word acquires): compare 'a slim girl' with 'a skinny girl'. Idioms: 'hot potato' and 'red herring' cannot be translated directly into any other language. Add irony and sarcasm to the mix, class and regional usage, dialects, diglossia (for example, demotic and classical Arabic), puns and plays on words - the list goes on. Machine translation is a chimera.

Re:Synonyms (2)

manu0601 (2221348) | about a year ago | (#44983353)

I understand that collocations are addressed by their model: they study texts to discover that 'boy' may be preceded by 'tall' but not by 'high', and that in French, 'garçon' may be preceded by 'grand' but not 'haut'. That enables them to translate without a hitch.

But even adjective handling may come with traps. Adjectives in French may appear before or after a noun. You may say 'un grand garçon' or 'un garçon grand', and the meaning is the same most of the time. But there are exceptions! 'un type pauvre' is a poor guy; 'un pauvre type' is a mediocre person. Even 'grand garçon' vs 'garçon grand' may carry a subtle difference, as a father will tell his son he is 'un grand garçon' now (which means he is not a child anymore), but he will probably not tell him he is now 'un garçon grand' (which just means he is tall). I guess this can be handled by their statistical model, but at some point they will need to add some logic to handle it. I guess it falls into the idiom category.

Puns and irony are probably the most difficult part of the game. Even human translators have a hard time with them.

Re:Synonyms (0)

Anonymous Coward | about a year ago | (#44984755)

But that seems only relevant for grammatical correctness and flow of language.
For actual understanding, 'high boy' works just as well as 'tall boy', unless you hang around drug addicts much. 'Big beer' and 'large beer' could be used interchangeably without causing confusion.
This means that if you have an unknown language where no dictionary is available, you can throw it at this translator and get a reasonably good translation. Apart from that, this method probably still needs a starting point, but perhaps it could be used to automatically expand a dictionary, given a very small dictionary and a lot of sample texts.

Would be interesting to throw some noise from SETI at it and see what happens.

20 questions? (0)

Anonymous Coward | about a year ago | (#44983005)

So is it a game of 20 questions, with each answer projecting out one or more dimensions?

What I want from a translator (1)

snadrus (930168) | about a year ago | (#44983289)

1. Rough word-by-word is the beginning
2. Sentence structure reorganization
3. Idiom recognition.
4. Connotation, Tone, Irony
5. Generation / Area / Nature: How a native listener can determine details about the speaker.

The result will always be annotated-looking with warnings for plays-on-words, and will always be longer with maximum detail extraction from the source language.
I'm sure there's more to do after these items are done.

nice idea, but (0)

Anonymous Coward | about a year ago | (#44983437)

babelfish will be "created" through crossbreeding and genetic experimentation long before the language barrier on this planet is gone...

Another way of understandling language translation (1)

beachdog (690633) | about a year ago | (#44983737)

"Vector spaces" is the heart of the Google proposal. Previous posters have disassembled the weaknesses pretty well.

The thing a "vector spaces" analysis needs is specific vector mapping based on the sounds of speech, the rythmns of a language, the breathing of the speaker and the physical proximity parts of the brain associated with hearing and parts of the brain associated with speech.

Multiple languages exist because the growing infant's brain organizes the sounds it hears by passing the neural sensations through many layers of pattern forming and recognition processes. Multiple languages and the ambiguities in languages means the language learning process within a developing child has some features that are quite consistent, like saying "ma ma". The rest of language aquisition spreads out in the physical vector space of the topology of the brain. Italian has been noted as wonderful for singing, Spanish as good at expressing emotion. Perhaps these languages follow slightly different paths in the brain.

An idea I picked up from digital ham radio tutorials is quadrature phase demodulation. It extracts data from a carrier signal, it looks simple, it looks like you could do it with nerve cells and it associates nicely with known large scale brain electrical activity.

I work with severely disabled kids. Language aquisition or finding work arounds for missing or weak parts of the language pathway is an interesting challenge. A fellow who stone facededly ignored my spoken words laughed at me and smiled when I began signing to him in pigin half made up American Sign Language.

Star Trek Universal Translator anyone? (1)

Anonymous Coward | about a year ago | (#44984027)

Looks like this could be the beginning of a Universal translation scheme. Next all we need is to add voice recognition to this and Star Trek tech comes alive once again!

Computational Linguistics (1)

Anonymous Coward | about a year ago | (#44984417)

In recent times there have regularly been articles in technology magazines about topics in computational linguistics (CL) that are blatantly ignorant of the current research. This is just another example.

The era of dictionaries in applied machine translation has been history for ten years already. Statistical machine translation (SMT) and the techniques described here were not developed by Google. In fact, the basic idea of SMT is over 25 years old and distributional semantics is over 50 years old. Phrase tables for SMT are nothing new and have always contained these properties of distributional semantics (DS), which are tightly connected to vector space models (VSM); merging VSM and SMT is just the next logical step.

If you find the topic interesting, search for "statistical machine translation", "phrase tables", and "distributional semantics". Have fun reading all the stuff for the next 2 years.

English(Chinese(X)) ==... (0)

Anonymous Coward | about a year ago | (#44984849)

English(Chinese(Input)) became: "I strained my friend's cat loves the taste of sausage meat." Have a guess as to the original sentence.
