
Unicode 7.0 Released, Supporting 23 New Scripts

Unknown Lamer posted about 5 months ago | from the live-long-and-pigeon-pepper dept.

Technology 108

An anonymous reader writes "The newest major version of the Unicode Standard was released today, adding 2,834 new characters, including two new currency symbols and 250 emoji. The inclusion of 23 new scripts is the largest addition of writing systems to Unicode since version 1.0 was published with Unicode's original 24 scripts. Among the new scripts are Linear A, Grantha, Siddham, Mende Kikakui, and the first shorthand encoded in Unicode, Duployan."


Seriously? (4, Funny)

newsman220 (1928648) | about 5 months ago | (#47250333)

Still no Klingon?

Klingon is more useful (1)

Anonymous Coward | about 5 months ago | (#47250385)

Seriously, there are Klingon speakers. I worked with three, one of whom didn't know the other two knew Klingon until he cursed in Klingon. It was surreal. Linear A is an absolutely fascinating script (with hundreds of symbols), but there just aren't enough extant samples to justify adding it to Unicode, and nobody can translate it.

(Yes, that was a weird job. I left as soon as I could, though not due to the Klingons, but management.)

Re:Klingon is more useful (2)

LordLucless (582312) | about 5 months ago | (#47250697)

but there just aren't enough extant samples to justify adding it to Unicode, and nobody can translate it.

Unicode is supposed to be universal, and it has more than enough codepoints to spare - why is there a problem adding it? I'm sure having it in a standard encoding would prove useful to anyone who is trying to translate Linear A, or to archeologists/historians looking to digitize fragments we do have, etc.

The larger, the less useful (1)

Anonymous Coward | about 5 months ago | (#47250765)

The larger Unicode becomes, the more fragmented the implementations will be. The more fragmented it is, the more errors and incompatibilities will compound. It will get less and less useful, and more and more bulky, and will eventually be as useful as Flash. (Well, it may not be that bad, but still: Flash was all things to all people, and almost universally installed, until it wasn't.)

less useful how? Re:The larger, the less useful (4, Interesting)

Fubari (196373) | about 5 months ago | (#47251149)

Fragmented? I haven't heard of any unicode forks. The people at the Unicode_Consortium [wikipedia.org] seem like they're doing ok. Unicode seems pretty backwards compatible; have any of the newer versions overwritten or changed the meaning of older versions (e.g. caused damage)? That isn't true for various ascii encodings, which are an i18n abomination on the hi-bit characters. Or with ebcdic, which isn't even self-compatible. One of the things I love about unicode is the characters (glyphs) stay where you put them, and don't transmute depending on what locale a program happens to run in.

The larger Unicode becomes, the more fragmented the implementations will be.

Maybe instead of fragmented, you mean there won't be font sets that can render all of unicode's characters?
*shrug* Even if that were a problem, the underlying data is intact and undamaged, and will be viewable once a suitable font library is obtained.

The more fragmented it is, the more errors and incompatibilities will compound. It will get less and less useful, and more and more bulky, and will eventually be as useful as Flash. (Well, it may not be that bad, but still: Flash was all things to all people, and almost universally installed, until it wasn't.)

Can you give me an example of an incompatibility? I'm not saying there are none, just that I don't know of anything and that, in general, I've been very pleased with Unicode's stability - compared to other encodings - for doing data exchange.

Re:less useful how? Re:The larger, the less useful (1)

Anonymous Coward | about 5 months ago | (#47252017)

BIDI is one of the weirder and more difficult parts of Unicode, and its semantics have not been 100% stable across versions.

http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=Unicode5QuoteMirroring

In fairness, they did attempt to limit the damage, and on the whole, having a well-thought-out standard for BIDI, even if occasionally buggy, is better than not having one.

Re:less useful how? Re:The larger, the less useful (2)

BetterThanCaesar (625636) | about 5 months ago | (#47252293)

Unicode seems pretty backwards compatible; have any of the newer versions overwritten or changed the meaning of older versions (e.g. caused damage)?

Yes. Version 2.0 completely changed the Hangul character set. Korean texts written with Unicode 1.1 were not readable in Unicode 2.0, and vice versa. This was 17 years ago, but note that it was after ISO had accepted version 1.1 as an ISO/IEC standard.

Re:less useful how? Re:The larger, the less useful (1)

Fubari (196373) | about 5 months ago | (#47256651)

Good to know; thanks.

Re:less useful how? Re:The larger, the less useful (4, Interesting)

AmiMoJo (196126) | about 5 months ago | (#47252449)

The main problem is the broken CJK (Chinese, Japanese, Korean) support, which has caused numerous ad hoc workarounds and hacks to be developed. In a nutshell, all three languages shared some common characters in the past, but over time they diverged. Unfortunately these characters share the same code points in Unicode, even though they are rendered differently depending on the language: a Japanese and a Chinese font will contain different glyphs for the same character.

It is therefore impossible to mix Chinese and Japanese in the same plain text document. You need extra metadata to tell the editor which parts need Chinese characters and which need Japanese. There are Japanese bands that release songs with Chinese lyrics and vice versa, and books that contain both (e.g. textbooks, dictionaries). Unicode is unable to encode this data adequately.

Even the web is somewhat broken because of this. If a random web page says it is encoded with Unicode there is no simple way for the browser to choose a Japanese, Korean or Chinese font, and all the major ones just use whatever the user's default is.

It really isn't clear how this can be fixed now. Unicode could split the code pages but a lot of existing software will carry on using the old ones. It's a bit of a disaster, but most westerners don't seem to be aware of it.
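The unification being described is easy to demonstrate; here's a minimal Python sketch (直 is just one commonly cited unified ideograph whose preferred glyph differs between Japanese and Chinese typography):

```python
# U+76F4 (直) appears in both Japanese (e.g. 正直) and Chinese (e.g. 一直),
# and typographers draw it differently in each language, but Unicode
# assigns it a single unified code point.
ja = "\u76f4"  # the character as a Japanese author would type it
zh = "\u76f4"  # the character as a Chinese author would type it

# The two are byte-identical: plain text preserves no record of
# which regional glyph form was intended.
print(ja == zh)                                  # True
print(hex(ord(ja)))                              # 0x76f4
print(ja.encode("utf-8") == zh.encode("utf-8"))  # True
```

Any language-dependent rendering has to come from metadata outside the text itself, which is exactly the gap being complained about.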

Re:less useful how? Re:The larger, the less useful (1)

Goaway (82658) | about 5 months ago | (#47252835)

That sucks, but it does not seem to be an example of what was asked for.

Re:less useful how? Re:The larger, the less useful (1)

ais523 (1172701) | about 5 months ago | (#47254963)

One situation I was wondering about for that problem was the use of Japanese/Chinese/Korean marks/overrides, the same way that there are LTR and RTL overrides. Choice of language for a particular ideograph seems to be much the same as choice of direction for an inherently undirectional character (you're interpreting the character differently depending on context). This also has the advantage of being pretty much backwards compatible.

Re:less useful how? Re:The larger, the less useful (1)

dillee1 (741792) | about 5 months ago | (#47254967)

That "divergence over time" actually occurred not that long ago. Right before WW2, everyone on the planet who used Chinese characters used one and only one set of glyphs: traditional Chinese. That includes China, Japan, Korea, Vietnam, Hong Kong, Macau, and Taiwan.

After WW2, China and Japan tried to simplify the Chinese characters in separate efforts, resulting in completely different glyphs and the shitty state of CJK coding we see now.

Korea and Vietnam largely abandoned Chinese characters, except maybe for person and place names, for clarification reasons.

Hong Kong, Macau, and Taiwan all use the same pre-WW2 traditional Chinese glyphs, so they have no ambiguity or trouble exchanging text at all.

FFS, just use traditional Chinese glyphs if you want to exchange text with other kanji users. It is the "true" Chinese that everyone in the Sinosphere has understood for the last 3,000 years.

Re:less useful how? Re:The larger, the less useful (1)

Fubari (196373) | about 5 months ago | (#47256621)

Re: CJK - that is interesting, and something I haven't interacted with directly. The collisions in mapping to unicode sound like a *significant* headache. Thanks for the heads up (now I'm at least aware that I'm ignorant of this; a small step forward).

Re:less useful how? Re:The larger, the less useful (0)

Anonymous Coward | about 5 months ago | (#47252585)

Fragmented? I haven't heard of any unicode forks.

There's more to "fragmentation" than "code forks". Start by noticing that over half of libc is devoted to character handling, largely thanks to the complexity unicode brings. This itself carries considerable costs. The more code there is, the more likely implementations elsewhere won't work quite like this one, and so you need even more code to make your program work the same everywhere by working around bugs. Code breeds code. While this may seem unlikely on the surface, it gets much worse, very quickly. See the following:

Can you give me an example of an incompatibility? I'm not saying there are none, just that I don't know of anything and that, in general, I've been very pleased with Unicode's stability - compared to other encodings - for doing data exchange.

As noted elsewhere, it isn't stable. Latin-1 is latin-1. Unicode... is not so much unicode.

In fact, there isn't even a single canonical representation. Many characters have multiple representations, and then there are accents. There are at least two ways to create any accented character: (CHARACTER WITH ACCENT), or (CHARACTER) followed by (ACCENT SIGN) - that's two code points. Then there's the visually-similar-but-wildly-different-codepoint problem. And I haven't even started on things like sneakily inserted ZERO WIDTH SPACE characters and their variants. That's multiple problems that can bite you in multiple ways.
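The two representations of an accented character can be sketched with Python's unicodedata module:

```python
import unicodedata

# One precomposed code point vs. base letter plus combining mark.
precomposed = "\u00e9"  # U+00E9 LATIN SMALL LETTER E WITH ACUTE
decomposed = "e\u0301"  # "e" + U+0301 COMBINING ACUTE ACCENT

print(precomposed == decomposed)          # False: different code point sequences
print(len(precomposed), len(decomposed))  # 1 2

# Normalisation maps both to a canonical form before comparing.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
```

Any code that compares strings without normalising first (or that normalises with a stale table) hits exactly the class of bug described here.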

Such as how spotify accounts (which allow unicode usernames) could be hijacked. They used a "canonicalise" function that... uh... they just pulled from somewhere, and it wasn't up to snuff for the unicode version they were actually using. "Big deal", you say, but yes, it is, in multiple ways. For one, different code bases will have different implementations, giving rise to obscure bugs and perhaps security breaches.

Before that there's the encoding. Both utf-8 and utf-16 have non-trivial invalid encodings that can be... replaced by "invalid code point" markers (there are at least two official ones and many unofficial options), or ignored, or cause errors to be thrown up. Different input handlers will handle this differently. Yay, more differences in implementation.
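Those divergent choices are all visible in one place in Python's decode error handlers (a minimal sketch; the byte string is just an arbitrary invalid sequence):

```python
# 0x80 is a lone continuation byte: not valid UTF-8 on its own.
bad = b"abc\x80def"

# Substitute U+FFFD REPLACEMENT CHARACTER...
print(bad.decode("utf-8", errors="replace"))  # abc\ufffddef
# ...silently drop the bad byte...
print(bad.decode("utf-8", errors="ignore"))   # abcdef
# ...or raise an error, which is the strict default.
try:
    bad.decode("utf-8")
except UnicodeDecodeError as exc:
    print("strict decoding failed:", exc.reason)
```

Three handlers, three different outputs from the same input bytes: the "more differences in implementation" the parent is talking about.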

But it gets bigger: the whole concept of a single representation is foreign to unicode. The aim is to capture all possible language, or thereabouts, but now you run into the same thing that caused people to start standardising language in the first place. In that sense, unicode is a de-standardisation effort - through standardisation, sure - but extracting well-defined meaning out of possibly not-so-well-defined input gets that much harder, often in sneaky and counter-intuitive ways. Finding the problems might well depend on spotting minute differences between regional variant code points that may or may not show up differently in the font you're currently using.

So there may be one unicode, and it may be used to encode a wide variety of meaning, but reliably getting the meaning back is actually getting harder for computer code, giving rise to yet more code, and added complexity, in multiple implementations, and all the fun and joy that brings.

Among other things it means that it's a poor choice to encode security-sensitive things like usernames and passwords (also assuming the input method required for all character sets used in your username and password is available everywhere you might ever want to log in), or URLs (even with the IDN subset), or security policies, or I don't know what else. Is this enough pointers for you?

Fragmentation - Ghost of Steve Jobs, is that you? (0)

Anonymous Coward | about 5 months ago | (#47251257)

It's a set of numbers from zero to 2^32 - 1 that map to symbols, with a well-defined way of displaying unrepresentable characters. How much more incompatibility or "fragmentation" can there be?

Re:Fragmentation - Ghost of Steve Jobs, is that yo (1)

kasperd (592156) | about 5 months ago | (#47252719)

It's a set of numbers from zero to 2^32 - 1 that map to symbols

Actually it only goes from 0 to 1114111, mainly because that's the range you can achieve with UTF-16.
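The arithmetic behind that ceiling, sketched in Python: UTF-16 surrogate pairs add 2^20 supplementary code points on top of the 2^16 Basic Multilingual Plane.

```python
# A surrogate pair carries 10 bits in the high surrogate and 10 in the
# low, addressing 2**20 code points beyond the BMP's 0x0000-0xFFFF.
bmp_top = 0xFFFF
supplementary = 2**10 * 2**10  # 1,048,576

print(bmp_top + supplementary)              # 1114111
print(bmp_top + supplementary == 0x10FFFF)  # True
print(chr(0x10FFFF))  # the highest code point Python will construct
```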

Re:The larger, the less useful (0)

Anonymous Coward | about 5 months ago | (#47251261)

The more fragmented it is, the more errors and incompatibilities will compound. It will get less and less useful, and more and more bulky, and will eventually be as useful as Flash.

As evidenced by the announcement blog entry itself, http://unicode-inc.blogspot.com.au/2014/06/announcing-unicode-standard-version-70.html, where they've mistakenly interchanged the glyphs for U+1F596 (raised hand with part between middle and ring fingers) and U+1F6E0 (hammer and wrench). If the Unicode Consortium itself can't even get it right then what hope for the rest of humanity?

Re:The larger, the less useful (0)

Anonymous Coward | about 5 months ago | (#47251463)

As evidenced by [incorrectly two placed graphic files entitled emoji-8.png and emoji-9.png] ... If the Unicode Consortium itself can't even get it right then what hope for the rest of humanity?

Nicely spotted. But evidence of fragmentation and incompatibility across the unicode space? Not so much.

Re:The larger, the less useful (0)

Anonymous Coward | about 5 months ago | (#47251513)

Err ... 'two' was incorrectly placed %). That should read "two incorrectly placed ..." (I hope the English language will survive that mistake.)

Re:The larger, the less useful (1)

gwgwgw (415150) | about 5 months ago | (#47251827)

That *is* pretty funny. That and it hasn't yet been corrected.

Re: The larger, the less useful (0)

Anonymous Coward | about 5 months ago | (#47251303)

I've heard breathing can eventually cause death, but I guess I'll throw your logic to the wind and do it anyway...

Re:The larger, the less useful (1)

cheater512 (783349) | about 5 months ago | (#47251661)

There is no such thing as fragmentation with Unicode. Most fonts only implement a small portion of it however.

If Microsoft and Apple both decide to implement 'Linear A' for example, they will do it with different fonts but using the same codepoints.

Re:Klingon is more useful (1)

rubycodez (864176) | about 5 months ago | (#47255165)

the problem is there is no klingon alphabet to add, just several fan-made lists claiming to be that.

so you're advocating an act of fanservice to a fictional language, adding something to unicode that the authors of the fiction themselves haven't even been arsed to make. that's beyond silly, that's like saying the next space shuttle should be shaped like the starship enterprise.

Re:Klingon is more useful (1)

frisket (149522) | about 5 months ago | (#47250809)

The lack (or not) of speakers isn't the reason. According to one of my moles, the official dead-pan response to the question why Klingon and Elvish aren't in Unicode is that they are not human languages :-)

Re:Klingon is more useful (0)

Anonymous Coward | about 5 months ago | (#47250907)

Thus we need to develop Multicode.

Re:Klingon is more useful (1)

Electricity Likes Me (1098643) | about 5 months ago | (#47250983)

Isn't unicode already variable-length integer-ish via the UTF-8 standard?

Surely we could implement a version which accommodates an effectively infinite number of character sets.

Re:Klingon is more useful (1)

Kjella (173770) | about 5 months ago | (#47251211)

Isn't unicode already variable-length integer-ish via the UTF-8 standard? Surely we could implement a version which accommodate an effectively infinite number of character sets.

Before they gimped it to match UTF-16 it had ~2^31 combinations, now it has ~2^16. And you could have extended UTF-8 to a full ~2^42 by just continuing the scheme to fill the entire first byte, so space is really of little concern. They probably just don't want to coordinate a million different people who want to add a smiley or their imaginary fantasy language to the standard.
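The continuation-byte scheme under discussion can be seen by encoding one code point from each length class (a minimal Python sketch; the sample characters are arbitrary):

```python
# UTF-8 spends 1 byte up to U+007F, 2 up to U+07FF, 3 up to U+FFFF,
# and 4 up to the current ceiling U+10FFFF; the leading byte's high
# bits say how many continuation bytes follow.
samples = [0x41, 0x3B1, 0x4E9C, 0x1F4A9]  # A, Greek alpha, a CJK ideograph, pile of poo

for cp in samples:
    encoded = chr(cp).encode("utf-8")
    print(f"U+{cp:04X} -> {len(encoded)} byte(s): {encoded.hex()}")
```

Extending the leading-byte pattern further, as the parent suggests, would buy more range; current UTF-8 simply stops at four bytes to match UTF-16's reach.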

Re:Klingon is more useful (1)

marcansoft (727665) | about 5 months ago | (#47251443)

Not 2^16 (Unicode already has way over 2^16 codepoints assigned). The maximum Unicode codepoint value is 1114111, which is somewhat over 2^20 (and happens to be the highest codepoint encodable in UTF-16).

Re:Klingon is more useful (2)

lithis (5679) | about 5 months ago | (#47251331)

There is already at least one effort to extend Unicode beyond the current maximum of 1.1 million characters: The UCS-X Family of UCS Extensions [ucsx.org]. It defines UCS-G, which supports over two billion characters, UCS-E with over nine quintillion, and UCS-Infinity with no upper bound. They each support 8-, 16-, and 32-bit variable-byte encodings (e.g. UTF-E-32, UTF-Infinity-8). It's been a while since I read about them, but I believe they are all compatible with UTF-8, -16, and -32.

Re:Klingon is more useful (1)

Anonymous Coward | about 5 months ago | (#47252323)

It has nothing to do with it being a human language or not. The reason why Klingon pIqaD failed was because nobody in the Klingon community actually uses it for writing texts to each other. A Private Use agreement that is more widely supported than almost any SMP script exists for Klingon pIqaD, but tlhIngan Hol speakers just don't use it. Tengwar and Cirth are still immature proposals, and it is more a lack of initiative within the Tolkienist community that has had these stalled before being formally developed for encoding.

Re:Klingon is more useful (1)

craigminah (1885846) | about 5 months ago | (#47251493)

Those people who spoke Klingon weren't Klingons...I think the correct term is "nerd" (as opposed to "geek").

Re:Klingon is more useful (1)

narcc (412956) | about 5 months ago | (#47251711)

Wait, what? I was unaware there was a distinction between "nerd" and "geek". Can I get a few nerds to geek out here and argue over their definitions?

Re:Klingon is more useful (1)

relyimah (938927) | about 5 months ago | (#47251779)

This link should help... www.youtube.com/watch?v=2Tvy_Pbe5NA

Re:Klingon is more useful (0)

Anonymous Coward | about 5 months ago | (#47252605)

How do you say "you are a very sad person" in Klingon?

Re:Klingon is more useful (0)

Anonymous Coward | about 5 months ago | (#47253229)

Seriously, there are Klingon speakers. Yeah, they're called "dorks."

Re:Klingon is more useful (1)

grouchomarxist (127479) | about 5 months ago | (#47254501)

Given that Linear A hasn't been deciphered yet, I wonder how they justify putting it in unicode. They don't know for certain which glyphs are distinct characters yet.

Re:Seriously? (1)

rubycodez (864176) | about 5 months ago | (#47250523)

there is no standard from which to make a Unicode encoding; fans have made the most popular versions of the various klingon alphabets

Re:Seriously? (0)

Anonymous Coward | about 5 months ago | (#47251547)

No standard? It's a robust computer industry standard, with strictly defined character maps. Sure, make forks if you want, but don't call them "Unicode".

Re:Seriously? (0)

Anonymous Coward | about 5 months ago | (#47251759)

No standard? It's a robust computer industry standard, with strictly defined character maps.

Klingon? Is the robust standard defined by ISO, ANSI or an RFC?

Re:Seriously? (0)

Anonymous Coward | about 5 months ago | (#47252377)

No. It is defined by the Unicode Consortium, which cooperates with ISO/IEC JTC1.

On Earth, Klingon is written in Latin (2)

tepples (727027) | about 5 months ago | (#47251219)

First I'll assume that you're talking about the KLI pIqaD for tlhIngan Hol, and not the Skybox pIqaD or the Mandel script. The Unicode team looked at encoding KLI pIqaD but decided against it because the Klingon-speaking community on Earth had already adopted a Latin-based script. (Reference: Klingon alphabets on Wikipedia [wikipedia.org] ) But it could use a slight spelling reform to make it case-insensitive.

Re:Seriously? (1)

NotInHere (3654617) | about 5 months ago | (#47255389)

Still no Klingon?

At least the Vulcan salute [blogspot.de] .

Linear A? (1)

J053 (673094) | about 5 months ago | (#47250379)

I'm sure there are lots of docs in that....

Re:Linear A? (4, Insightful)

Livius (318358) | about 5 months ago | (#47250521)

There are a few, and researchers and historians would like to have them on computer.

Re:Linear A? (1)

K. S. Kyosuke (729550) | about 5 months ago | (#47251713)

But why? We couldn't understand Linear B, and even Michael Ventris found it was all Greek to him. And Linear A seems even more incomprehensible.

Re:Linear A? (1)

kasperd (592156) | about 5 months ago | (#47252731)

But why? We couldn't understand Linear B

That shouldn't be a prerequisite for including it. After all, having the text represented on a computer would be a useful tool in getting to understand it.

Re:Linear A? (1)

K. S. Kyosuke (729550) | about 5 months ago | (#47253231)

Neither should a sense of humor. :-)

Pictographic symbols (2)

toejam13 (958243) | about 5 months ago | (#47250393)

Good. If you search for Wingdings on Google, many of the top results are questions about how to use the font with browsers other than IE. Since it isn't a Unicode-compliant font, you can't. This update helps correct that problem.

Re:Pictographic symbols (1)

Kjella (173770) | about 5 months ago | (#47251485)

Wingdings is to fonts what VBA/Access is to application development, so I can't say I feel terribly sad about that.

Re:Pictographic symbols (2)

narcc (412956) | about 5 months ago | (#47251781)

Used all-over?

Re:Pictographic symbols (0)

Anonymous Coward | about 5 months ago | (#47252595)

By idiots, yes.

Re:Pictographic symbols (0)

Anonymous Coward | about 5 months ago | (#47252921)

Believe it or not, MSAccess does make a good frontend, although I certainly do prefer open source solutions where possible. You are picturing MSAccess as a backend, which of course is a suicide mission.

Strike out (0)

Anonymous Coward | about 5 months ago | (#47252911)

Wingdings is to fonts what VBA/Access is to application development

You would have been better off with a car analogy. How about this: "Wingdings is to fonts what bananas are to cars."

Re:Strike out (1)

rubycodez (864176) | about 5 months ago | (#47255193)

you've obviously never typed in "bananamobile" in google image search

Why emoji? (2, Insightful)

Anonymous Coward | about 5 months ago | (#47250405)

What's the point of adding pictographic symbols to Unicode? Is this really something we want frozen in time for eternity? What's the benefit of standardizing them anyway?

Wouldn't we be better off standardizing all characters used in written language and be done with it?

Re:Why emoji? (1)

RyuuzakiTetsuya (195424) | about 5 months ago | (#47250495)

💅💩

Re:Why emoji? (4, Insightful)

RyuuzakiTetsuya (195424) | about 5 months ago | (#47250635)

Not everyone speaks English or Chinese or Spanish.

Everyone recognizes a stop sign, an airport, a pile of poop, and other symbols. So communicating via pictographs is actually good, even if it was incidental.

Re:Why emoji? (3, Informative)

Guy Harris (3803) | about 5 months ago | (#47250691)

Not everyone speaks English or Chinese or Spanish.

Everyone recognizes a stop sign, an airport, a pile of poop, and other symbols. So communicating via pictographs is actually good, even if it was incidental.

And many of them recognize this [emojipedia.org] as well.

Re:Why emoji? (1)

Darinbob (1142669) | about 5 months ago | (#47251071)

But they're not "standard" even if Unicode claims they are. I only heard of emoji within the last year, and there is no central body that dictates exactly what they look like, so that pile of poop symbol will vary depending upon which texting app you use it with. The apps that use emoji are not coordinating with any standards body or ensuring that the intended meaning is preserved.

Today emoji are purely a fad. We'd think it ridiculous if unicode standardized some of the '80s-era desktop icons (so that future generations know what the floppy disk symbol means). Meanwhile, there are existing characters that have survived a long test of time which are not yet in unicode.

Shit in one font vs. shit in another font (2)

tepples (727027) | about 5 months ago | (#47251483)

that pile of poop symbol will vary depending upon which texting app you use it with

So will any symbol. Though an A rendered in three different fonts produces three distinct glyphs on your machine, you can recognize each of them as U+0041 LATIN CAPITAL LETTER A. Likewise, though U+1F4A9 appears different in different fonts, it'll look like shit in all of them.
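The split between a code point's identity and its glyph can be checked with Python's unicodedata (a minimal sketch):

```python
import unicodedata

# Fonts choose the glyph; the code point's identity never changes.
print(unicodedata.name("\u0041"))      # LATIN CAPITAL LETTER A
print(unicodedata.name("\U0001F4A9"))  # PILE OF POO
```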

Re:Why emoji? (5, Interesting)

BitZtream (692029) | about 5 months ago | (#47251693)

But they're not "standard" even if Unicode claims they are.

They are standard in reference to Unicode because the Unicode Consortium defines the Unicode standard. Someone has to be the first to define the standard.

but there is no central body that dictates exactly what they look like, so that pile of poop symbol will vary depending upon which texting app you use it with

Yes, those are called fonts, and in case you haven't noticed, that was true before digital computers with silicon microprocessors even existed and has been true for thousands of years.

The apps that use emoji are not coordinating with any standards body or ensuring that the intended meaning is preserved.

Apple does, which is why the Messages app already matches the new code points. Google Hangouts seems to work fine as well. Both Messages and Hangouts convert even things like :) into the proper unicode code point and use standard fonts for display. Sure, some half-assed apps may not work correctly, but anyone that supports unicode and has fonts will receive them properly already.

Emoji is somewhat silly, but it's hardly new; just go ask Japan. Just because you're new to the ballgame doesn't mean it's a new ballgame.

Re:Why emoji? (1)

Ark42 (522144) | about 5 months ago | (#47254255)

I think the problem most people think Apple/Emoji has with compatibility is that old versions of Apple stuff used the private-use codepoint areas for emoji, instead of the Unicode standard code points. This has since been fixed, as far as I know, but there are a TON of free Android keyboards that are supposed to type emoji, but only use the old private-use codepoints, and thus don't display anything but a blank space or a square box on Android without some special app to translate and display them.

If you look harder, though, you CAN find Android keyboards that have emoji buttons that produce the proper Unicode standard codepoints. The button on the keyboard may be in full color, but the glyph produced will be monochrome. Basically a limit of the direct font rendering, but it will work in every app without any issue, and Apple people can still see the glyphs you send them via text just fine, etc.
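A minimal Python sketch of the distinction (the specific PUA code point here is only an illustrative legacy mapping, not a claim about any particular vendor's table):

```python
def in_private_use_area(ch: str) -> bool:
    """True if ch lies in the BMP Private Use Area, U+E000-U+F8FF."""
    return 0xE000 <= ord(ch) <= 0xF8FF

# A legacy vendor-mapped point renders as a blank box elsewhere...
print(in_private_use_area("\ue415"))      # True
# ...while a standardised emoji code point is universally addressable.
print(in_private_use_area("\U0001F600"))  # False: U+1F600 GRINNING FACE
```

A keyboard emitting PUA code points only works on devices that share the same private agreement; the standard code points work anywhere with a suitable font.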

Grammar of pictographs (0)

tepples (727027) | about 5 months ago | (#47251529)

Say you're communicating with pictographs, and you have an action involving two things. Do you put the pictograph for the action before, between, or after the pictographs for the things?

(Spoiler: Speakers of Welsh or Arabic will want to put the action first, while speakers of Japanese or Finnish will want to put it last.)

Re:Grammar of pictographs (0)

BitZtream (692029) | about 5 months ago | (#47251699)

How is that relevant to the discussion of unicode code points? Unicode doesn't define how you conjugate the verb either.

Re:Grammar of pictographs (1)

tepples (727027) | about 5 months ago | (#47253665)

I intended to ask to what extent RyuuzakiTetsuya's concept of "communicating via pictographs" (plural) is practical.

Re:Grammar of pictographs (1)

RyuuzakiTetsuya (195424) | about 5 months ago | (#47261285)

Emoji was an accidental feature on NTT DoCoMo phones.

That being said, if I don't understand Portuguese and you don't understand Korean, and I message you a stop sign, that's straightforward to understand.

Re:Grammar of pictographs (0)

Anonymous Coward | about 5 months ago | (#47270055)

I don't think it would be that big of an issue. Nobody who saw Star Wars had any problem grokking what Yoda meant. And, as a speaker (second language) of Japanese, it is interesting to note that, while Japanese grammar itself uses a subject-object-verb word order ("The boy/the ball/kicks"), the language also has a lot of vocabulary that derives from Chinese, which uses more or less the same word order as English. For example, the Japanese word for "sterilize" (as in an autoclave) is "sakkin," which is literally "kill-germ," in that order, whereas an actual Japanese sentence would have the order reversed: "Kin o korosu" (the "o" is a grammatical particle, and "korosu" is the Japanese word that corresponds to the Chinese-derived morpheme "sa-" for "kill").

As long as you're conveying simple ideas, people seem pretty good at guessing the intended meaning even if you muck around with the word order. Sure, ambiguities aren't impossible (you could imagine having a word meaning "contains germs that kill [you]"), but context generally clarifies.

Re:Grammar of pictographs (1)

rossdee (243626) | about 5 months ago | (#47251869)

"Speakers of Welsh or Arabic will want to put the action first, while speakers of Japanese or Finnish will want to put it last"

Along with speakers of hsiloP

Re:Why emoji? (0)

Anonymous Coward | about 5 months ago | (#47251953)

What's the point of adding pictographic symbols to Unicode?

Hear, hear.

When I'm sorting text, it's important to know how individual symbols relate -- is A before or after $? -- but I don't want to need to give a flying fart whether A comes before or after winky-smile and whether that comes before or after steaming turd. (No, really, there is a steaming turd character [fileformat.info] .)

Re:Why emoji? (0)

Anonymous Coward | about 5 months ago | (#47252739)

Steaming turd makes sense, but why is there a Moon viewing ceremony [fileformat.info] character? I can't think of any good reason for that.

Re:Why emoji? (1)

Goaway (82658) | about 5 months ago | (#47254129)

Round-trip compatibility with other encodings that already have them.

Re:Why emoji? (1)

idji (984038) | about 5 months ago | (#47261019)

If the emoji are standardized in Unicode, then it will be easier for any kind of software to support them.

23 new scripts, 2834 new characters... (0)

Anonymous Coward | about 5 months ago | (#47250437)

And it's still a big 💩

Re:23 new scripts, 2834 new characters... (0)

Anonymous Coward | about 5 months ago | (#47252571)

U+1F595 you.

Why no Runes? (0)

Anonymous Coward | about 5 months ago | (#47250535)

It's decipherable. :)

Why no Runes? (0)

Anonymous Coward | about 5 months ago | (#47250799)

http://en.wikipedia.org/wiki/Runes#Unicode

Han unification (0)

Anonymous Coward | about 5 months ago | (#47251029)

And yet they still refuse to recognize that Chinese and Japanese are different languages.

Latin unification too (2)

tepples (727027) | about 5 months ago | (#47251263)

True, some characters have forms that differ between traditional Chinese and Japanese. But that's not limited to Chinese and Japanese, as Unicode also has Latin unification. For example, the letter "i" is the same whether in English or Turkish, but its capital form differs between the two languages. And in Dutch, the letter 'y' with umlaut/diaeresis is supposed to be written using the rounded form, as it's considered a ligature of "ij". Implementations are supposed to define out-of-band language markers, such as HTML's lang= attribute, to handle this.
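The Turkish case is easy to demonstrate in any language whose case mappings follow Unicode; this Python sketch shows why a unified "i" needs out-of-band language information:

```python
# Unicode's Latin unification in action: lowercase "i" is one code point
# shared by English and Turkish, so a locale-independent case mapping
# cannot know that Turkish uppercases it to dotted İ (U+0130).

# Turkish dotless ı (U+0131) uppercases to the same plain "I" as English "i":
assert "ı".upper() == "I"
assert "i".upper() == "I"  # both collapse to U+0049

# Going the other way, dotted İ (U+0130) has no single-code-point lowercase;
# Python applies Unicode's full case mapping and emits "i" + combining dot:
assert "İ".lower() == "i\u0307"
print("case mappings behave as documented")
```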

Re:Latin unification too (2)

AmiMoJo (196126) | about 5 months ago | (#47252637)

The problem with unification is that metadata is often either unavailable or inadequate. The goal should be to represent all characters in plain text, not rely on specific document formats to provide context.

How would a music player app handle a file tagged with a unified character? How would a file manager handle it? There is no context, no metadata to tell it what language is in use and what font to select. Anyone who uses both Japanese and Chinese can tell you this is a common problem, and I imagine Dutch people get it too.

Even in HTML you only get to set one language for the entire document. Good luck writing a page in Chinese about learning Japanese. The ones I have seen tend to use GIFs to represent the characters that Unicode can't differentiate, but that means you can't copy/paste them and the fonts don't match.

Re:Latin unification too (1)

hackertourist (2202674) | about 5 months ago | (#47252773)

I imagine Dutch people get it too.

I'm Dutch. I've seen the y-dieresis just about 0 times. The ij ligature is very rare as well. Everyone just uses the non-ligatured ij (i.e. two characters).

Re:Latin unification too (1)

draconx (1643235) | about 5 months ago | (#47253959)

The problem with unification is that metadata is often either unavailable or inadequate. The goal should be to represent all characters in plain text, not rely on specific document formats to provide context.

How would a music player app handle a file tagged with a unified character? How would a file manager handle it? There is no context, no metadata to tell it what language is in use and what font to select.

Older Unicode standards included control sequences which could be inserted in plain text to indicate language. This is still supported in some applications to influence font choice. However, the feature was deprecated in favour of external markup, probably because it was really hard to edit (most text editors don't handle non-printing characters very well).
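For reference, those in-band language tags still exist as (deprecated) code points in Plane 14; a quick Python probe, purely illustrative:

```python
import unicodedata

# U+E0001 LANGUAGE TAG introduced a language tag in plain text, followed by
# "tag" copies of the ASCII letters at U+E0000 + the ASCII code point.
tag = "\U000E0001" + "".join(chr(0xE0000 + ord(c)) for c in "ja")

assert unicodedata.name("\U000E0001") == "LANGUAGE TAG"
assert unicodedata.name(tag[1]) == "TAG LATIN SMALL LETTER J"
assert len(tag) == 3  # invisible in rendering, but very much present in the text
```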

Even in HTML you only get to set one language for the entire document.

This is simply incorrect. Language in HTML can be set on any element.

Re:Latin unification too (1)

laie_techie (883464) | about 5 months ago | (#47254957)


Even in HTML you only get to set one language for the entire document. Good luck writing a page in Chinese about learning Japanese. The ones I have seen tend to use GIFs to represent the characters that Unicode can't differentiate, but that means you can't copy/paste them and the fonts don't match.

Most elements in HTML accept the lang attribute. Please refer to the W3C [w3.org]

Re:Han unification (1)

Ark42 (522144) | about 5 months ago | (#47254339)

I ran into this problem recently. The kanji for "leader" is supposed to be like the diagram at: http://jisho.org/kanji/details... [jisho.org] (note the 4 individual lines for the top right piece), but the fonts on my Android phone insisted on rendering this glyph with the Chinese form, which looks like http://www.hantrainerpro.com/h... [hantrainerpro.com]
It's not just drawn differently; it actually has one less stroke in Chinese, but it's supposed to be the same glyph somehow!
Unicode has no way to indicate which language you want characters like this displayed in. Sure, for single-language documents like HTML, you can use a lang= attribute and hope the browser handles it right, but you certainly can't mix the two together very easily.

Re:Han unification (1)

draconx (1643235) | about 5 months ago | (#47257681)

Sure for single-language documents like HTML, you can use a lang= attribute and hope the browser handles it right, but you certainly can't mix the two together very easily.

Pretty easy to mix them in HTML. For example:

<p lang='en'>The Japanese version is '<span lang='ja'>&#x5c06;</span>' and the Simplified Chinese version is '<span lang='zh-Hans'>&#x5c06;</span>'.</p>

My browser displays the appropriate glyph in each instance.

Re:Han unification (1)

Ark42 (522144) | about 5 months ago | (#47262189)

And this highlights an incredibly deep flaw in Unicode... plus, unfortunately, the app I was using on Android wasn't rendering with HTML, so I was basically out of luck there.

Peso vs. Dollar (2)

steelfood (895457) | about 5 months ago | (#47251237)

It's great they're adding new currency symbols for new currencies, but there's still a long-standing issue of the $ with one bar and $ with two bars. It's currently still considered a stylistic difference, but the scope of Unicode has evolved to account for every glyph known to man. Certainly, one- and two-bar $ can hardly be said to be the same glyph within this new context.

Especially considering that there are already stylistic duplicates (half-width and full-width latin forms vs. plain latin), I can't seem to understand the justification behind letting one- and two-bar $, which are historically separate glyphs, be underrepresented.

Re:Peso vs. Dollar (0)

Anonymous Coward | about 5 months ago | (#47251335)

incorrect! unicode contains *no glyphs*, only *codepoints*. there's a reason for the linguistic tapdance: they are trying very hard to avoid encoding font information into unicode.

the *-width latin codepoints are bizarre, and it's too bad they were included. but it is perhaps best not to double down on a mistake.

Re:Peso vs. Dollar (4, Informative)

lithis (5679) | about 5 months ago | (#47251403)

Many of the stylistic duplicates, for example the half-width and full-width latin forms that you mentioned, are only in Unicode because of backwards compatibility with pre-Unicode character sets. If there hadn't been character sets that had different encodings for half- and full-width forms, Unicode never would have had them either. So you can't use them to argue for more glyph variations in Unicode. The same applies to many of the formatted numbers, such as the Unicode characters "VII" (U+2166), "7." (U+248E), "(7)" (U+247A), and "1/7" (U+2150), and units of measure ("cm^2", U+33A0).

(Oh, for Unicode support in Slashdot....)
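Unicode itself flags these compatibility characters: they carry NFKC decompositions back to their "plain" equivalents, which is easy to verify in Python:

```python
import unicodedata

# The round-trip-compatibility characters mentioned above decompose to
# ordinary characters under NFKC normalization, marking them as legacy
# baggage rather than first-class distinctions:
assert unicodedata.normalize("NFKC", "\u2166") == "VII"  # ROMAN NUMERAL SEVEN
assert unicodedata.normalize("NFKC", "\u248E") == "7."   # DIGIT SEVEN FULL STOP
assert unicodedata.normalize("NFKC", "\u33A0") == "cm2"  # SQUARE CM SQUARED
print("all compatibility decompositions confirmed")
```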

Re:Peso vs. Dollar (1)

steelfood (895457) | about 5 months ago | (#47257145)

My argument isn't that the one- and two-bar $ are variations that deserve two code points, but that they are indeed separate glyphs that deserve separate code points. There's historical as well as current cultural precedent for this. For Unicode to aspire to represent all written symbols (especially now that it's taken on emoji), this treatment of the two different $ continues to baffle me.

My point about the half- and full-width glyph variations is that they exist. I just find it odd that a character with what I think is a stronger case for a separate code point is completely marginalized.

Re:Peso vs. Dollar (1)

91degrees (207121) | about 5 months ago | (#47252383)

This is strange. The UK Pound is U+00A3 and the Italian Lira is U+20A4. While the latter has two lines across, two lines is acceptable for the pound and a single line was acceptable for the Lira.

(Not that anyone still uses the Italian Lira, but other countries use the symbol and people may still write about it.)

Re:Peso vs. Dollar (0)

Anonymous Coward | about 5 months ago | (#47252553)

It's the same symbol: "lira" translates as "pound", and both the UK pound sign and the lira sign are taken from the Roman libra.

Re:Peso vs. Dollar (1)

TangoMargarine (1617195) | about 5 months ago | (#47254549)

And people wonder why Unicode is so hard to do...we can't even keep them straight in real life.

The irony (1)

Anonymous Coward | about 5 months ago | (#47251249)

Slashdot celebrates new version of Unicode...

2,834 glyphs, not characters (0)

Anonymous Coward | about 5 months ago | (#47251409)

A character is like the letter 'A', which is represented once for each language table that has a letter 'A'.

Proprietary fonts (5, Insightful)

ortholattice (175065) | about 5 months ago | (#47251909)

Over the years, I've tried to use Unicode for math symbols on various web pages, and I tend to revert to GIFs or LaTeX-generating tools due to problems with symbols missing from the font used by this or that browser/OS combination, or even incorrect symbols in some cases.

IMO the biggest problem with Unicode is the lack of a public domain reference font. Instead, it is a mishmash of proprietary fonts each of which only partly implements the spec. Even the Unicode spec itself uses proprietary fonts from various sources and thus cannot be freely reproduced (it says so right in the spec), a terrible idea for a supposed "standard".

I'd love to see a plain, unadorned public-domain reference font that incorporates all defined characters - indeed, it would seem to me to be the responsibility of the Unicode Standard committee to provide such a font. Then others can use it as a basis for their own fancy proprietary font variations, and I would have a reliable font I could revert to when necessary.

Re:Proprietary fonts (1)

SEE (7681) | about 5 months ago | (#47252003)

Why do you think an official Unicode font would solve your mathematical symbol problem when the already-available STIX [stixfonts.org] hasn't?

Re:Proprietary fonts (1)

Swistak (899225) | about 5 months ago | (#47252457)

Probably because then, when starting a new font design, you just fork the reference font and replace the glyphs you want or have to. Your font will display your glyphs in the places you care about and fall back to the standard glyphs for the ones you didn't implement.

Re:Proprietary fonts (1)

StripedCow (776465) | about 5 months ago | (#47252713)

The problem is not with Unicode. Don't blame the character set, blame the font-specification, the software, and copyrights (!)

In my view, every font that does not specify all unicode characters should point to one or more fall-back fonts, and the search should proceed recursively. Eventually, there should be a default "unicode" font implementing all characters.

Also, fonts should not be copyrightable, because that adds greatly to the whole mess.
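The recursive fallback search described above can be sketched in a few lines. This is purely illustrative; `Font` here is a hypothetical object, not any real font API:

```python
# Illustrative sketch of a recursive font-fallback chain: each font covers
# some code points itself and delegates the rest to fall-back fonts.
class Font:
    def __init__(self, name, coverage, fallbacks=()):
        self.name = name
        self.coverage = coverage        # any container of code points; "in" is used
        self.fallbacks = list(fallbacks)

    def resolve(self, char):
        """Return the first font in the chain covering char, or None."""
        if ord(char) in self.coverage:
            return self
        for fallback in self.fallbacks:
            found = fallback.resolve(char)
            if found is not None:
                return found
        return None

# The proposed default "unicode" font sits at the end of every chain and
# claims the entire code space:
default = Font("unicode-default", range(0x110000))
fancy = Font("fancy-latin", range(0x80), fallbacks=[default])

assert fancy.resolve("A").name == "fancy-latin"
assert fancy.resolve("漢").name == "unicode-default"
```

A real implementation would also need to guard against cycles in the fallback graph, which this sketch omits.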

Re:Proprietary fonts (0)

Anonymous Coward | about 5 months ago | (#47252857)

GNU Unifont is pretty complete. Sure, it's ugly and monospaced, but it's a freely-available font.

Re:Proprietary fonts (1)

Sarlok (144969) | about 5 months ago | (#47254477)

These folks [sil.org] have several open fonts that cover some lesser-used code points. They don't have a big font with everything, but the Doulos [sil.org] font has pretty good coverage for Latin and Cyrillic scripts.

Re:Proprietary fonts (1)

juancnuno (946732) | about 5 months ago | (#47255605)

I agree that it's a problem but I don't think it's Unicode's. I don't think the consortium has set out to do anything but encode characters (and I think they're doing a good job). I imagine that coming up with a font for all those characters would be another massive undertaking.

And as much as I champion free software I would have no problem with a company stepping in and filling that need by selling such a font.

Middle finger (0)

Anonymous Coward | about 5 months ago | (#47252301)

I read somewhere they are having the middle finger emoji:
emojipedia.org/reversed-hand-with-middle-finger-extended/
Real or not?

Re:Middle finger (1)

DaphneDiane (72889) | about 5 months ago | (#47252393)

I believe you are referring to U+1F595 [unicode.org] .

Re:Middle finger (1)

rubycodez (864176) | about 5 months ago | (#47255223)

ok, so we have a character for fuck you, but none for fucking? most of us wouldn't be here but for the fucking.

Emoji? (2)

bradley13 (1118935) | about 5 months ago | (#47252591)

Great, Unicode is already a fragmented mess, and now the standards organization justifies its existence by adding characters that do not exist.

An earlier poster asked why anyone thinks Unicode is fragmented. The answer in one word: fonts. Different fonts support different subsets of Unicode, because the whole thing is just too big. If you expect your font to mostly be used in Europe, you are unlikely to bother with Asian characters. If you have an Asian font, it probably has only English characters, not the rest of Europe's. If you have a font with complete mathematical symbols, it will include the Greek alphabet, but actual language support is a crapshoot.

So the solution to this problem is to add made-up characters that no one cares about. "Man in business suit, levitating". Really?

Re:Emoji? (0)

Anonymous Coward | about 5 months ago | (#47252641)

This is not what 'fragmented' means. And any decent GUI toolkit will look for other fonts on the system to fill in glyphs not present in the font it's currently using.

What's /your/ solution to this problem? Forbid people to write in Chinese?

Re:Emoji? (1)

laie_techie (883464) | about 5 months ago | (#47255101)

Great, Unicode is already a fragmented mess, and now the standards organization justifies its existence by adding characters that do not exist.

An earlier poster asked why anyone thinks Unicode is fragmented. The answer in one word: fonts. Different fonts support different subsets of Unicode, because the whole thing is just too big. If you expect your font to mostly be used in Europe, you are unlikely to bother with Asian characters. If you have an Asian font, it probably has only English characters, not the rest of Europe's. If you have a font with complete mathematical symbols, it will include the Greek alphabet, but actual language support is a crapshoot.

You are correct in the reason that most fonts only contain a subset of Unicode code points. There are thousands of code points. Most documents will only use a small subset. Why should I have to have all those Chinese or Arabic characters when I only write in English, Spanish, Portuguese, and Hawaiian? People who read and write Hawaiian have fonts which support the Hawaiian letters `okina and kahako. Chinese have fonts which support the Chinese glyphs.

As for language support, that isn't a font's problem. It's up to the writer to know how to intelligently combine glyphs into words, and words into coherent thoughts.

Re:Emoji? (1)

rubycodez (864176) | about 5 months ago | (#47255241)

Your assertion that the characters don't exist is provably false: chat software produces them, and hundreds of millions of people use them.
