
ECMAScript Version 5 Approved

timothy posted more than 4 years ago | from the javascript-by-any-other-name dept.

Programming 158

systembug writes "After 10 years of waiting and some infighting, ECMAScript version 5 is finally out, approved by 19 of the 21 members of the ECMA Technical Committee 39. JSON is in; Intel and IBM dissented. IBM is obviously in disagreement with the decision against IEEE 754r, a floating-point format that represents decimal numbers correctly but slowly, despite pleas by Yahoo's Douglas Crockford." (About 754r, Crockford says "It was rejected by ES4 and by ES3.1 — it was one of the few things that we could agree on. We all agreed that the IBM proposal should not go in.")



use fixed point instead (4, Insightful)

StripedCow (776465) | more than 4 years ago | (#30365168)

instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.

and if you're afraid that you might mix up floating point and fixed point numbers, just define a special type for the fixed-point numbers, and define corresponding overloaded operators... oh wait
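The parent's scheme can be sketched in a few lines of JavaScript (the function names here are mine, purely for illustration):

```javascript
// Store money as an integer count of cents; arithmetic stays plain
// integer math, and the decimal point only appears at display time.
function toCents(dollars, cents) {
  return dollars * 100 + cents;
}

function formatCents(total) {
  var sign = total < 0 ? "-" : "";
  var abs = Math.abs(total);
  var dollars = Math.floor(abs / 100);
  var cents = abs % 100;
  return sign + dollars + "." + (cents < 10 ? "0" + cents : cents);
}

var price = toCents(19, 99);            // 1999
var tax = toCents(1, 60);               // 160
var display = formatCents(price + tax); // "21.59"
```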

Re:use fixed point instead (1)

swimboy (30943) | more than 4 years ago | (#30365494)

instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.

and if you're afraid that you might mix up floating point and fixed point numbers, just define a special type for the fixed-point numbers, and define corresponding overloaded operators... oh wait

Do you work for these guys? [slashdot.org]

Re:use fixed point instead (2, Funny)

StripedCow (776465) | more than 4 years ago | (#30365736)

Do you work for these guys?

No, but they can hire me... I'm already looking forward to the amount they'll put on my paycheck...

Re:use fixed point instead (1)

antifoidulus (807088) | more than 4 years ago | (#30365584)

Hah, if you hadn't made that mundane mistake in your little scheme it would have worked perfectly, but Lumbergh is on to you, and you are going to federal pound-me-in-the-ass prison!

Re:use fixed point instead (2, Insightful)

shutdown -p now (807394) | more than 4 years ago | (#30367260)

instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.

There are several problems here.

First of all, quite obviously, having to do this manually is rather inefficient. I do not know if it's a big deal for JS (how much of JS code out there involves monetary calculations), but for languages which are mostly used for business applications, you really want something where you can write (a+b*c).

If you provide a premade type or library class for fixed point, then two decimal places after the point isn't enough - some currencies in the world subdivide into 10,000 subunits. So you need at least four.

Finally - and perhaps most importantly - while 4 places is enough to store any such value, it's not enough to do arithmetic on it, because you'll get relatively large rounding errors (you may start losing cents after as few as two operations).

All in all, decimal floating-point arithmetic just makes more sense. It's also more generally useful than fixed point (it's not just money where you want decimal).

Re:use fixed point instead (0)

Anonymous Coward | more than 4 years ago | (#30367302)

No, you have to define all these functions in advance and then there is no problem.

Re:use fixed point instead (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30368028)

instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.

There are several problems here.

First of all, quite obviously, having to do this manually is rather inefficient. I do not know if it's a big deal for JS (how much of JS code out there involves monetary calculations), but for languages which are mostly used for business applications, you really want something where you can write (a+b*c).

It's mostly input and output routines that need to monkey with the decimal point. You can still write (a+b*c) when you are dealing with pennies or cents.

If you provide a premade type or library class for fixed point, then two decimal places after the point isn't enough - some currencies in the world subdivide into 10,000 subunits. So you need at least four.

You would always use the smallest subdivision of a currency as the unit for calculations. For the US dollar you store everything as cents, for the Tunisian dinar you store everything as milims.

Finally - and perhaps most importantly - while 4 places is enough to store any such value, it's not enough to do arithmetic on it, because you'll get relatively large rounding errors (you may start losing cents after as few as two operations).

You could use some other fixed-point arithmetic. One of the linked articles was talking about using "9E6", where the numbers are 64 bits scaled by 9 million. That sounds a bit strange, but it gives you a fair number of places after the decimal point, and lets you store a lot of common fractions (1/3, 1/360, etc.) exactly.

Either that, or use floating point but make the unit the smallest subdivision of the currency in question. You lose a bit off the top end (so your national deficit can only go up to 10^306 dollars instead of 10^308 or whatever) but you can store exact representations for whole numbers of cents/pennies/milims.
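The last point is easy to verify: IEEE 754 doubles cannot represent most decimal fractions, but they do represent whole numbers exactly up to 2^53, so integer counts of cents/pennies/milims are safe:

```javascript
// Decimal fractions are only approximated in binary floating point...
console.log(0.1 + 0.2 === 0.3);  // false
console.log(0.1 + 0.2);          // 0.30000000000000004

// ...but whole numbers of currency subunits are exact:
console.log(10 + 20 === 30);     // true

// The ceiling for exact consecutive integers in a double is 2^53:
console.log(Math.pow(2, 53) === 9007199254740992);  // true
```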

Re:use fixed point instead (1)

molecular (311632) | more than 4 years ago | (#30367874)

instead of using floating point for representing decimal numbers, one can of coarse easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.

and let all hell break loose when someone forgets to multiply / divide by 100, like happened here: http://entertainment.slashdot.org/story/09/11/25/1448218/Moving-Decimal-Bug-Loses-Money [slashdot.org]

Re:use fixed point instead (1)

Yaa 101 (664725) | more than 4 years ago | (#30368548)

uhm, make that times 10,000 - one needs 4 decimal places for correct rounding etc.

Re:use fixed point instead (1)

DragonWriter (970822) | more than 4 years ago | (#30368792)

instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.

Of course, you still have to use floating point operations (or horrible, case-by-case workarounds) if you are doing anything other than addition, subtraction, and multiplication by integer values, even if the representation you use for the inputs and results is fixed-point, and you'll have to convert the results of the floating point operations to and from your fixed-point representation. When you don't have very specific rounding rules for the operations, this isn't problematic, but the more specific the rules you have, the more cumbersome this will be, particularly if the rules are based on base-10 representation (as they often are in real-world uses) and the floating point representation is base-2.

If you use a base-10 floating point system (representation and operations) with good support for rounding the ways you need to start with, you can express the operations you want performed on the numbers more directly in your code.

Re:use fixed point instead (1)

angel'o'sphere (80593) | more than 4 years ago | (#30369470)

So, are you sure you really understand the difference between floating-point and fixed-point numbers?
Would you care to point out, let's say, 3 of the advantages of each and 3 of the disadvantages of each? Feel free to intermix math dis-/advantages with implementation dis-/advantages ...

Man oh man, if you had ever implemented a fixed-point math lib you would know that it lacks dozens of features a floating-point implementation gives you (for a trade-off, ofc).

angel'o'sphere

Re:use fixed point instead (1)

DamonHD (794830) | more than 4 years ago | (#30369758)

Not all currencies have two digits after the decimal point.

Rgds

Damon

JSON is in!? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30365208)

JSON was *always* in Ecmascript. It's just object literal notation, a shorthand way of instantiating an object with properties.

Re:JSON is in!? (4, Insightful)

maxume (22995) | more than 4 years ago | (#30365482)

Implementations are expected to provide a safe(r) parser than eval.
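That safer parser is ES5's built-in JSON.parse, which accepts only the JSON grammar and never executes code; a minimal illustration:

```javascript
// A safe parse of pure data:
var obj = JSON.parse('{"name": "widget", "price": 1999}');
console.log(obj.price);          // 1999

// The same string through eval would also work, but eval will just as
// happily execute arbitrary code embedded in the "data". JSON.parse
// rejects anything that is not valid JSON:
var threw = false;
try {
  JSON.parse('{"x": alert(1)}'); // not valid JSON: throws SyntaxError
} catch (e) {
  threw = true;
}
console.log(threw);              // true
```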

Re:JSON is in!? (1)

SharpFang (651121) | more than 4 years ago | (#30365806)

and possibly a creator (serialize object as JSON string) as well?

Re:JSON is in!? (1)

aztracker1 (702135) | more than 4 years ago | (#30366488)

see json.org [json.org] ... it's pretty good already. Though having an in-browser JSON.stringify and parse will probably perform better, and more safely. I think that adding toISOString to dates, and including that format in the parser, would be a nice built-in addition, but there are examples in the json.org parser/encoder already.
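Both pieces the parent mentions did land in ES5; a quick sketch of the built-ins:

```javascript
// JSON.stringify / JSON.parse round-trip:
var data = { id: 7, tags: ["a", "b"] };
var text = JSON.stringify(data);         // '{"id":7,"tags":["a","b"]}'
var back = JSON.parse(text);
console.log(back.tags[1]);               // "b"

// ES5 also adds Date.prototype.toISOString, and JSON.stringify uses it
// (via Date.prototype.toJSON) when serializing dates:
var d = new Date(Date.UTC(2009, 11, 9)); // 9 Dec 2009; month is 0-based
console.log(d.toISOString());            // "2009-12-09T00:00:00.000Z"
```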

Re:JSON is in!? (1)

Sancho (17056) | more than 4 years ago | (#30366094)

Is that how JSON took hold? By being easy to parse using eval?

Man, that really stinks. If they'd bothered to care about security in the first place, they could have just used XML instead of inventing a new serialized object format.

Re:JSON is in!? (4, Informative)

amicusNYCL (1538833) | more than 4 years ago | (#30366318)

Is that how JSON took hold? By being easy to parse using eval?

No, it took hold because it's Javascript's native object notation. And, as you can imagine, if you have a string of code you use eval to convert it. There are several JSON parsers which do some validation before using eval to ensure that it contains only an object definition and no statements. It would be nice if that validation was standardized and built-in.

they could have just used XML instead

Developers have a choice between XML and JSON (XML is already well supported), but many developers choose JSON instead of XML. Among other things, a JSON structure is typically smaller than a comparable XML structure, and when it's decoded you don't need to use anything special to use it.

instead of inventing a new serialized object format.

They didn't really invent this so much as realize that the native object format can easily be used to transfer arrays and objects between languages. It's very easy to create an associative array in PHP, encode it and send it to Javascript, and end up with virtually the exact same data structure in Javascript. Working with an associative array in PHP (or Javascript) is obviously a lot easier than working with an XML structure. Virtually any language you would use on a server has support for associative arrays or generic objects, so it makes a lot of sense to pass those structures around in a way where you lose no meaning and each language natively supports it.
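The size difference the parent mentions is easy to see with the same toy record in both formats:

```javascript
// One record, two encodings. The XML element names repeat in closing
// tags, so the JSON form is shorter for the same data:
var json = '{"id": 1, "name": "widget"}';
var xml = '<item><id>1</id><name>widget</name></item>';
console.log(json.length < xml.length);   // true

// And the decoded result needs no traversal API; it is just an object:
var item = JSON.parse(json);
console.log(item.name);                  // "widget"
```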

Re:JSON is in!? (1)

Sancho (17056) | more than 4 years ago | (#30366398)

Interesting. Thanks for the explanation.

Re:JSON is in!? (0)

Anonymous Coward | more than 4 years ago | (#30367622)

JSONP is also just about the only way to grab data from non-same-origin sources, making it the standard for mashup-type content.

Re:JSON is in!? (1)

asdf7890 (1518587) | more than 4 years ago | (#30366408)

Is that how JSON took hold? By being easy to parse using eval?

It is also easy to not-bother-parsing - just send it down with wrapper code if you are calling a script or file that returns JS code (var somevar = <JSON_stuff>;).

Potentially as nasty as eval() for exactly the same reasons, so much sanity checking is still required if the JSON you send is coming (entirely or partially) from the user or from persistent storage like the DB.

It is a handy format for defining whole structures in their initial state in code - it can be much more concise, and easier to read afterwards, than a pile of code creating arrays/objects and setting values/properties.

Re:JSON is in!? (1)

maxume (22995) | more than 4 years ago | (#30366410)

I don't think that was a primary reason (safe parsers are, and have been, widely available), I worded my comment in that way because I didn't want to argue about whether or not eval qualified as a json parser.

Re:JSON is in!? (1)

SharpFang (651121) | more than 4 years ago | (#30368280)

JSON is a pretty limited syntax so filtering off anything dangerous (by whitelisting legal structures) is quite easy before you launch eval() on it.

Re:JSON is in!? (1)

SharpFang (651121) | more than 4 years ago | (#30365714)

A parser for a superset of JSON was in Ecmascript, with no direct, simple or easy way to limit it to the JSON set. This is mostly OK (if dangerous) if Ecmascript is on the receiving end.

OTOH in order to generate/send JSON, things get more complicated. The usual communication is asymmetric: client->server: HTTP GET/POST, server->client: JSON. Now it would be possible to keep a symmetric connection. And accepting JSON as a standard will protect against a lot of script insertion vulnerabilities, where "JSON" could contain code appended after the closing brace.

I can guess why IBM was pushing for IEEE 754r (4, Interesting)

tjwhaynes (114792) | more than 4 years ago | (#30365214)

The debate over floating point numbers in ECMAScript is interesting. IEEE 754 has plenty of pitfalls for the unwary but it has one big advantage - it is directly supported by the Intel-compatible hardware that 99+% of desktop users are running. Switching to IEEE 754r in ECMAScript would have meant a speed hit to the language on the Intel platform until Intel supports it in hardware. This is an area where IBM already has a hardware implementation of IEEE 754r - it's available on the POWER6 platform, and I believe that the z-Series also has a hardware implementation. I suspect that IBM will continue to push for IEEE 754r in ECMAScript; I wonder whether Intel is considering adding IEEE 754r support to its processors in the future.

Disclaimer: I have no contact with the IBM ECMAScript folks.

Cheers,
Toby Haynes

Re:I can guess why IBM was pushing for IEEE 754r (1)

TrancePhreak (576593) | more than 4 years ago | (#30365424)

There's also the mobile realm, which I don't think IBM has even set foot in. Not adopting IEEE 754r at this time seems like the right thing to do.

Re:I can guess why IBM was pushing for IEEE 754r (2, Interesting)

H0p313ss (811249) | more than 4 years ago | (#30365592)

There's also the mobile realm, which I don't think IBM has even set foot in.

The IBM JVM [ibm.com] is used in mobiles. Lenovo (part owned by IBM) has/had a cellphone division [chinadaily.com.cn] .

Re:I can guess why IBM was pushing for IEEE 754r (2, Informative)

TrancePhreak (576593) | more than 4 years ago | (#30365858)

It doesn't seem like any of the ARM based procs support IEEE 754r.

Re:I can guess why IBM was pushing for IEEE 754r (1)

dkf (304284) | more than 4 years ago | (#30366462)

It doesn't seem like any of the ARM based procs support IEEE 754r.

For a long time, they didn't support straight IEEE 754 either, and did all float handling in software. (Don't know if that's changed. At the time I was involved in circuit-level simulation of ARM CPUs, but that was ages ago.)

Re:I can guess why IBM was pushing for IEEE 754r (1)

TrancePhreak (576593) | more than 4 years ago | (#30366526)

Some of the Cortex ones support 754, but not 754r. It's an optional component, however.

Re:I can guess why IBM was pushing for IEEE 754r (5, Insightful)

teg (97890) | more than 4 years ago | (#30365536)

IEEE 754 has plenty of pitfalls for the unwary but it has one big advantage - it is directly supported by the Intel-compatible hardware that 99+% of desktop users are running. Switching to IEEE 754r in ECMAScript would have meant a speed hit to the language on the Intel platform until Intel supports it in hardware. This is an area where IBM already has a hardware implementation of IEEE 754r - it's available on the POWER6 platform and I believe that the z-Series also has a hardware implementation.

ECMAScript is client side, so I don't think that was the issue. z-Series is server only, and POWER6 is almost all servers - and for POWER workstations, the ability to run javascript a little bit faster has almost zero value. The more likely explanation is that IBM has its roots in business, and places more importance on correct decimal handling than companies with roots in other areas where this didn't matter much.

Re:I can guess why IBM was pushing for IEEE 754r (3, Informative)

BZ (40346) | more than 4 years ago | (#30366770)

> ECMAScript is client side,

You may be interested in http://en.wikipedia.org/wiki/Server-side_JavaScript [wikipedia.org]

I agree that user-visible ECMAScript is client-side, but user-visible _everything_ is client-side, really.

Re:I can guess why IBM was pushing for IEEE 754r (0)

Anonymous Coward | more than 4 years ago | (#30367012)

IEEE 754 has plenty of pitfalls for the unwary but it has one big advantage - it is directly supported by the Intel-compatible hardware that 99+% of desktop users are running. Switching to IEEE 754r in ECMAScript would have meant a speed hit to the language on the Intel platform until Intel supports it in hardware. This is an area where IBM already has a hardware implementation of IEEE 754r - it's available on the POWER6 platform and I believe that the z-Series also has a hardware implementation.

ECMAScript is client side, so I don't think that was the issue. Z-series is server only, and POWER6 is almost all servers - and for POWER workstations, the ability to run javascript a little bit faster has almost zero value. The more likely explanation is that IBM has its roots in business, and puts more importance into correct decimal handling than companies with their roots in other areas where this didn't matter much.

Since IBM has hardware IEEE 754r support and Intel does not, IBM likely has patents on one or more critical parts of the hardware implementation. If Intel were to implement 754r on their chips, it would likely get them sued, regardless of whether or not their engineers even looked at the patents, since there may only be one reasonable way to solve the problem. So IBM wants to drive 754r into everyone's chips and make a bunch of money in licenses and/or lawsuits.

Pretty standard business practice in the chip business. Just look at the history between Intel and AMD.

Re:I can guess why IBM was pushing for IEEE 754r (1)

Lord Lode (1290856) | more than 4 years ago | (#30365610)

Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

Re:I can guess why IBM was pushing for IEEE 754r (2, Insightful)

csnydermvpsoft (596111) | more than 4 years ago | (#30365664)

Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

I don't know about you, but I'd prefer that computers adapt to the way I think rather than vice-versa.

Re:I can guess why IBM was pushing for IEEE 754r (4, Informative)

tjwhaynes (114792) | more than 4 years ago | (#30365704)

Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

Clearly you've never dealt with an irate customer who has spent $$$ on your software product, has created a table using "REAL" (4-byte floating point) types, and then wonders why the sums are screwing up. IEEE 754 can't accurately represent most fractions in the way that humans do, and this means that computers using IEEE 754 floating point give different answers than a human sitting down with pen and paper and doing the same sums. As humans are often the consumers of the information that the computer spits out, making computers produce the correct results is important.

There are plenty of infinite-precision computing libraries out there for software developers to use. However, they are all a lot slower than the 4-, 8- or 10-byte floating point IEEE 754 calculations which are supported directly by the hardware. Implementing the IEEE 754r calculations directly on the CPU means that you can get close to the same performance levels. I'm guessing that at best, 128-bit IEEE 754r performs at about half the speed of 64-bit IEEE 754, purely because of the data width.

Cheers,
Toby Haynes

Re:I can guess why IBM was pushing for IEEE 754r (1)

Lord Lode (1290856) | more than 4 years ago | (#30365870)

I see it now! I wish I could mod this reply up.

Re:I can guess why IBM was pushing for IEEE 754r (2, Informative)

systembug (172239) | more than 4 years ago | (#30365928)

(...) I'm guessing that at best, 128 bit IEEE 754r performs about half the speed of 64bit IEEE 754, purely because of the data width.

According to Douglas Crockford, "...it's literally hundreds of times slower than the current format."

Re:I can guess why IBM was pushing for IEEE 754r (4, Informative)

tjwhaynes (114792) | more than 4 years ago | (#30366200)

(...) I'm guessing that at best, 128 bit IEEE 754r performs about half the speed of 64bit IEEE 754, purely because of the data width.

According to Douglas Crockford, "...it's literally hundreds of times slower than the current format."

I don't doubt that the software implementations are "hundreds of times slower". I've had my hands deep into several implementations of decimal arithmetic and none of them are even remotely close to IEEE 754 in hardware. IEEE 754r is better than some of the predecessors because a software implementation can map the internal representation to integer maths. However, IEEE 754r does exist in hardware and I was guessing that the hardware IEEE 754r is still half the speed of hardware IEEE 754.

One other thing that IEEE 754 has going for it is the emerging GPU-as-co-processor field. The latest GPUs can do full 64bit IEEE 754 in the stream processors, making massive parallel floating point processing incredibly speedy.

Cheers,
Toby Haynes

Re:I can guess why IBM was pushing for IEEE 754r (1)

systembug (172239) | more than 4 years ago | (#30366402)

One should not forget that this time, Intel processors are not the only ones to be considered. With the rise of the iPhone, RIM, and others, Javascript performance on ARM is a serious issue.

Re:I can guess why IBM was pushing for IEEE 754r (1)

NNKK (218503) | more than 4 years ago | (#30367890)

If it's implemented right, you shouldn't take much of a performance hit, but the FPU would be more complex, with a _lot_ more transistors.

The only real speed hit should be transferring the larger values to/from RAM, but if we're talking about e.g. x86-64 growing 754r support, then QPI and HyperTransport are so bloody fast I doubt you'd notice much for most applications using floating point.

Re:I can guess why IBM was pushing for IEEE 754r (1)

DragonWriter (970822) | more than 4 years ago | (#30368914)

According to Douglas Crockford "...it's literally hundreds of times slower than the current format.".

Presumably, that's comparing execution speed in environments with hardware support of the old standard, but using software-only for the new standard.

Financial Calculations (5, Insightful)

pavon (30274) | more than 4 years ago | (#30365754)

When you have strict regulatory rules about how rounding must be done, and your numerical system can't even represent 0.2 exactly, then it is most certainly a concern. There are other solutions, such as using base-10 fixed point calculations rather than floating point, but having decimal floating point is certainly more convenient, and having a hardware implementation is much more efficient.

Re:Financial Calculations (0)

Anonymous Coward | more than 4 years ago | (#30366326)

When you have strict regulatory rules about how rounding must be done, and your numerical system can't even represent 0.2 exactly, then ...

.. don't use javascript.

Re:Financial Calculations (3, Insightful)

iluvcapra (782887) | more than 4 years ago | (#30366660)

If you know someone that is using IEEE floats or doubles to represent dollars internally, reach out to them and get them help, and let them know that just counting the pennies and occasionally inserting a decimal for the humans is much, much safer! ;)

Re:Financial Calculations (1)

DragonWriter (970822) | more than 4 years ago | (#30368866)

If you know someone that is using IEEE floats or doubles to represent dollars internally, reach out to them and get them help, and let them know that just counting the pennies and occasionally inserting a decimal for the humans is much, much safer! ;)

Using IEEE decimal floating point is also safer, and involves less conversion and better control of rounding when you have to do operations that can't be done with pure integer math. Like, you know, anything involving percentages.

Financial calculations involve more than just tracking deposits and withdrawals of amounts given in dollars-and-cents or the equivalent.

Re:Financial Calculations (0)

Anonymous Coward | more than 4 years ago | (#30366694)

It's still phenomenally stupid to make JAVASCRIPT of all languages use a slower and less precise (!) number format because somebody may want to do financial calculations and doesn't understand floating point math enough to use fixed point math for that purpose.

Calculations in cents (1)

tepples (727027) | more than 4 years ago | (#30366746)

When you have strict regulatory rules about how rounding must be done, and your numerical system can't even represent 0.2 exactly, then it is most certainly a concern.

The numerical system in use at my employer includes a fixed-point representation of money. All money values are assumed to be fractions with a denominator of exactly 100.

Re:Calculations in cents (3, Funny)

Yvan256 (722131) | more than 4 years ago | (#30366992)

Even easier would be to base all the system on cents. After that it's extremely easy to convert that cents value into a dollar/cents string.

Re:Calculations in cents (1)

spud603 (832173) | more than 4 years ago | (#30367162)

umm... nevermind.

Re:Calculations in cents (1, Funny)

Anonymous Coward | more than 4 years ago | (#30367576)

WHOOSH

Re:Calculations in cents (1)

mpilsbury (513793) | more than 4 years ago | (#30369118)

How would this work if you needed to localise your application to handle some of the Middle Eastern currencies that use 1000 sub-units instead of 100?

Re:Calculations in cents (1)

Bigjeff5 (1143585) | more than 4 years ago | (#30369246)

Assume all values are fractions with a denominator of 1000?

Re:Financial Calculations (1)

MunkieLife (898054) | more than 4 years ago | (#30366782)

Who in their right mind would use javascript for financial calculations that need to be relied on?

Re:Financial Calculations (2, Insightful)

Bigjeff5 (1143585) | more than 4 years ago | (#30369226)

Why not? There is nothing intrinsically different about the way Javascript is executed on a machine than, say, C. They both eventually make it to machine language for execution, and any errors are going to be in the compiler (whether JIT or compiled in advance). Limitations in the language and the fact that it is interpreted mean there are a lot of things you can do in C that you cannot do in Javascript, but none of that applies to raw calculations. C is just as susceptible to the floating point problem as Javascript, and the methods to avoid that pitfall are identical in Javascript and C. Since .1 + .2 != .3 in both, the dangers are the same.

The real question you should be asking is, who in their right mind would let a programmer who does not understand the pitfalls of floating point calculations write code for financial calculations that need to be relied on?

Re:Financial Calculations (1)

MunkieLife (898054) | more than 4 years ago | (#30366954)

Doesn't everyone know not to use floating point numbers for financial calculations? Or at least understand the limitations or faults associated with them...

Re:Financial Calculations (1)

DragonWriter (970822) | more than 4 years ago | (#30368560)

Doesn't everyone know not to use floating point numbers for financial calculations? Or at least understand the limitations or faults associated with them...

Financial calculations that involve more than addition, subtraction, and multiplication by integer values -- and there are a lot of them -- require the use of something other than integer math. But, yeah, most people understand the pitfalls, which is why the newer standard exists to address them.

Yes, in Javascript (2, Insightful)

pavon (30274) | more than 4 years ago | (#30367484)

There is nothing stupid about using javascript for financial calculations. More and more applications are moving to the web, and the more you can put in the client, the more responsive the application will be. Imagine a budgeting app like Quicken on the web, or a simple loan/savings calculator whose parameters can be dynamically adjusted, with a table/graph changing in response. While critical calculations (when actually posting a payment, for example) should be (re)done on the server, it would not be good if your "quick-look" rounded differently than the final result.

And no, people should not be using floating point for currency, ever, and fixed-point calculations aren't hard. But there is more to it than "just put everything in cents"; for example, you often have to deal with rates that are given as fractions of a cent. A decimal type would make this more convenient.

Finally, I don't know if IBM's proposal is a good one. I haven't looked at it; I was just talking in generalities.
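One way to handle the fractional-cent rates mentioned above while staying in fixed point is to express the rate itself as an integer in a finer unit, e.g. parts per million (the unit choice here is mine, for illustration):

```javascript
// 4.375% interest on $1,234.56, with the balance in integer cents and
// the rate in integer parts-per-million, rounded exactly once:
var balanceCents = 123456;   // $1,234.56
var ratePpm = 43750;         // 4.375% = 43,750 ppm
// 123456 * 43750 = 5,401,200,000, well inside the 2^53 exact-integer
// range of a double, so the product is computed exactly:
var interestCents = Math.round(balanceCents * ratePpm / 1000000);
console.log(interestCents);  // 5401, i.e. $54.01
```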

You can't do it server side in PHP either (1)

bigtrike (904535) | more than 4 years ago | (#30368954)

PHP doesn't have a built-in fixed-point type either, yet many programmers use it for financial calculations and simply round to 2 decimal places when it's time to display. Sure, you can use the bcmath extension (about as easily as writing your own fixed-point math code in Javascript), but very few people do.

Re:I can guess why IBM was pushing for IEEE 754r (0)

Anonymous Coward | more than 4 years ago | (#30367044)

Why do processors need decimal number support?

One reason really sticks out. Money.
Decimals give easy and *exact* support for currency units and their sub-units expressed in their normal x.yy form. Binary doesn't. 0.10 (for example) cannot be expressed exactly in binary no matter how many bits you use.

Yes, you can store these amounts exactly in binary by (e.g.) multiplying by *decimal* 100. But this is not ideal since it introduces an extra step which which may be missed out and cause incorrect results with financial consequences (e.g. sum of different inputs processed through different code paths; one input is not multiplied by 100 and you end up with a plausible looking but incorrect value; and yes, this *should* be caught in testing but not all bugs are detected, and this bug *could not occur* using decimals).

Or you could use a well-tested decimal library (which is standard practice on platforms without native decimal support). Still less efficient and more error prone (wrong library version etc.) than native support.
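The grandparent's 0.10 example is easy to demonstrate in JavaScript, which uses IEEE 754 binary doubles for all numbers:

```javascript
// Ten additions of 0.1 drift away from 1 in binary floating point:
var sum = 0;
for (var i = 0; i < 10; i++) sum += 0.1;
var driftVisible = (sum === 1);   // false: sum is 0.9999999999999999

// The scaled-integer approach stays exact:
var cents = 0;
for (var j = 0; j < 10; j++) cents += 10;   // ten dimes, in cents
var exact = (cents === 100);      // true
```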

Re:I can guess why IBM was pushing for IEEE 754r (1)

MightyMartian (840721) | more than 4 years ago | (#30368100)

The last time I really dealt with currency issues was on a customized Accounts Receivable program I wrote. The language was VB6 and the database was MySQL, so we're dealing with two different arithmetic libraries, and for the first few weeks I thought my head was going to pop off, with the database giving one result for summing a loooong list of AR entries, but equivalent summing in the software being out. This became particularly awful when dealing with sales tax issues.

Finally, I did precisely as you say, I turned everything into long integers. It meant writing my own little set of arithmetic abstraction functions, so nothing was ever as easy as "x+y=z". It did make debugging more difficult, but at the end of the day, things have to balance, and converting to decimal numbers for display/reporting purposes was the only solution that I could come up with that would reliably give me the same answer regardless of what platforms aspects of the software were running on.

Re:I can guess why IBM was pushing for IEEE 754r (1)

Bigjeff5 (1143585) | more than 4 years ago | (#30369454)

It's a common problem people run into (often without realising it), and the standard way to mitigate it is to always calculate the problem at at least two decimals greater precision than the figures you are working with. So if you are given 1 + 1 = X, you calculate at 1.00 + 1.00 = X. 0.05 becomes 0.0500, etc. You don't remove the precision until after all your calculations are finished, and this virtually eliminates the binary/decimal rounding errors that occur. I could see in some cases it still being an issue, but 99% of the time that will fix it.

This basically just performs the hurdles for accurate binary/decimal conversions for you so you don't have to worry about it. It's still going to be calculated in binary at the hardware level though, and it's the natural difference between binary and decimal that causes the problems, so it is still something to keep in the back of your mind.
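A minimal sketch of that guard-digit advice in JavaScript; `round2` is a hypothetical helper, not a built-in:

```javascript
// Carry extra precision through the calculation, round only at the end:
function round2(x) {
  return Math.round(x * 100) / 100;
}

var raw = 0.1 + 0.2;               // 0.30000000000000004 in binary doubles
var fixed = (round2(raw) === 0.3); // true: the final rounding absorbs the error
```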

Re:I can guess why IBM was pushing for IEEE 754r (2, Insightful)

Junior J. Junior III (192702) | more than 4 years ago | (#30367326)

Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

Yes there is; the human. Humans use base-10 quite a bit, and they use computers quite a bit. It therefore makes a great deal of sense for humans to want to be able to use base-10 when they are using computers. In fact, it's not at all surprising.

Re:I can guess why IBM was pushing for IEEE 754r (1)

Bigjeff5 (1143585) | more than 4 years ago | (#30369350)

Plus the fact that binary math is completely useless for almost all humans.

It has different oddities that we aren't used to, and as such they seem crazy. Nobody cares that it eliminates other oddities, we are used to those and understand them.

So in order to be useful, ALL computers must convert ALL calculations that a human will see into Decimal. If you can do that sooner rather than later, all the better for matching up with what humans will expect (like the results of 2/3).

The decimal standard came about for this very reason - binary-calculated figures come out slightly different from decimal or fraction calculations, and it's disconcerting when you work out a fraction with pen and paper and come out with a "correct" answer that is different from what the computer says. Since you verified it separately, obviously the computer is wrong.

Decimal calculations avoid that pitfall for the most part (a good FP programmer would have written his code to avoid it anyway), and the calculations come out to exactly the same as what a guy with a pen and paper will produce.

Re:I can guess why IBM was pushing for IEEE 754r (1)

DragonWriter (970822) | more than 4 years ago | (#30368252)

Why do processors need decimal number support?

To most efficiently perform tasks that people want computers to do, which more frequently means performing correct math with numbers that have an exact base-10 representation than, say, doing the same with numbers that have an exact base-2 representation.

10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

Well, unless you consider that the users of computers are generally humans, and that the uses of computers are in applications defined by humans to fit human needs, which very often involves consuming base-10 input and producing base-10 output, using operations where converting the given base-10 values to a base-2 approximation for internal processing and back out may produce incorrect, or at least unnecessarily imprecise, results.

Re:I can guess why IBM was pushing for IEEE 754r (1)

angel'o'sphere (80593) | more than 4 years ago | (#30369684)

Do you know what the difference between a natural number, an integral number, a rational number, a real number and an irrational number is?
E.g. the rational number 1/3 is not perfectly describable as a base-10 decimal (0.333333333...).
There are plenty (hm, I wonder if it is a finite set ;D) of base-10 decimals that can't be expressed exactly in base 2; one example: 0.2. It yields a periodic (repeating) fraction in base 2.

In other words, not only is 0.2 not exactly representable in base 2, you also can't calculate 0.2 + 0.2 + 0.2 + 0.2 + 0.2 exactly. In base 10 this yields 1.0 (of course); in base 2 each term and each intermediate sum is only an approximation. (How do you want to handle this???)
See: http://www.digitconvert.com/ [digitconvert.com]
Unfortunately binary 101 (that is, 5) * 0.001100110011001100110011001100110011001100110011001101 (that is, 0.2) is rounded up to 1.0 by the online calculator; well, you probably fail to see my point now, but exactly this rounding problem is my point.

angel'o'sphere
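The drift is visible in the intermediate sums even when a final result happens to round back; in JavaScript (IEEE 754 binary doubles):

```javascript
// Classic cases where the binary approximations of decimal values
// fail exact comparison:
var a = (0.1 + 0.2 === 0.3);        // false
var b = (0.2 + 0.2 + 0.2 === 0.6);  // false: left side is 0.6000000000000001
```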

Re:I can guess why IBM was pushing for IEEE 754r (1)

kantos (1314519) | more than 4 years ago | (#30365938)

Intel does in fact have some support for BCD (note this is not IEEE 754r) in their processors via the BCD [wikipedia.org] opcodes, which could potentially be used to implement IEEE 754r.

Re:I can guess why IBM was pushing for IEEE 754r (1)

mzs (595629) | more than 4 years ago | (#30366860)

No, x86 does only 0-9 BCD; 754r would need 0-999 BCD for any speed improvement compared to 11 adds with carry. Also, I think the BCD opcodes never got extended circa the 80286 like the other ones did to allow other registers to be used.

Re:I can guess why IBM was pushing for IEEE 754r (1)

Yvan256 (722131) | more than 4 years ago | (#30367030)

Screw complicated BCD functions and opcodes... I just do a look-up table. Even with a microcontroller that only has 4KiB you can spare 100 bytes for a look-up table that you can write in seconds instead of wasting 40 bytes and hours of coding and testing and debugging. And don't tell me it's easy to do because not all microcontrollers have enough registers and opcodes to make BCD an easy task.

point in video where Crockford talks about this (2, Informative)

Anonymous Coward | more than 4 years ago | (#30366022)

13:51 into the video is where you want to skip to if you want to hear just this argument. I just don't have the time for the whole 55:42 video. Nice non-youtube player though.

Re:I can guess why IBM was pushing for IEEE 754r (2, Interesting)

roemcke (612429) | more than 4 years ago | (#30366348)

So 754 vs 754r boils down to this: when doing arithmetic using 754, 0.1 + 0.2 != 0.3 (as any half-decent programmer should know). IBM wants to fix it with a new floating point format that can do exact calculations (under certain circumstances) with decimal numbers.

Personally, I see two problems with this:

First, it won't fix the stupid programmer bug. 754r can't guarantee exactness in every situation. For instance, (large_num+small_num)+small_num == large_num != large_num+(small_num + small_num).

Second, ECMAScript is supposed to run on different architectures. It should not depend on specific number representations, not for integers and certainly not for floating points.

In my opinion, the right thing to do is to look at Scheme. Scheme has two numeric types, exact and inexact, and leaves it to the implementation to choose what internal representation to use (normally integer or some rational number for exact numbers, and floating point for inexact). If a function can make an exact calculation with exact numbers as input, it returns an exact number; otherwise it returns an inexact number.
The important thing here is that when the programmer needs exact calculations (for instance monetary calculation with fractional parts), he specifically chooses exact numbers and leaves it to the language implementation to figure out how to represent the numbers in memory.

There is only one minor problem with the Scheme way: it doesn't discourage the use of inexact numbers when it is obvious that the programmer is a moron. An improvement for both ECMAScript and Scheme could be to throw an exception whenever the programmer compares two inexact or floating point numbers for equality.
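In the absence of exact types or such an exception, the usual workaround is a tolerance comparison; `nearlyEqual` below is a hypothetical helper with an arbitrarily chosen default tolerance, not part of any ECMAScript standard:

```javascript
// Compare inexact (floating point) numbers within a relative tolerance
// instead of with ===:
function nearlyEqual(a, b, eps) {
  eps = eps || 1e-9;
  return Math.abs(a - b) <= eps * Math.max(1, Math.abs(a), Math.abs(b));
}

var ok = nearlyEqual(0.1 + 0.2, 0.3);  // true, where (0.1 + 0.2 === 0.3) is false
```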

Re:I can guess why IBM was pushing for IEEE 754r (2, Informative)

Gospodin (547743) | more than 4 years ago | (#30368174)

First, it won't fix the stupid programmer bug. 754r can't guarantee exactness in every situation. For instance, (large_num+small_num)+small_num == large_num != large_num+(small_num + small_num).

Actually, 754r handles situations like these via exception flags. If large_num + small_num == large_num, then the "inexact" and "rounded" flags will be raised (possibly others, too; I haven't looked at this in a while), which the programmer can use to take some alternate logic. It's certainly true that stupid programmers can use these tools incorrectly (or not use them), but isn't that true of any system? Sufficiently stupid programmers can defeat any countermeasures.

Re:I can guess why IBM was pushing for IEEE 754r (1)

roemcke (612429) | more than 4 years ago | (#30369372)

Actually, 754r handles situations like these via exception flags. If large_num + small_num == large_num, then the "inexact" and "rounded" flags will be raised (possibly others, too; I haven't looked at this in a while), which the programmer can use to take some alternate logic.

Will these be accessible from ECMAScript? And will most programmers use them correctly?

It's certainly true that stupid programmers can use these tools incorrectly (or not use them), but isn't that true of any system? Sufficiently stupid programmers can defeat any countermeasures.

Exactly, and that is why I think 754r is a stupid hack. Depending on it makes implementations more complicated without solving the problem it set out to solve: programmers who haven't done their homework.

Having two numeric datatypes, exact and inexact, won't totally solve "the stupid programmer bug" either, but it will make it much easier for most programmers to understand what is going on, and do the right thing. And it will have the added benefit of letting the language implementation choose the right approach for internal numeric representation for the CPU the interpreter is running on.

Re:I can guess why IBM was pushing for IEEE 754r (1)

DragonWriter (970822) | more than 4 years ago | (#30369064)

So 754 vs 754r boils down to this: when doing arithmetic using 754, 0.1 + 0.2 != 0.3 (as any half-decent programmer should know). IBM wants to fix it with a new floating point format that can do exact calculations (under certain circumstances) with decimal numbers.

Well, IBM isn't the only one (outside of the world of ECMAScript standards)--which is why IEEE754-2008 ("IEEE 754r") incorporates decimal floating point and other improvements to the old version of IEEE 754 that it has replaced as a standard.

First, it won't fix the stupid programmer bug. 754r can't guarantee exactness in every situation. For instance, (large_num+small_num)+small_num == large_num != large_num+(small_num + small_num).

The problem for non-stupid programmers is (or one of them, at any rate) that using only binary floating point prevents simple expression of calculations with simple rounding rules, when those rules are defined in terms of base-10 numbers, which is often the case in important application domains.

Second, ECMAScript is supposed to run on different architectures. It should not depend on specific number-representaions, not for integers and certainly not for floating points.

But, supporting only the old IEEE 754 standard also depends on specific number representations for floating points. It just depends on ones that are less convenient from an application perspective, though easier from an implementation perspective.

Re:I can guess why IBM was pushing for IEEE 754r (1)

amicusNYCL (1538833) | more than 4 years ago | (#30366354)

I can understand why IBM dissented, but why did Intel? If they have hardware support for the existing spec why would they dissent? Is it over another issue?

Re:I can guess why IBM was pushing for IEEE 754r (1)

LWATCDR (28044) | more than 4 years ago | (#30366490)

Is IEEE 754r a superset of 754? If so, couldn't it be used on hardware that supports 754 as an option?
Also what does ARM support? Some Arm cores now have FPUs and that is an important architecture for Mobile and is going to be a big percentage of the systems running ECMAScript in the future.

Will this allow us a FOSS alternative to Flash? (0)

AP31R0N (723649) | more than 4 years ago | (#30365554)

Will this allow us a FOSS (competitive) alternative to Flash?

Because that would be sweet.

Re:Will this allow us a FOSS alternative to Flash? (1)

nyctopterus (717502) | more than 4 years ago | (#30365606)

Until we get a slick gui editor for javascript+svg animation, no.

Re:Will this allow us a FOSS alternative to Flash? (0)

Anonymous Coward | more than 4 years ago | (#30365930)

The real problem is that SVG animation is still way too slow.

The current SVG implementations all took the approach of "get something done now, optimize later" and we're still paying for it. Same thing happened with Ruby, they put off optimizing because they "could do that later"... Yeah, guess what, it's a decade later and the performance still sucks ass.

Re:Will this allow us a FOSS alternative to Flash? (4, Funny)

snaz555 (903274) | more than 4 years ago | (#30366406)

Until we get a slick gui editor for javascript+svg animation, no.

In other words, we need GNU Ecmas.

Re:Will this allow us a FOSS alternative to Flash? (0)

Anonymous Coward | more than 4 years ago | (#30368584)

"In other words, we need GNU Ecmas."

Not to be confused with GNU Xmas.

Re:Will this allow us a FOSS alternative to Flash? (1)

AP31R0N (723649) | more than 4 years ago | (#30368656)

Could someone 'splain why i was modded down? It's a legit question and not even snarky in its wording.

Floating point numbers and decimals (0, Flamebait)

bradley13 (1118935) | more than 4 years ago | (#30366024)

Floating point numbers are a mess if you want to deal with currencies - rounding errors are guaranteed.

That said, look at IBM's 754r standard: unpacking a 128-bit number in chunks of 10-bits? That's got to be the ugliest thing I've seen in a long, long time. A triple-bagger. Implementing that in software will be painfully slow - implementing it in hardware will be a gigantic kludge of dedicated circuitry.

This is an area where ECMAscript pays the price for not being a strongly typed language. The only solution in the ECMAscript framework is to use a decimal library. Awkward, but that's life.

Re:Floating point numbers and decimals (2, Informative)

mdmkolbe (944892) | more than 4 years ago | (#30366648)

This is an area where ECMAscript pays the price for not being a strongly typed language.

I think you mean "statically typed" not "strongly typed".

Statically typed: catches type errors at compile time.
Dynamically typed: catches type errros at run time.
Strongly typed: always catches type errors.
Weakly typed: doesn't always catch type errors.

Sorry, as a programming languages researcher this is a pet peeve of mine. Carry on.

Re:Floating point numbers and decimals (2, Funny)

mdmkolbe (944892) | more than 4 years ago | (#30366668)

Catches type errros at run time.

And no, I don't know what the name is for a language that catches spelling errors.

Re:Floating point numbers and decimals (0)

Anonymous Coward | more than 4 years ago | (#30368626)

"Pedantic."

Re:Floating point numbers and decimals (0)

Anonymous Coward | more than 4 years ago | (#30368878)

And no, I don't know what the name is for a language that catches spelling errors.

Any???

Re:Floating point numbers and decimals (1)

aztracker1 (702135) | more than 4 years ago | (#30366656)

I fail to see how being loosely typed makes support for higher-precision numbers any uglier than in a strongly typed language. Simply adding support for a Decimal type that gave a higher-precision number internally, or an extra notation for initializing such a number, could work. IIRC, Python has pretty decent numerics.

Re:Floating point numbers and decimals (0)

Anonymous Coward | more than 4 years ago | (#30367640)

Because you get a mixture of decimal and binary numbers without any clarity.

Re:Floating point numbers and decimals (1)

DragonWriter (970822) | more than 4 years ago | (#30368624)

I fail to see how being loosely typed means that support for higher precision numbers is any uglier than in a strongly typed language.

In fact, I think Scheme demonstrates pretty effectively that being dynamically typed (which, not loose typing, is what JavaScript is) is entirely compatible with having an extraordinarily elegant system of numeric representation which seamlessly scales from exact integer representation through exact rational representation to inexact representations.

Context (0, Flamebait)

Anonymous Coward | more than 4 years ago | (#30366484)

What the fuck is ECMAScript?

Re:Context (2, Insightful)

revlayle (964221) | more than 4 years ago | (#30366548)

formal name of JavaScript - now turn in your geek card

Any other notable changes? (0)

Anonymous Coward | more than 4 years ago | (#30367138)

Folks it took 10 years and the only thing people talk about is JSON!
No namespaces, no concurrency, nothing?
Isn't it too little for 10 years?

Re:Any other notable changes? (2, Interesting)

Bill, Shooter of Bul (629286) | more than 4 years ago | (#30368486)

Read the article, it provides some explanation of why things are the way they are.

Plus, what exactly are your other options? Doing the entire page in flash or active X?

Also, sort of makes perl 6's development look a lot better, doesn't it?

Floating point representation (3, Interesting)

XNormal (8617) | more than 4 years ago | (#30367994)

The floating point representation issue could be resolved the same way it is handled in Python 3.1: by using the shortest decimal representation that rounds to the exact same binary floating point value.

With this solution a value entered as 1.1 displays as 1.1 rather than 1.1000000000000001, but 1.1 + 2.2 still won't test as equal to 3.3. It's not as complete a solution as using IEEE 754r, but it handles the most commonly reported problem - the display of floating point numbers.

See What's New In Python 3.1 [python.org] and search for "shortest".
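For what it's worth, ECMAScript's number-to-string conversion already behaves this way (the spec's ToString for numbers effectively requires the shortest digit string that round-trips to the same double), which is why display and equality still diverge:

```javascript
// Shortest round-trip representation in JavaScript:
var s1 = String(0.1);        // "0.1", not a 55-digit binary expansion
var s2 = String(1.1 + 2.2);  // "3.3000000000000003": the sum is not the
                             // double nearest to 3.3
var eq = (1.1 + 2.2) === 3.3;  // false
```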

Still no O(1) data structure (3, Interesting)

Tailhook (98486) | more than 4 years ago | (#30368108)

Back when ECMAScript 4 was still alive there was a proposed Vector class [adobe.com] that had the potential to provide O(1) access. This is very useful for many performance sensitive algorithms including coding, compression, encryption, imaging, signal processing and others. The proposal was bound up with Adobe's parameterized types (as in Vector<T>) and it all died together when ECMAScript 4 was tossed out. Parameterized types are NOT necessary to provide dense arrays with O(1) access. Today Javascript has no guaranteed O(1) mechanism, and version 5 fails to deal with this.

Folks involved with this need to consult with people that do more than manipulate the DOM. Formalizing JSON is great and all but I hadn't noticed any hesitation to use it without a standard... ActionScript has dense arrays for a reason and javascript should as well.
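A sketch of the workaround available in ES5 as written; nothing in the spec guarantees the layout, but preallocating and filling every index keeps typical engines on a dense backing store (`makeDense` is a hypothetical helper):

```javascript
// Build an array with every slot initialized, so engines can use a
// flat (dense) backing store rather than a hash of properties:
function makeDense(n, fill) {
  var a = new Array(n);
  for (var i = 0; i < n; i++) a[i] = fill;
  return a;
}

var buf = makeDense(1024, 0);
buf[512] = 42;   // plain indexed access: O(1) in practice on dense arrays
// Assigning far past the end (e.g. buf[1e9] = 1) may flip the engine
// to a sparse representation, losing that behavior.
```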

Re:Still no O(1) data structure (1)

olau (314197) | more than 4 years ago | (#30368974)

Why not just make the interpreter a bit smarter? I think Firefox's array implementation will use a contiguous array unless you try to use it in an associative fashion.

Also when you're talking about pixels, you need to know the type of each element so you can put the elements themselves (and not pointers to them) next to each other in the array to get anywhere near compiled C code speed. How can you do that without either a smarter interpreter or some kind of type system where you specify it explicit? If you're going to make the interpreter smarter, you might as well try to reuse the existing array syntax. :)

By the way, strictly speaking the built-in array is actually O(1) amortized as far as I know, and a contiguous array is also only O(1) amortized if you want to be able to add elements dynamically. But sure, hash lookup is slower than indexing.

Re:Still no O(1) data structure (0)

Anonymous Coward | more than 4 years ago | (#30369386)

While it's not guaranteed by the spec, I highly doubt you'll find any modern implementation for which Array objects with dense indices do not use an array backing store.
