
Decimal Arithmetic (4, Insightful)

(1+-sqrt(5))*(2**-1) (868173) | more than 7 years ago | (#15900491)

From TFA:
Example 1: showing approximation error.

// some code to print a floating point number to a lot of
// decimal places
#include <stdio.h>

int main()
{
    float f = .37;
    printf("%.20f\n", f);
    return 0;
}
The main problem with that example, I take it, is that single-precision floats are only good for roughly seven significant decimal digits; using double, of course, only defers the problem.

What about encoding floats as a pair of ints or longs: one to express the numerical value, and the other its power of ten; id est, decimal arithmetic [ibm.com] ?
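To make the proposal concrete, here is a minimal sketch (not from the comment; the names and the fixed scale are assumptions) of the pair-of-integers idea, with the exponent fixed at -2 so every value is an exact count of hundredths:

#include <stdio.h>

/* A decimal value represented exactly as coeff * 10^exp. */
struct dec {
    long long coeff;
    int exp;            /* power of ten; -2 means hundredths */
};

/* Addition is exact as long as both operands share the same exponent. */
struct dec dec_add(struct dec a, struct dec b)
{
    struct dec r = { a.coeff + b.coeff, a.exp };   /* assumes a.exp == b.exp */
    return r;
}

int main(void)
{
    struct dec a = { 37, -2 };   /* 0.37 */
    struct dec b = {  1, -2 };   /* 0.01 */
    struct dec s = dec_add(a, b);
    printf("%lld.%02lld\n", s.coeff / 100, s.coeff % 100);   /* prints 0.38 */
    return 0;
}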

Re:Decimal Arithmetic (0)

Anonymous Coward | more than 7 years ago | (#15900517)

That gives you a much narrower range. Which is fine if it suits your application, but it's not sufficient for general use.

Re:Decimal Arithmetic (0)

Anonymous Coward | more than 7 years ago | (#15900555)

The real problem is that the person who started this thread DOESN'T think; most of us do. I can't believe that this is a "story" today. UGH!

Re:Decimal Arithmetic (0)

Anonymous Coward | more than 7 years ago | (#15900521)

That wastes a lot of bits.

Re:Decimal Arithmetic (5, Insightful)

Anonymous Coward | more than 7 years ago | (#15900528)

This is not newsworthy. This is computer science 101.

Re:Decimal Arithmetic (0)

Anonymous Coward | more than 7 years ago | (#15900606)

Nah, this isn't 101. It isn't typically taught in 101. Floating point (the real floating point, not just using the 'float' or 'double' keywords) is usually taught in computer hardware classes or scientific computing classes. Even there, they usually don't cover COBOL's decimal number formats either.

Anyway, the article is stupid, probably geared toward people who haven't had the hardware classes.

Re:Decimal Arithmetic (2, Insightful)

tomstdenis (446163) | more than 7 years ago | (#15900630)

My college taught "numerical analysis" on the software side of computer engineering. You learn the format of the IEEE types, the range, accuracy, precision issues, etc.

We had assignments to not only perform matrix ops but also give the expected error, etc.

Maybe the author of the article should either go to a better school or pay more attention to the classes.

Tom

Re:Decimal Arithmetic (2, Interesting)

Duhavid (677874) | more than 7 years ago | (#15900635)

It is a little newsworthy.

I bothered to ask the question of what to use for monetary amounts at a financial institution in my recent past. I was a bit (pardon the pun) surprised to get a blank stare, and to have to explain what I was talking about. Floats where good enough. Of course, I had a problem in .NET with iterating through a list of values (testing, each was .1, for 10%), and the sum wasn't 1.0. Had to do a bunch of

decimal.Parse(value.ToString())

to get things to sum up correctly.

Re:Decimal Arithmetic (3, Insightful)

modeless (978411) | more than 7 years ago | (#15900752)

Floats where [sic] good enough. [...] Had to do a bunch of decimal.Parse(value.ToString()) to get things to sum up correctly.
Oh god. Please tell us which financial institution you worked for so we can all avoid it like the plague.

Re:Decimal Arithmetic (1)

Nutria (679911) | more than 7 years ago | (#15900777)

I bothered to ask the question of what to use for monetary amounts at a financial institution in my recent past.


http://en.wikipedia.org/wiki/Packed_decimal [wikipedia.org]

Most CISC CPUs had opcodes to do the work, but AFAICT only COBOL (being, of course, a Business Oriented Language) made BCD a primary data type.

Damned shame, too, since it eliminates all the hassle of working with financial software.

Re:Decimal Arithmetic (4, Insightful)

Bender0x7D1 (536254) | more than 7 years ago | (#15900649)

Exactly. Unfortunately, there are too many programmers out there, even good ones, who don't know or understand the basics. While I'm not claiming that formal education is the only way to get the knowledge you need, it is a good way to avoid gaps in your knowledge. I hated some of the computer science classes I had to take, but I did learn something important in each and every one of them.

Another advantage of the formal classes is that you get the theory that allows you to decide what data types to use and when. Sometimes you need the precision of BigNum systems (crypto, for example), and sometimes the accuracy of float is enough. For example, in a lot of financial applications, float would be good enough since 2 decimal places is enough. If you need performance, float will beat any BigNum system hands down. However, if you are dealing with decimals on top of decimals (such as calculating someone's dividend from a mutual fund where they own partial shares), you might need BigNum. Either way, with the proper theory and a good understanding of the formats, you can make these decisions.

These situations are why I am a big supporter of actual software engineering instead of programming. Sure, standard programming is great for a lot of situations, but serious applications need to use software engineering practices. You wouldn't build a bridge without an engineer, so why build an application that handles billions of dollars without applying the same rules and principles?

Re:Decimal Arithmetic (2, Insightful)

gweihir (88907) | more than 7 years ago | (#15900679)

These situations are why I am a big supporter of actual software engineering instead of programming. Sure, standard programming is great for a lot of situations, but serious applications need to use software engineering practices. You wouldn't build a bridge without an engineer, so why build an application that handles billions of dollars without applying the same rules and principles?

I could not agree more. The issue is not getting it done fast or cheap; the issue is that the person designing the solution understands the limitations of the tools being used. Anybody that builds mission-critical stuff without good engineers as designers and supervisors gets what they deserve. The same is true anywhere. The trouble with programming is that a collapsing bridge is far more newsworthy than collapsing software.

Not really CS101... (1)

Brian_Ellenberger (308720) | more than 7 years ago | (#15900668)

I wouldn't call it CS101. In CS101 you learn, of course, that computers are finite machines and can't hold infinite values exactly (e.g., pi). Going into the precision detail would require a course in Numerical Analysis [wikipedia.org] . Since you know there will always be some error, one of the components of Numerical Analysis is figuring out what the error will be. From that you can determine what error threshold is acceptable and perform the calculations.

Re:Decimal Arithmetic (3, Informative)

Anomie-ous Cow-ard (18944) | more than 7 years ago | (#15900536)

Since these are computers, and they deal primarily with binary internally, why not store the numerical value and a power of 2 instead? Oh, and since we generally need more bits of accuracy in the numerical value than in the exponent (do you often deal with numbers like 2**(2**32)?), why not allocate a "reasonable" number of bits to the exponent and leave more for the numerical value?

Uh oh, we just re-invented floating point. Oh well, nice try.

If you were just trying to get better accuracy by using base 10 rather than base 2, you're just hiding the problem (and making the hardware quite a bit more complex). If you want true accuracy, abandon floating point and use a bignum system.

Re:Decimal Arithmetic (1)

Waffle Iron (339739) | more than 7 years ago | (#15900578)

What about encoding floats as a pair of ints or longs: one to express the numerical value, and the other its power of ten; id est, decimal arithmetic?

Is there any fundamental reason why decimal arithmetic in a computer should be more accurate than binary arithmetic in a computer? Both are approximations that use some small subset of the rational numbers to try to represent the entire continuous range of real numbers. Probably the only reason that decimal computer arithmetic seems better to people like accountants is that the errors it generates are the same errors that they've been manually creating for centuries.

Re:Decimal Arithmetic (5, Informative)

gweihir (88907) | more than 7 years ago | (#15900692)

Is there any fundamental reason why decimal arithmetic in a computer should be more accurate than binary arithmetic in a computer?

No, no, the problem is not with the precision! The problem is that when input and output are decimal but the calculation is binary, you get additional errors from the conversions that badly educated programmers do not expect.
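A quick illustration of that round-trip effect (a minimal sketch, not part of the original comment): each decimal constant is converted to the nearest binary double, the arithmetic happens in binary, and the result is converted back to decimal for printing.

#include <stdio.h>

int main(void)
{
    double a = 0.1, b = 0.2;        /* neither has an exact binary form */
    printf("%.17g\n", a + b);       /* prints 0.30000000000000004 */
    printf("%d\n", a + b == 0.3);   /* prints 0: the sum is not the nearest
                                       double to 0.3 */
    return 0;
}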

Re:Decimal Arithmetic (0)

Anonymous Coward | more than 7 years ago | (#15900580)

The problem with your suggested method is that it is much, much slower because there's no hardware implementation (and even a hardware implementation would, I imagine, be slower than the floating-point one). It is slower, requires more die space (in terms of register storage and the size of the ALU), and is more complicated to design. I'm not saying that there's no point to implementing this in hardware (or increasing the precision), but there are reasons why it's not supported (and regardless of what representation you choose, you should always be aware of the limitations of the number system you're using).

Re:Decimal Arithmetic (4, Insightful)

SageMusings (463344) | more than 7 years ago | (#15900610)

Okay,

Show of hands: Who did not already understand that floats are approximations? Anyone? I didn't think so. I've gotta wonder why this story ever made it onto Slashdot. This is more worthy of Time magazine, where it can be spun as a startling new revelation about the dirtier corners of computer science foisting a lie on the public.

Re:Decimal Arithmetic (1)

Omega1045 (584264) | more than 7 years ago | (#15900661)

I have to agree. If I did not know this in high school, I certainly knew it in the first week or two of the first CompSci class I took my first semester at school. Not trying to be snotty, but this is really obvious stuff in the CompSci world.

Re:Decimal Arithmetic (0)

Anonymous Coward | more than 7 years ago | (#15900741)

Maybe there should be an entrance exam to become a Slashdot participant. :)

Re:Decimal Arithmetic (1)

no-body (127863) | more than 7 years ago | (#15900742)

The link text is a misnomer: "what goes on inside that floating point unit"

I was expecting some information about the FPU itself - parallel processing, pipelining and all that.

But it links to: "The trouble with rounding floating point numbers"

Kind of shallow...

Re:Decimal Arithmetic (1)

JNighthawk (769575) | more than 7 years ago | (#15900623)

Example 1: showing approximation error.

// some code to print a floating point number to a lot of
// decimal places
int main()
{
    float f = .37;
    printf("%.20f\n", f);
}


I'd say the problem is they're trying to store a double in a float. If they wanted to store a float, they should have done:
float f = 0.37f;

Re:Decimal Arithmetic (5, Informative)

Fordiman (689627) | more than 7 years ago | (#15900747)

No, C will automatically convert the constant as needed in cases like the above.

The issue is actually a pretty commonly understood situation when going from decimal floating point numbers to binary IEEE floats (I have another comment on here describing how they're stored), and it basically comes down to this:

Floats of any sort are stored as an integer coefficient with an integer shift (a.aa x b^c). As such, there will be representation problems based on the prime factors of b: a fraction whose denominator has a prime factor that b doesn't repeats forever. For example, dividing by 3, 5, 7, 11, ... in base 2 gives a repeating expansion; dividing by 3, 7, 11, 13, ... in base 10 does too.

No, there's nothing you can do about it. Use higher precision if needed, and otherwise get over it.
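A quick check of that point (a sketch, not part of the original comment): neither the float nor the double nearest to 0.37 is exact, so the f suffix only changes where the conversion happens, not whether it happens.

#include <stdio.h>

int main(void)
{
    float  f = 0.37f;        /* nearest float to 0.37 */
    double d = 0.37;         /* nearest double to 0.37 */
    printf("%.20f\n", f);    /* something like 0.37000000476837158203 */
    printf("%.20f\n", d);    /* something like 0.36999999999999999556 */
    return 0;
}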

Re:Decimal Arithmetic (0)

Anonymous Coward | more than 7 years ago | (#15900657)

From TFA: More important, perhaps, how do we avoid this problem? In a follow-up article I'm going to look at a different way of doing arithmetic (decimal arithmetic), used by lots of real world software, that avoids the approximation error in floats.

Re:Decimal Arithmetic (1)

gweihir (88907) | more than 7 years ago | (#15900669)

What about encoding floats as a pair of ints or longs: one to express the numerical value, and the other its power of ten

Old news. Of course that is the way to do it if you need exact decimals. If you have a limited range, then you can also just use one int and a fixed exponent, i.e. fixed-point arithmetic. Use a long-number package (e.g. GNU MP) if you need more precision.

The whole article is about a very old and very well known and understood problem. My guess is the real problem is the quality of the programmers that run into this and do not expect it.

Re:Decimal Arithmetic (1)

Fordiman (689627) | more than 7 years ago | (#15900695)

They kinda are...

IEEE floats are encoded as binary data, that is, the mantissa is a base-2 fixed-point number. We first assume that the first bit (the only one before the decimal (binimal?) point) is 1. We can assume this because, in base 2, a properly normalized scientific-notation number will have 1 as its first digit. As such, we don't have to store it.

The later digits represent successive divisions of base 2: 1/2, 1/4, 1/8, 1/16, etc. The shift (the exponent) is stored as well.

So, basically, they're stored as 1.BBBB x 2^N

Is this the most efficient? Well, for a computer, yes. It makes math using them a hell of a lot easier.

It's also storage efficient for the free bit we get.

Of course, this means that many decimal numbers are inaccurately stored; 0.37, for example, would be stored as:
(1.)011110101110000101000111... (x2^)-10

Is this good? bad? Well, I don't know. In Base 15, to pick a number at random, 1/3 is stored as 0.5, while in decimal it's 0.333333....

You could use fractional math, but even in higher-level code it's a freaky amount of work (getting the LCD and GCD for fractional reduction and such is messy in code). You could store everything as decimal, but you end up with the same inaccuracies as with binary, just with different characteristics.

Talk to anyone who uses noninteger math on a regular basis. They'll tell you that when dealing with floats, you always expect some error; the way to handle it is to determine what your maximum percent error is, and add one significant bit past that.
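For the curious, here is a small sketch (not part of the original comment; it assumes the platform's float is a 32-bit IEEE 754 single) that pulls a float apart into the sign, exponent, and stored mantissa fields described above:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 0.37f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);           /* reinterpret the bytes */

    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF;  /* biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFF;      /* hidden leading 1 not stored */

    printf("sign=%u  exponent=%u (unbiased %d)  mantissa=0x%06X\n",
           (unsigned)sign, (unsigned)exponent,
           (int)exponent - 127, (unsigned)mantissa);
    return 0;
}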

Why I only use decimal values (3, Interesting)

eliot1785 (987810) | more than 7 years ago | (#15900708)

This is why I use DECIMAL and not FLOAT in MySQL. Problem solved. I'm not a big fan of floats; the extreme precision that they seem to have is mostly an illusion.

Re:Why I only use decimal values (1)

Fordiman (689627) | more than 7 years ago | (#15900772)

Dunno why that was modded funny. It's somewhat true.

The potential relative error in any float is about 1/(2^N), where N is the number of bits used to store the significant digits (the mantissa)... just like the potential relative error in a written decimal number is about 1/(10^N).

So? Well, for stuff where you need the precision of three decimal digits, you need about 10 bits of mantissa (2^10 is roughly 10^3). You won't find a 10-bit float, mind you.

Just for reference:
IEEE 754 (standard floating point) in 32-bit uses 23 bits for its mantissa and 8 bits for its exponent. Maximum relative error in storage: about 1/8388608 (one part in 2^23).
In 64-bit, it uses 52 bits for the mantissa and 11 for the exponent. Maximum relative storage error: about one part in 4.5 quadrillion (2^52).
Note that 754 uses a 'hidden' bit, being the unnecessary 1 at the start of the mantissa (in binary, since the number is aligned so its first digit is nonzero, and the only nonzero digit in binary is 1, you don't need to store the first bit of the mantissa; still, Motorola 80-bit extended numbers do bother to store it).

Obligatory (0, Offtopic)

karvind (833059) | more than 7 years ago | (#15900497)

Office Space

Re:Obligatory (2, Informative)

andrewman327 (635952) | more than 7 years ago | (#15900593)

You beat me to it. I was just thinking about how much better TFA would be if they explained the specifics of how the Office Space team ripped off the banking system.

Re:Obligatory (1)

SinGunner (911891) | more than 7 years ago | (#15900620)

suggesting that someone do that constitutes a terrorist act.

terrorist.

Honestly (1)

TheShadowzero (884085) | more than 7 years ago | (#15900514)

I can't tell if that was /.ed already or not, but all I see is:

#MenuCode a,

hmm...

Re:Honestly (0)

Anonymous Coward | more than 7 years ago | (#15900587)

Their use of a floating point number backfired on them.

decNumber library from IBM (5, Informative)

Not The Real Me (538784) | more than 7 years ago | (#15900516)

This is why I use the decNumber library from IBM.

http://www2.hursley.ibm.com/decimal/decnumber.html [ibm.com] The decNumber library implements the General Decimal Arithmetic Specification[1] in ANSI C. This specification defines a decimal arithmetic which meets the requirements of commercial, financial, and human-oriented applications.

The library fully implements the specification, and hence supports integer, fixed-point, and floating-point decimal numbers directly, including infinite, NaN (Not a Number), and subnormal values.

The code is optimized and tunable for common values (tens of digits) but can be used without alteration for up to a billion digits of precision and 9-digit exponents. It also provides functions for conversions between concrete representations of decimal numbers, including Packed Decimal (4-bit Binary Coded Decimal) and three compressed formats of decimal floating-point (4-, 8-, and 16-byte).

Re:decNumber library from IBM (0)

Anonymous Coward | more than 7 years ago | (#15900543)

and if that doesn't count as "news for nerds" I'm turning in my slide rule!

Gnu Multiple Precision (1)

kybred (795293) | more than 7 years ago | (#15900729)

Is GMP [swox.com] similar to that? I used it to approximate the Poisson function for BER calculations. Pretty easy to use.

Re:decNumber library from IBM (3, Informative)

piranha(jpl) (229201) | more than 7 years ago | (#15900740)

Rational number arithmetic is a more general solution. Any number that can be expressed in decimal or floating-point notation is rational; any rational number can be expressed as (n/d), where n and d are integers. We have "bigints": unbounded-magnitude integers constrained only by the memory of the computer they are stored on. Rational numeric data types pair two bigints together to give you unbounded magnitude and precision, and they have been implemented for decades.

They probably aren't directly supported in your favorite programming language because they are slow to work with when you need very high precision; after each calculation, the rational number needs to be reduced to its lowest terms, which means computing a GCD of terms that keep growing.

Consider the use of integers, floats, or decimals only as an optimization when it has been shown that an application is suffering a serious performance hit because of rational arithmetic, and when you can use a faster data type knowing that your program will perform within accuracy goals.

For 90% of computing problems, monetary calculations included, you shouldn't even have to worry about what numeric type you're using. Your language should assume rationals unless told otherwise. Common Lisp, Scheme, and Nickle do exactly that.

C developers can use GMP [swox.com] . Other developers can use one of many bindings to GMP.
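As a concrete example of the above (a sketch, not part of the original comment; it assumes GMP is installed and the program is linked with -lgmp), GMP's mpq_t type keeps every value as an exact ratio of two bigints:

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpq_t third, total;
    mpq_init(third);
    mpq_init(total);                 /* initialized to 0/1 */

    mpq_set_ui(third, 1, 3);         /* exactly 1/3 */

    for (int i = 0; i < 3; i++)
        mpq_add(total, total, third);

    gmp_printf("1/3 + 1/3 + 1/3 = %Qd\n", total);   /* prints exactly 1 */

    mpq_clear(third);
    mpq_clear(total);
    return 0;
}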

Use A Proper Decimal Library (2, Informative)

Anonymous Coward | more than 7 years ago | (#15900526)

If you are actually concerned about rounding and precision, use decimal [ibm.com] instead.

I am Intel of Borg (4, Funny)

www.sorehands.com (142825) | more than 7 years ago | (#15900540)

I am Intel of Borg, you will be approximated.

There have been many examples, such as the original Pentium FDIV bug. And of course there was a bug in Windows Calc: 2.01 - 2.0 = 0 (if I remember correctly).

Re:I am Intel of Borg (1)

Saxophonist (937341) | more than 7 years ago | (#15900611)

While Intel's gaffe is possibly the most famous, consider this tidbit from Applesoft Basic:

] PRINT 8 + .01 + .01
8.020000002

Granted, this was a software implementation bug, not a hardware bug. (I don't recall the exact precision of the response, but it was something of that nature.)

The author is seriously confused (5, Insightful)

mlyle (148697) | more than 7 years ago | (#15900542)

Apparently the author of the article didn't read the stories in RISKS that he cited. In particular, the 'pensioners being shortchanged' one talks about them not being paid interest on 'float'-- cash flow on transactions in progress. This has little to do with floating point numbers.

Similarly, the spacecraft problem mentioned is one of an errant cast, not because of dilution of precision in floating point calculations.

The author could really pick his examples better-- as mistakes in numerical programming happen often and are often of great import.

A good example of the evils of math. (1)

bigmaddog (184845) | more than 7 years ago | (#15900719)

If we think back to the good old days of the first Gulf War and all that, we might remember the Patriot missile and what a dismal failure that was. Part of the problem there was that the missile's clock values were such that they would not convert to base 2 (and hence to float) accurately and so the tracking was off and lots of expensive misses happened. If you recall, lots of US soldiers died when a Scud that theoretically ought to have been shot down hit their barracks.

As usual, it's not just one thing that screws everything up, not even in the narrow confines of the Patriot's software problem. Here's a short write-up [siam.org] on that math/software part of it. There were other issues with the Patriot but that'd be blatant off-topic flamebait. ;)

Re:A good example of the evils of math. (4, Informative)

codegen (103601) | more than 7 years ago | (#15900774)

Part of the problem there was that the missile's clock values were such that they would not convert to base 2 (and hence to float) accurately and so the tracking was off

Actually the problem was that they used a float to store the system time (time since power-on) in the ground radar unit. It allowed the clock to be used in calculations without a conversion. A float will store an integer just fine (and accurately) until the number gets too large; then the units part drops off the bottom of the precision and the increment operation no longer makes any sense. This was a design decision that made sense for the role the missile platform was originally designed for. The Patriot was originally designed to be used in the European theater (if the Cold War ever turned hot) and as such would never remain in one location for more than a very few days. The clock is reset every time they move the battery (they power off the ground tracking radar when they move). The use in the Gulf War was in a strategic role (not tactical) which kept them continuously operating in a single location for long periods of time, and the shortcut they used came back to haunt them (as usual). If they had reset the system every few days, the problem would not have occurred.
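To make the failure mode concrete, here is a small sketch (not the Patriot's actual code, which used a 24-bit fixed-point representation of 0.1 rather than a C float; the tick size and duration here are illustrative) of what happens when a 32-bit float accumulates small time increments for a long time:

#include <stdio.h>

int main(void)
{
    float  clock_f = 0.0f;               /* elapsed time in single precision */
    double clock_d = 0.0;                /* reference with much smaller error */
    long   ticks   = 100L * 3600 * 10;   /* 100 hours of 0.1 s ticks */

    for (long i = 0; i < ticks; i++) {
        clock_f += 0.1f;                 /* 0.1 has no exact binary form */
        clock_d += 0.1;
    }

    printf("float  clock: %.3f s\n", clock_f);
    printf("double clock: %.3f s\n", clock_d);
    printf("true   time : %.3f s\n", ticks * 0.1);
    /* The float clock drifts visibly from the true 360000 s; the real
       system's drift (about 0.34 s after 100 hours) was enough to lose
       track of an incoming Mach-5 target. */
    return 0;
}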

Not news. (5, Insightful)

SJasperson (811166) | more than 7 years ago | (#15900545)

This is not a new problem. Or an unsolved one. Is there any modern programming language that does not supply a data type or library with exact decimal arithmetic support? Using a float to represent monetary amounts and expecting it to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went from all the addresses in New England. If you can't be arsed to choose the right data type, get out of the business.

The business... (1)

kingkade (584184) | more than 7 years ago | (#15900582)

This happens more often than you might think. I used to work for a large American software corp. One of their products, which I had the pleasure of working on as part of one of two teams, had an accounting portion that used floating-point columns in our DB schema and floats in code. I was kind of disgusted and a little disturbed after finding out this software is used by some pretty large organizations to track assets, etc.

This reminds me of something someone I knew once said: you don't really have to be intelligent to work in this industry or even to get through school; just like everything else in life, it seems that brute force is enough to win most of the time.

Re:Not news. (1)

SuperBanana (662181) | more than 7 years ago | (#15900602)

Using a float to represent monetary amounts and expecting them to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went from all the addresses in New England

For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

Re:Not news. (0, Troll)

kingkade (584184) | more than 7 years ago | (#15900619)

A fixed-point (or "decimal") type. See decimal in C#, BigDecimal in Java, money/decimal in SQL Server, etc. Pay attention, and please tell me you're still in high school.

Re:Not news. (1)

WuphonsReach (684551) | more than 7 years ago | (#15900625)

For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

In databases? Currency formats. Which are specifically designed not to lead to rounding errors. Some of them even allow you to specify the number of places after the decimal.

In code? A numeric type designed for currency work (typically an add-in library). In a pinch you can use a 32-bit integer and use the last 2 decimal digits as "cents", but you'll run the risk of overflow.

Re:Not news. (1)

colinrichardday (768814) | more than 7 years ago | (#15900671)

In code? A numeric type designed for currency work (typically an add-in library). In a pinch you can use a 32bit integer and use the last 2 decimal digits as "cents", but you'll run the risk of overflow.

In my case, only on the negative side :-)

Re:Not news. (1)

MajroMax (112652) | more than 7 years ago | (#15900626)

For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

Without the use of any libraries? Integers -- just use cents as the base unit of currency, and convert to dollars strictly on input and display.

If you're dealing with amounts of cents that could possibly start overflowing even a 32-bit int (that is, billions of cents, or tens of millions of dollars), then the application's important enough to be worth the cost of further research on the matter.

Re:Not news. (0)

Anonymous Coward | more than 7 years ago | (#15900634)

Easy. An integer. Store the value in cents, not in dollars. Then when you need to print it out, you'd do something like
printf("$%d.%02d", (val / 100), (val % 100));
(Approximately; you get the general idea.)

If you need fractions of a cent, you'd store the value as multiples of that fraction, and do the conversion to currency at the absolute last possible moment.

The only case where this would cause problems is when you need to do a division of some sort, but all of this stuff is a very well known, and already solved, problem. (Oh, irony: the captcha for this post is "divisive".)
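Here is a small sketch of the division case (not part of the original comment; the rate, the basis-point scaling, and the half-up rounding rule are assumptions): interest computed on an integer number of cents, with the rounding made explicit instead of left to a float.

#include <stdio.h>

/* Apply an interest rate given in basis points (1/100 of a percent) to an
   amount in cents, rounding the interest to the nearest cent (half up,
   for non-negative amounts). */
long long add_interest(long long cents, long long rate_bp)
{
    long long numer = cents * rate_bp;            /* cents * basis points */
    long long interest = (numer + 5000) / 10000;  /* round to nearest cent */
    return cents + interest;
}

int main(void)
{
    long long balance = 1050;                /* $10.50 */
    balance = add_interest(balance, 425);    /* 4.25% -> 45 cents interest */
    printf("$%lld.%02lld\n", balance / 100, balance % 100);   /* $10.95 */
    return 0;
}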

Re:Not news. (1)

Euler (31942) | more than 7 years ago | (#15900656)

Decimal type (not as commonly available as floating point). You could also store money as an integer by counting cents. Or you could use an integer for dollars and another for cents.

Re:Not news. (0)

Anonymous Coward | more than 7 years ago | (#15900675)

> Or you could use an integer for dollars and another for cents.

Isn't that just like a 64-bit int except less efficient?

Re:Not news. (1)

gweihir (88907) | more than 7 years ago | (#15900701)

For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

E.g. long long with cents as the unit. That gives you a maximum value of roughly 92 000 000 000 000 000 full currency units, which should be enough for most apps, and gives you exact calculations. And it takes only 64 bits, same as a double.

Re:Not news. (1, Informative)

Duhavid (677874) | more than 7 years ago | (#15900716)

Binary Coded Decimal would be one.

I looked for a page that described the advantages of BCD, but I could not find one, so I'll have a stab at it myself. Basically, while slower, BCD can maintain arbitrary precision. If you have monetary items and you have a good handle on the range of values, you can store and operate on those values without any rounding losses at all.
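For illustration, here is a minimal sketch (not part of the original comment; the little-endian packed layout and the helper name are assumptions) of adding two packed-BCD numbers digit by digit, so every decimal digit stays exact:

#include <stdio.h>
#include <stdint.h>

/* Add two packed-BCD numbers (two decimal digits per byte, least
   significant byte first). Each digit is handled exactly. */
void bcd_add(const uint8_t *a, const uint8_t *b, uint8_t *sum, int nbytes)
{
    int carry = 0;
    for (int i = 0; i < nbytes; i++) {
        int lo = (a[i] & 0x0F) + (b[i] & 0x0F) + carry;
        carry = lo > 9;
        if (carry) lo -= 10;
        int hi = (a[i] >> 4) + (b[i] >> 4) + carry;
        carry = hi > 9;
        if (carry) hi -= 10;
        sum[i] = (uint8_t)((hi << 4) | lo);
    }
}

int main(void)
{
    /* 0.37 + 0.01 held as hundredths: 37 + 01 = 38, all in BCD. */
    uint8_t a[2] = { 0x37, 0x00 };
    uint8_t b[2] = { 0x01, 0x00 };
    uint8_t s[2];
    bcd_add(a, b, s, 2);
    printf("%02X%02X\n", (unsigned)s[1], (unsigned)s[0]);   /* prints 0038 */
    return 0;
}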

Re:Not news. (1)

codegen (103601) | more than 7 years ago | (#15900759)

For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

Decimal data types. In COBOL or PL/I (in which most of these applications are written) you use a PIC data type. For example,

PIC 9999V999

says the number has 4 integer digits and 3 fractional digits. It also may not hold a negative number. You add an S character to the front to allow negative numbers.

The language runtime interprets the numbers and there is no approximation involved. There are strict rules for overflow, underflow and roundoff. You can declare any numeric type up to a total of 18 digits in these languages. Other comments have already referred to the C library and to similar types in other programming languages.

Re:Not news. (1)

smallpaul (65919) | more than 7 years ago | (#15900638)

Did anyone claim it is a new problem? Of course it will be new to some people and old hat to others. As soon as I saw the title I knew what it would be about, but there are always new programmers coming through the system who don't know this stuff.

Re:Not news. (4, Funny)

mcrbids (148650) | more than 7 years ago | (#15900654)

Using a float to represent monetary amounts and expecting them to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went from all the addresses in New England.

Hrrmm, well...

That would explain our lack of customer response in New England...

Re:Not news. (1)

fm6 (162816) | more than 7 years ago | (#15900762)

Using a float to represent monetary amounts and expecting them to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went ...

It's not stupid, it's just ignorant.

Your comparison with integer zip codes is totally bogus: that's not an arithmetic error, it's just sloppy formatting. The variable contains a perfectly accurate value — you just have to remember to output it with a %05d format. (Of course, since you don't do arithmetic on zip codes, you might as well forget the whole problem and just store the thing as a string.) On the other hand, the rounding errors you get when you try to store decimal fractions as floating point stem from an extremely unobvious fact: many common values (such as 0.1) do not have an exact binary representation.

Tax (1)

nusuni (994260) | more than 7 years ago | (#15900558)

Would certainly be a real kicker to find out that these tiny miscalculations caused everyone to pay more tax than they needed to. Where's my refund check?!

Re:Tax (1)

darkitecture (627408) | more than 7 years ago | (#15900699)

Would certainly be a real kicker to find out that these tiny miscalculations caused everyone to pay more tax than they needed to. Where's my refund check?!

I'm sorry but you already spent it. The money that wasn't there from the budget surplus we didn't have was spent on providing tax relief that wasn't actually much of a relief in an attempt to stimulate the economy, which it didn't do.

science; business (4, Insightful)

bcrowell (177657) | more than 7 years ago | (#15900563)

He talks about scientific applications, but actually very few scientific calculations are sensitive to rounding error. Remember, they sent astronauts to the moon using slide rules. Generally for scientific applications, you just don't want to roll your own crappy subroutines for stuff like matrix inversion; use routines written by people who know what they're doing. (And know the limitations of the algorithm you're using. For example, there are certain goofy matrices that will make a lot of matrix inversion algorithms blow chunks.)

For business apps, the classic solution was to use BCD arithmetic. But today, is it more practical (and simple) just to use a language like Ruby, that has arbitrary-precision integers, so you can just store everything in units of cents? A lot of machines used to have special BCD instructions; do those exist on modern CPUs?

Re:science; business (1)

rubycodez (864176) | more than 7 years ago | (#15900651)

You'll be delighted to know your 80386 or later x86 supports packed and unpacked binary coded decimal. Incidentally, matrix inversion is a crappy way to solve linear systems; there are much better methods that don't cause tiny approximation errors to magnify many-fold.

Re:science; business (2, Funny)

Jeremi (14640) | more than 7 years ago | (#15900707)

But today, is it more practical (and simple) just to use a language like Ruby, that has arbitrary-precision integers, so you can just store everything in units of cents?


Hmm.... if you use integers of any given finite precision, aren't you still subjecting yourself to round-off error? (e.g. ((int)4)/((int)3) == 1!!) On the other hand, if you use a string-based infinite-precision datatype, what happens when you try to compute a non-terminating number (e.g. 1.0/3.0)? Perhaps your program crashes after trying to allocate an infinite amount of RAM to store the result? ;^)


Seems to me the only full solution to round-off error would be to store the results of certain math operations as strings indicating the underlying mathematical/algebraic expressions (e.g. 1.0/3.0 == "1/3"), a la Matlab... but then, I'm no expert, perhaps there is a better way.

Should read... (0)

Anonymous Coward | more than 7 years ago | (#15900565)

Should read, "write code that does arithmetic." Don't kid yourself and think you are doing anything remotely close to
real mathematics.

This is not a "problem" per se (2, Interesting)

Null Nihils (965047) | more than 7 years ago | (#15900583)

float (and its big brother double) is inaccurate. It's no surprise. A 32-bit float is but a single simple tool in a programming language. If anyone is surprised by how floats behave then they are, most likely, inexperienced.

You don't start addressing a problem in software just by assuming Float or Double will magically fill every need. An experienced programmer needs to have a knowledge of how to use, and how not to use, the programming tools at hand. TFA about floating point numbers is very introductory (at the end it mentions that the next article will tell us how to "avoid the problem"... I assume it will go on to cover some basic idioms.) In a way it misses the point: Floating-point rounding is not a "problem". Floats and Doubles always do their job, but you have to know what that job is! The behaviour of floating point numbers should not be a big surprise to a seasoned coder.

For example: You can't use float or double to store the numerical result of a 160-bit SHA-1 hash... you have to use the full 160 bits. (Duh, right?) So, if you use a mere 32 bits (float) or 64 bits (double) to store that number, you are going to sacrifice a lot of accuracy!

Re:This is not a "problem" per se (1)

kingkade (584184) | more than 7 years ago | (#15900613)

For example: You can't use float or double to store the numerical result of a 160-bit SHA-1 hash... you have to use the full 160 bits. (Duh, right?)

?

I don't get it... why would you even try? What's the point of mentioning something so obvious? It's like me saying: "Hey, don't even try storing your new titanium five-iron in your asshole, since your rectum is only a few inches wide and a five-iron is over a meter long!"

Re:This is not a "problem" per se (0)

Anonymous Coward | more than 7 years ago | (#15900670)

since your rectum is only a few inches wide and a five-iron is over a meter long!"


Well, obviously you don't try to stick it in sideways !

You don't need to use floats to do math (0)

Anonymous Coward | more than 7 years ago | (#15900584)

I did a model of universal expansion using only ints that proved conclusively that the universe is 6 years old.

Use Gnumeric and the Source. (1)

twitter (104583) | more than 7 years ago | (#15900598)

1.125*59=66.375 in gnumeric. Yes, it rounds to 66.38. So it's not really a problem and you can beat the author to the punch by looking at the gnumeric source.

Re:Use Gnumeric and the Source. (-1, Offtopic)

ResidntGeek (772730) | more than 7 years ago | (#15900755)

...

Are you serious?

Are you FUCKING serious?

Numbers and bases (4, Insightful)

Todd Knarr (15451) | more than 7 years ago | (#15900600)

We have the same problem in everyday numbers. Try representing 1/3 in any finite number of digits. You can't. The big thing about floating-point numbers that trips people up is that we're used to thinking in base 10. Floating-point numbers in computers typically aren't in base 10; they're in base 2. The rounding problem he describes is simply us getting confused and wondering why a fraction with an exact representation in base 10 doesn't have an exact representation in base 2. The obvious solution is the one he alludes to at the end: don't use base 2. Computers have had base-10 arithmetic in them for decades; in fact the x86 family has base-10 arithmetic instructions built in (the packed-BCD instructions). COBOL has used packed BCD since its beginning, which is why you don't find this sort of calculation error in ancient COBOL financial packages running on mainframes.

Re:Numbers and bases (1)

kingkade (584184) | more than 7 years ago | (#15900627)

You know, your low user ID would imply you've been around a while, but you totally missed the point. Base 2 is not the only problem, and you don't need to switch to and/or use BCD to avoid these problems. Read the article again (or for the first time), and if you still don't get it, look up "decimal" or "fixed-point" types in your favorite strongly-typed programming language.

Re:Numbers and bases (1)

4D6963 (933028) | more than 7 years ago | (#15900726)

I think the GP poster has quite got the point, though. The problem isn't IEEE-754 floating point; the problem is people not getting it right, in other words, people not understanding the binary precision of the mantissa when it is brought back to decimal. In school we were taught not to trust IEEE-754 floats because they may represent 0.37 as something like 0.369999998, but the thing is that in real-life problems (well, at least my real-life problems) that type of thing doesn't matter to me, because my numbers wouldn't be particularly round in decimal anyway. That said, if you have a problem with getting 0.369999998 when you expect 0.37, then round off to the number of decimals you want. Other than that, there is no problem once you truly understand how the thing works.
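A small sketch of the "round to the decimals you want" advice (not part of the original comment; the helper name and the choice of two places are assumptions):

#include <stdio.h>
#include <math.h>

/* Round x to the given number of decimal places (link with -lm). */
double round_to(double x, int places)
{
    double scale = pow(10.0, places);
    return round(x * scale) / scale;
}

int main(void)
{
    float f = 0.37f;                      /* stored as roughly 0.3700000048 */
    printf("%.10f\n", (double)f);         /* shows the representation error */
    printf("%.10f\n", round_to(f, 2));    /* 0.3700000000 */
    return 0;
}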

Re:Numbers and bases (3, Funny)

tawhaki (750181) | more than 7 years ago | (#15900641)

Try representing 1/3 in any finite number of digits.

0.3. All you need is base 9 :)

Re:Numbers and bases (1)

gweihir (88907) | more than 7 years ago | (#15900715)

We have the same problem in everyday numbers. Try representing 1/3 in any finite number of digits. You can't.

That one is real easy: don't use a "real" format, i.e. floats. Use a "rational" format, i.e. two integers, where the value is the one divided by the other. This is one of the standard formats in the GNU multiple-precision library.

It should be obvious from standard school mathematics: floats approximate R, but Q can be represented exactly, and it is wherever that is needed and at least somebody involved has elementary mathematics skills.

Re:Numbers and bases (3, Insightful)

flibbajobber (949499) | more than 7 years ago | (#15900722)

Try representing 1/3 in any finite number of digits. You can't.

"1/3"

You can. I just did. So did you. In base-10, even. In fact, the answer is the same for base-4 or higher. Using only two digits, "1" and "3". Any rational number can be represented using a finite number of digits, using... (wait for it) a RATIO.

(Represent one-third in Base 2? why that would be "1/11". One-third in Base 3 would be "1/10".)

It used to be much worse. Kahan fixed it. (5, Interesting)

Animats (122034) | more than 7 years ago | (#15900618)

Due to the efforts of William Kahan [berkeley.edu] at U.C. Berkeley, IEEE 754 floating point, which is what we have today on almost everything, is far, far better than earlier implementations.

Just for starters, IEEE floating point guarantees that, for integer values that fit in the mantissa, addition, subtraction, and multiplication will give the correct integer result. Some earlier FPUs would give results like 2+2 = 3.99999. IEEE 754 also guarantees exact equality for integer results; you're guaranteed that 6*9 == 9*6. Fixing that made spreadsheets acceptable to people who haven't studied numerical analysis.

The "not a number" feature of IEEE floating point handles annoying cases, like division by zero. Underflow is handled well. Overflow works. 80-bit floating point is supported (except on PowerPC, which broke many engineering apps when Apple went to PowerPC.)

Those of us who do serious number crunching have to deal with this all the time. It's a big deal for game physics engines, many of which have to run on the somewhat lame FPUs of game consoles.
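A small check of the integer-exactness guarantee mentioned above (a sketch, not part of the original comment): IEEE 754 doubles represent every integer with magnitude up to 2^53 exactly, so integer-valued products compare equal, and the breakdown only starts past that point.

#include <stdio.h>

int main(void)
{
    double a = 6.0 * 9.0, b = 9.0 * 6.0;
    printf("%d\n", a == b);             /* prints 1: exactly equal */

    double big = 9007199254740992.0;    /* 2^53 */
    printf("%.1f\n", big + 1.0);        /* prints 9007199254740992.0: the +1
                                           is only half an ULP and is lost */
    return 0;
}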

Re:It used to be much worse. Kahan fixed it. (0)

Anonymous Coward | more than 7 years ago | (#15900739)

80bit was not part of IEEE. I don't know where you think it was.

Rounding Floats (1)

ChengWah (955139) | more than 7 years ago | (#15900622)

Floating point numbers are an approximation. So why would you use them for an exact answer? Duuuuhhhhhh

Floating Point Numbers are trouble... (1)

tlhIngan (30335) | more than 7 years ago | (#15900629)

They're convenient while programming, but they can certainly be a PITA to use properly. First they don't compare properly (you can't test equality), and if you have to do multiplatform programming transferring floats, they had better be stored in standard format (which can have a nasty side-effect of slowing down your floating point arithmetic since after each operation the unit has to return it back to IEEE format from machine native).

I've seen programmers who never realized these facts and who asked why their code didn't work (they stored statistics gathered from a monitoring unit on an ARM Linux embedded board, transferred them to a PC, and got nonsensical results). Altering the way they serialized their floats and doubles fixed that issue. And never mind that processing a float on different architectures can give slightly different results (or big differences, depending on how you write your code). I guess some people treat floating point numbers the same way as integers, when they're more approximation than anything.

Or you can write code like this guy - http://thedailywtf.com/forums/thread/71883.aspx [thedailywtf.com]
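One common way to handle the equality problem described above (a sketch, not part of the original comment; the helper name and the tolerance values are assumptions and should be tuned to the application):

#include <stdio.h>
#include <math.h>

/* Compare two doubles using a relative tolerance plus a small absolute
   floor for values near zero (link with -lm). */
static int nearly_equal(double a, double b, double rel_tol, double abs_tol)
{
    double diff  = fabs(a - b);
    double scale = fmax(fabs(a), fabs(b));
    return diff <= fmax(rel_tol * scale, abs_tol);
}

int main(void)
{
    double x = 0.1 + 0.2;
    printf("%d\n", x == 0.3);                           /* prints 0 */
    printf("%d\n", nearly_equal(x, 0.3, 1e-9, 1e-12));  /* prints 1 */
    return 0;
}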

The summary is nonsense (1)

gweihir (88907) | more than 7 years ago | (#15900633)

IEEE 754 specifies exact results. There is no room for interpretation. That means it is not the FPU we should be worried about, but IEEE 754 itself. However, I find that IEEE 754 is quite well done.

On the other hand, people that program with floats and do not know or understand IEEE 754 are asking for trouble. But that is true of every type of library. Knowledge and insight can only be replaced by more knowledge and greater insight. El-cheapo outsourcing to India or hiring people without solid CS skills as programmers is asking for trouble.

Re:The summary is nonsense (1)

gweihir (88907) | more than 7 years ago | (#15900646)

P.S.: None of the described problems are in any way new or surprising. This is a very old discussion. Any decent CS or numerics course will explain what is going on and how to get around it. Also, there are good long-number libraries out there (e.g. GNU mp) that allow you to get past these problems as well, to any precision you like.

We ALL know?? (1)

gwoodrow (753388) | more than 7 years ago | (#15900636)

I got to about the 3rd sentence before the hamster running on the wheel in my head fell off and, in a daze, started pooping everywhere. That happens anytime I read something math related. Good thing math is just a fad.

What are they teaching now? (1)

scuppy (929040) | more than 7 years ago | (#15900642)

Did they stop teaching this stuff in first-year university? As for the person who posted "Try representing 1/3 in any finite number of digits": well, you and I just did, using two integers.

Re:What are they teaching now? (1)

Shadyman (939863) | more than 7 years ago | (#15900696)

I believe the poster meant "Try representing 1/3 in decimal form in any finite number of digits."

Again, that's easy, you use 0.3, with a 'repeating' bar over the 3, but that's beside the point.

This is why you would choose... (5, Informative)

jd (1658) | more than 7 years ago | (#15900643)

One of the many many solutions:


  • Fixed-point numbers
  • Berkeley MP or GNU MP arbitrary-precision floating point
  • Co-processors with truly massive internal registers (I refuse to use less than 80-bit)
  • Delayed calculation (ie: actually process a calculation at the end, storing the inputs and operators until you absolutely need the value - eliminates intermediate rounding errors and if the value is never needed, you don't waste the clock cycles)
  • Don't use real numbers - apply a scale factor or a transform such that ALL components of any scaled/transformed calculation are integers, then transform back only for display purposes


The use of transforms for handling numerical calculations is an old trick. It is probably best-known in its use as a very quick way to multiply or divide using logarithms and a slide-rule, prior to the advent of widely-available scientific calculators and computers. Nonetheless, devices based on logarithmic calculations (such as the mechanical CURTA calculator) can wipe the floor with most floating-point maths units - this despite the fact that the CURTA dates back to the mid 1940s.

Quick Way to Check FPU (1)

reporter (666905) | more than 7 years ago | (#15900648)

The article asks, "But do we ever stop to think what goes on inside that floating point unit and whether we can really trust it ?"

The second part of the question can be easily answered. Compile the computer program in two ways. First, set the compiler to not use the floating-point unit (FPU). Just generate the instructions for explicitly doing the floating-point computations in software. Run the compiled code and save the results.

Second, set the compiler to explicitly use the FPU. Generate FPU instructions to do the floating-point computations in hardware. Run the compiled code and save the results.

The results should be identical, assuming both paths use the same precision and rounding mode. If they are not identical, then either the compiler has a (software) bug or the FPU has a (hardware) bug. If you are using GCC without optimization, then the FPU probably has a hardware bug. GCC is quite reliable when it is used without any optimization.

Bah. Author doesn't understand arithmetic. (5, Insightful)

swillden (191260) | more than 7 years ago | (#15900660)

The author goes on and on about how floating point numbers are inaccurate and unable to precisely represent real values, as if this were something new, or even something different from the number approximations we normally use.

The reason the examples the author cites can't be represented precisely is that floating point numbers are ultimately represented as base-2 fractions, and there are a bunch of finite-length base-10 fractions that don't have a non-repeating base-2 representation. Guess what? We have *exactly* the same problem with the base-10 fractions that everyone uses all the time. Show me how you write 1/3 as a decimal!

The problem isn't that floating point numbers are inherently problematic, the problem is that we typically use them by converting base-10 numbers to them, doing a bunch of calculations and then converting them back to base 10. Floating point rounding isn't an unsolved problem -- floating point rounding works perfectly, and always has. It's just that the approximations you get when you round in base 2 don't match the approximations you get when you round in base 10.

Bottom line: If you care about getting the same results you'd get in base 10, do your work in base 10. This is why financial applications should not use floating point numbers.

Re:Bah. Author doesn't understand arithmetic. (1)

aXis100 (690904) | more than 7 years ago | (#15900694)

Or, more importantly, do your math in a system whose precision (number of significant digits) is far greater than what your answers require.

Must read floating-point articles (2, Interesting)

emarkp (67813) | more than 7 years ago | (#15900674)

What Every Computer Scientist Should Know About Floating-Point Arithmetic (HTML [sun.com] , PDF [loria.fr] ).

and

When bad things happen to good numbers [petebecker.com] (as well as Becker's other floating-point columns on that same page)

average joe (1, Interesting)

john_uy (187459) | more than 7 years ago | (#15900700)

Pardon my ignorance, but why does the problem exist today? Can't it be fixed? What is the actual effect on us (since, according to the forum, the examples given in the article are bogus)? (Links will be helpful.)

When I use my calculator, it doesn't give rounded-off numbers. I suspect lots of programs have problems with rounding off, but I don't seem to notice it. Is it that insignificant?

Re:average joe (2, Interesting)

Anonymous Coward | more than 7 years ago | (#15900727)

If you think your calculator doesn't give rounded off numbers, I hope you're not working in science or engineering.

Slight clarification (1)

The Cisco Kid (31490) | more than 7 years ago | (#15900711)

'Floating point numbers'
as opposed to
'Computer implementation of the storage and manipulation of floating point numbers'

Only the latter might be suspect, depending on the implementation.

Whatever happened to what used to be known as 'scientific notation' for what are also called 'real numbers'? E.g., you store the mantissa (e.g. "37") and the exponent (e.g. -2) and there is no approximation involved, although the mantissa might have a set maximum length, so you might have trouble storing, for example, 1.000000000000000000000001 if the mantissa had a maximum length of ten digits.

Floating error (1)

Coppit (2441) | more than 7 years ago | (#15900724)

I remember an old question from grad school: why might you add a bunch of floating point numbers starting with the smallest? The answer is that a float's absolute precision goes down as its characteristic (the non-fractional part) gets larger, so you add the small numbers together first, letting them build up into something large enough to survive being added to the bigger numbers. If you add the big numbers first, adding the small numbers might not change the intermediate sum at all. Of course, real-world floating point error in numerical algorithms is much more subtle. On the other hand, software engineering errors are often not subtle at all, so maybe things balance out. :)
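A small demonstration of the ordering effect described above (a sketch, not part of the original comment; the magnitudes are chosen so the effect shows up in single precision):

#include <stdio.h>

int main(void)
{
    float big = 1.0e8f;     /* spacing between adjacent floats here is 8.0 */
    float small = 0.5f;

    /* Adding the small values one at a time: each 0.5 is far below the
       spacing at 1e8, so every addition is rounded away. */
    float a = big;
    for (int i = 0; i < 1000; i++)
        a += small;

    /* Summing the small values first preserves them, and their total
       (500) mostly survives the final addition. */
    float s = 0.0f;
    for (int i = 0; i < 1000; i++)
        s += small;
    float b = big + s;

    printf("one at a time: %.1f\n", a);   /* 100000000.0 */
    printf("small first:   %.1f\n", b);   /* close to 100000500.0 */
    return 0;
}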

Comp Sci 101 (4, Informative)

syousef (465911) | more than 7 years ago | (#15900737)

Welcome to a very poor article on what's been taught in early Comp Sci for many many years.

Any serious developer of business software knows all about this and avoids floating point at all costs for financial calculations. Scientists, however, do use floats, carefully, since the math they do is usually much more performance (speed) sensitive and the calculations are a little more complex than what tends to be done on the business side (i.e. _most_ business calcs are relatively simple).

Hmm (0)

Anonymous Coward | more than 7 years ago | (#15900779)

I found this out years ago, and took it so far as to try to find a solution to the problem. I took it on as part of my senior thesis in college and worked to develop, essentially, a class that stored numbers using a numerator and denominator which could be of arbitrary size (think BigInteger numerator, BigInteger denominator, but in C++).

The drawbacks include that it is somewhat complex, most especially the division (and, to a lesser degree, multiplication), and that it is not as efficient, because there is no hardware support for it (you don't get the floating-point hardware's help, since you're operating on multiple integers instead).

The real drawback is that only a fixed set of numbers can be finitely represented; those that cannot are forced into a fixed memory width (say, 32, 64, or 80 bits, for example). Because of this, there is a guaranteed inaccuracy in the storage (forced rounding/truncation) when the number is not finitely representable.