The Trouble With Rounding Floats

lukfil writes "We all know of floating point numbers, so much so that we reach for them each time we write code that does math. But do we ever stop to think what goes on inside that floating point unit and whether we can really trust it?"
  • Decimal Arithmetic (Score:4, Insightful)

    by (1+-sqrt(5))*(2**-1) ( 868173 ) <1.61803phi@gmail.com> on Sunday August 13, 2006 @10:42PM (#15900491) Homepage
    From TFA:
    Example 1: showing approximation error.

    // some code to print a floating point number to a lot of
    // decimal places
    #include <stdio.h>

    int main(void)
    {
        float f = .37;
        printf("%.20f\n", f);
        return 0;
    }
    The main problem with that example, I take it, is that single-precision datatypes are only guaranteed to roughly seven significant decimal digits; using double, of course, only defers the problem.

    What about encoding floats as a pair of ints or longs: one to express the numerical value, and the other its tenth power; id est, decimal arithmetic [ibm.com]?
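
    As a minimal sketch of that pair-of-ints idea (the dec_t type and its names are illustrative, not taken from the IBM specification, and there is no overflow checking):

    #include <stdio.h>

    /* value * 10^exp10, both stored as exact integers */
    typedef struct { long long value; int exp10; } dec_t;

    /* align to the smaller exponent, then add exactly */
    static dec_t dec_add(dec_t a, dec_t b)
    {
        while (a.exp10 > b.exp10) { a.value *= 10; a.exp10--; }
        while (b.exp10 > a.exp10) { b.value *= 10; b.exp10--; }
        dec_t r = { a.value + b.value, a.exp10 };
        return r;
    }

    int main(void)
    {
        dec_t a = { 37, -2 };           /* 0.37, stored exactly */
        dec_t b = { 63, -2 };           /* 0.63 */
        dec_t r = dec_add(a, b);
        printf("%lld * 10^%d\n", r.value, r.exp10);  /* 100 * 10^-2, i.e. exactly 1 */
        return 0;
    }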

    • by Anonymous Coward on Sunday August 13, 2006 @10:54PM (#15900528)
      This is not newsworthy. This is computer science 101.
      • by Duhavid ( 677874 )
        It is a little newsworthy.

        I bothered to ask the question of what to use for monetary values at a financial institution in my recent past. I was a bit (pardon the pun) surprised to get a blank stare, and to have to explain what I was talking about. Floats were good enough. Of course, I had a problem in .NET with iterating through a list of values (in testing, each was .1, for 10%), and the sum wasn't 1.0. Had to do a bunch of

        decimal.Parse(value.ToString())

        to get things to sum up correctly.
        • Floats were good enough. [...] Had to do a bunch of
          decimal.Parse(value.ToString())
          to get things to sum up correctly.
          Oh god. Please tell us which financial institution you worked for so we can all avoid it like the plague.
          • by innosent ( 618233 ) <jmdority.gmail@com> on Monday August 14, 2006 @12:49AM (#15900844)
            For the uneducated, the reason that this is stupid is that IEEE-754 floating point numbers cannot REPRESENT all values, they APPROXIMATE them. There is no way to properly represent the value 0.01 as a float (0.01 is best approximated by 0x3C23D70A, or 9.9999998e-3). So, for instance, if you were to add up 100 pennies, you would have 99.999998 cents, not 100. Repetitive additions (like credits and debits from an account) or multiplications (interest calculations, amortizations, etc.) simply make the problem worse, which is why floats should NEVER be used to track money. A fixed decimal system should always be used for financial systems.
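
            A quick C illustration of exactly that penny problem (the digits printed vary slightly by platform, but the sum won't come out as exactly one dollar):

            #include <stdio.h>

            int main(void)
            {
                float total = 0.0f;
                for (int i = 0; i < 100; i++)
                    total += 0.01f;                 /* one penny, 100 times */
                printf("%.9f\n", total);            /* close to, but not exactly, 1.0 */
                printf("%s\n", total == 1.0f ? "one dollar" : "not one dollar");
                return 0;
            }
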
          • Occasionally I see stuff like this in the real world. For example, at a bar I was in once, the debit machine which received input from the cash register had a difference of 1 cent from the bill calculated in the register. I asked them what was up with that. They said something like "yeah, that happens every once in a while". To me it seemed obvious that whoever did the coding for the interface didn't have a clue about floating point rounding errors. So I tend to agree with the grandparent post... it se
          • by StressedEd ( 308123 ) <ej.graceNO@SPAMimperial.ac.uk> on Monday August 14, 2006 @04:20AM (#15901256) Homepage
            I suspect you will end up having to avoid most of them.

            Friends of mine went off to work "In The City"; when I quizzed them about their use of numbers for stock prices etc. they were equally dismayed that things were being passed around as doubles. Often encoded as ASCII text in data streams as well, requiring different people to write their own ASCII->DOUBLE conversion depending on the representation of the stock tick. I think this kind of madness is quite prevalent.

            As someone else pointed out, if you want to do things properly you can end up needing very big integers.

            Perhaps the best option is to make sure people can only buy and sell equities etc. in numbers that can be exactly represented as doubles on a computer. It sounds crazy, but it's not as crazy as it looks. One of the reasons stocks etc. are quoted as they are is probably the ease of the mental arithmetic.

            Kudos to the parent of your post. At least he knows what he is having to do is dodgy and cares enough to check!

            • by johnw ( 3725 ) on Monday August 14, 2006 @05:54AM (#15901459)
              Friends of mine went off to work "In The City"; when I quizzed them about their use of numbers for stock prices etc. they were equally dismayed that things were being passed around as doubles.

              I'd be not only dismayed but very surprised to find anything which interfaces to the London Stock Exchange passing stock prices around as doubles, or as any other kind of floating point number.

              The LSE feeds all use 18 digits for values, with the first ten implicitly before the decimal point and the remaining eight after it. This is very handy because it means all the values can be manipulated using 64-bit integers. The LSE rules also state very precisely how rounding must be handled. If you try to submit a multi-million pound deal and your calculation of the consideration is out by just one penny, the deal will be rejected.

              No-one with the slightest clue about how to code would use floating point maths in any kind of financial program, particularly not one where they're working with the LSE.
        • I bothered to ask the question of what to use for monetary
          values at a financial institution in my recent past.


          http://en.wikipedia.org/wiki/Packed_decimal [wikipedia.org]

          All CISC CPUs had opcodes to do the work, but AFAICT only COBOL (being, of course, a Business Oriented Language) implemented BCD as a primary data type.

          Damned shame, too, since it eliminates all the hassle of working with financial software.
      • by Bender0x7D1 ( 536254 ) on Sunday August 13, 2006 @11:31PM (#15900649)
        Exactly. Unfortunately, there are too many people out there who are programmers, even good ones, who don't know, or understand, the basics. While I'm not claiming that formal education is the only way to get the knowledge you need, it is a good way to avoid gaps in your knowledge. I hated some of the computer science classes I had to take, but I did learn something important in each and every one of them.

        Another advantage of the formal classes is that you get the theory that allows you to decide what data types to use and when. Sometimes you need the precision of BigNum systems (crypto, for example), and sometimes the accuracy of float is enough. For example, in a lot of financial applications, float would be good enough since 2 decimal places is enough. If you need performance, float will beat any BigNum system hands down. However, if you are dealing with decimals on top of decimals (such as calculating someone's dividend from a mutual fund where they own partial shares), you might need BigNum. Either way, with the proper theory and a good understanding of the formats, you can make these decisions.

        These situations are why I am a big supporter of actual software engineering instead of programming. Sure, standard programming is great for a lot of situations, but serious applications need to use software engineering practices. You wouldn't build a bridge without an engineer, so why build an application that handles billions of dollars without applying the same rules and principles?
        • by gweihir ( 88907 )
          These situations are why I am a big supporter of actual software engineering instead of programming. Sure, standard programming is great for a lot of situations, but serious applications need to use software engineering practices. You wouldn't build a bridge without an engineer, so why build an application that handles billions of dollars without applying the same rules and principles?

          I could not agree more. The issue is not to get it done fast or cheap. The issue is that the person designing the solution d
        • For example, in a lot of financial applications, float would be good enough since 2 decimal places is enough.

          No, if anything has to add up, then it's BigNums only. Financial apps grow, and choosing doubles to store your money is just asking for trouble.

      • I wouldn't call it CS101. In CS101 you learn, of course, that computers are finite machines and can't hold infinite values exactly (e.g. pi). Going into the precision detail would require a course in Numerical Analysis [wikipedia.org]. Since you know there will always be some error, one of the components of Numerical Analysis is figuring out what the error will be. From that you can determine what error threshold is acceptable and perform the calculations.
      • The number of people who really understand floats is less than 1% of the people who think they do.

        Do you understand that

        (A < B)

        is not the same as

        !(A >= B)

        and that

        ((A + 1) == (A))

        can be true?

        Every day, many people make the mistake of using floats when what they really wanted was just the ability to represent large numbers. For example, in Mac OS X, the system uses doubles as representations of time. This is the worst idea I can think of. First of all, floats are imprecise and time is the thing tha
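
        Both traps are easy to demonstrate in C, assuming IEEE 754 doubles (a minimal sketch):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            double A = NAN, B = 1.0;
            /* every ordered comparison against NaN is false,
               so (A < B) and !(A >= B) really do differ */
            printf("(A < B)   = %d\n", A < B);      /* 0 */
            printf("!(A >= B) = %d\n", !(A >= B));  /* 1 */

            /* once adjacent doubles are more than 1 apart,
               adding 1 changes nothing */
            double big = 1e17;
            printf("(big + 1) == big: %d\n", (big + 1.0) == big);  /* 1 */
            return 0;
        }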

    • Since these are computers, and they deal primarily with binary internally, why not store the numerical value and the 2nd power instead? Oh, and since we generally need more bits of accuracy in the numerical value than the exponent (do you often deal with numbers 2**(2**32)?), why not allocate a "reasonable" number of bits to the exponent and leave more for the numerical value.

      Uh oh, we just re-invented floating point. Oh well, nice try.

      If you were just trying to get better accuracy by using base 10 rather t
      • Since these are computers, and they deal primarily with binary internally

        Last I checked, they use binary internally exclusively, not primarily. ;-)

        Unless things have changed and nobody told me. :-P

        Cheers
    • by SageMusings ( 463344 ) on Sunday August 13, 2006 @11:20PM (#15900610) Journal
      Okay,

      Show of hands: Who did not already understand that floats are approximations? Anyone? I didn't think so. I've gotta wonder why this story ever made it onto Slashdot. This is more worthy of Time magazine, where it could be spun as a startling new revelation about the dirtier corners of computer science, foisting a lie on the public.

      • I have to agree. If I did not know this in high school, I certainly knew it in the first week or two of the first CompSci class I took my first semester at school. Not trying to be snotty, but this is really obvious stuff in the CompSci world.
      • The link text is a misnomer: "what goes on inside that floating point unit"

        I was expecting some information about the FPU itself - parallel processing, pipelining and all that.

        But it links to: "The trouble with rounding floating point numbers"

        Kind of shallow...

    • What about encoding floats as a pair of ints or longs: one to express the numerical value, and the other its tenth power

      Old news. Of course that is the way to do it if you need exact decimals. If you have a limited range, then you can also just use one int and a fixed exponent, i.e. fixed-point arithmetic. Use a long-number package (e.g. GNU MP) if you need more precision.

      The whole article is about a very old and very well known and understood problem. My guess is the real problem is the quality of the pro
    • They kinda are...

      IEEE floats are encoded as binary data, that is, as a base-2 number in scientific notation. We first assume that the first bit (the only one before the decimal (binimal?) point) is 1. We can assume this because, in base 2, a properly normalized number will have 1 as its first digit. As such, we don't have to store it.

      The later digits represent successive negative powers of two: 1/2, 1/4, 1/8, 1/16, etc. The shift (the exponent) is stored as well.

      So, basically, they're stored as 1.BBBB x 2^N

      Is this the most effic
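
      The stored fields are easy to pick apart, assuming a 32-bit IEEE 754 float (a sketch):

      #include <stdio.h>
      #include <string.h>
      #include <stdint.h>

      int main(void)
      {
          float f = 0.37f;
          uint32_t bits;
          memcpy(&bits, &f, sizeof bits);            /* view the raw bits safely */

          uint32_t sign     = bits >> 31;
          int      exponent = (int)((bits >> 23) & 0xFF) - 127;  /* remove the bias */
          uint32_t mantissa = bits & 0x7FFFFF;       /* the leading 1. is implicit */

          printf("sign=%u exponent=%d mantissa=0x%06X\n", sign, exponent, mantissa);
          return 0;
      }
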
    • by eliot1785 ( 987810 ) on Sunday August 13, 2006 @11:50PM (#15900708)
      This is why I use DECIMAL and not FLOAT in MySQL. Problem solved. I'm not a big fan of floats, the extreme precision that they seem to have is mostly an illusion.
      • Dunno why that was modded funny. It's somewhat true.

        The potential relative error in any float is about 1/(2^N) where N is the number of bits used to store the significant digits (called the mantissa)... just like the potential error in a written decimal number is 1/(10^N).

        So? Well, for stuff where you need three decimal digits' worth of accuracy (N=3), you need about N=10 in binary, since 2^10 ≈ 10^3.

        Just for reference:
        IEEE 754 (standard floating point numbers) in 32-bit uses 23 bits for its mantissa, and 8 bits as its exponent. Maximu
      • Up until MySQL 5.0 calculations with DECIMALs were still done as DOUBLEs, so you could get unexpected results.
  • by Not The Real Me ( 538784 ) on Sunday August 13, 2006 @10:50PM (#15900516)
    This is why I use the decNumber library from IBM.

    http://www2.hursley.ibm.com/decimal/decnumber.html [ibm.com] The decNumber library implements the General Decimal Arithmetic Specification[1] in ANSI C. This specification defines a decimal arithmetic which meets the requirements of commercial, financial, and human-oriented applications.

    The library fully implements the specification, and hence supports integer, fixed-point, and floating-point decimal numbers directly, including infinite, NaN (Not a Number), and subnormal values.

    The code is optimized and tunable for common values (tens of digits) but can be used without alteration for up to a billion digits of precision and 9-digit exponents. It also provides functions for conversions between concrete representations of decimal numbers, including Packed Decimal (4-bit Binary Coded Decimal) and three compressed formats of decimal floating-point (4-, 8-, and 16-byte).
    • by piranha(jpl) ( 229201 ) on Monday August 14, 2006 @12:01AM (#15900740) Homepage

      Rational number arithmetic is a more general solution. Any number that can be expressed in decimal or floating-point notation is rational; any rational number can be expressed as (n/d), where n and d are integers. We have "bigints": unbounded-magnitude integers constrained only by the memory of the computer they are stored on. Rational numeric data types pair two bigints together to give you unbounded magnitude and precision, and have been implemented for decades.

      They probably aren't directly supported in your favorite programming language because they are slow to work with when you need very high precision; after each calculation, the rational number needs to be reduced to its lowest terms. This involves computing a greatest common divisor, which takes time that grows with the size of the terms themselves.

      Consider the use of integers, floats, or decimals only as an optimization when it has been shown that an application is suffering a serious performance hit because of rational arithmetic, and when you can use a faster data type knowing that your program will perform within accuracy goals.

      For 90% of computing problems, monetary calculations included, you shouldn't even have to worry about what numeric type you're using. Your language should assume rationals unless told otherwise. Common Lisp, Scheme, and Nickle do exactly that.

      C developers can use GMP [swox.com]. Other developers can use one of many bindings to GMP.
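
      For a concrete taste, a minimal mpq example in C (compile with -lgmp); 1/3 plus 1/6 comes out as exactly 1/2, with no rounding anywhere:

      #include <gmp.h>
      #include <stdio.h>

      int main(void)
      {
          mpq_t a, b, sum;
          mpq_init(a); mpq_init(b); mpq_init(sum);

          mpq_set_ui(a, 1, 3);        /* 1/3 */
          mpq_set_ui(b, 1, 6);        /* 1/6 */
          mpq_add(sum, a, b);         /* result is kept in lowest terms */

          gmp_printf("%Qd\n", sum);   /* prints 1/2 */

          mpq_clear(a); mpq_clear(b); mpq_clear(sum);
          return 0;
      }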

  • by Anonymous Coward
    If you are actually concerned about rounding and precision, use decimal [ibm.com] instead.
  • by www.sorehands.com ( 142825 ) on Sunday August 13, 2006 @10:57PM (#15900540) Homepage
    I am Intel of Borg, you will be approximated.

    There have been many examples, such as the original Pentium FDIV bug. Of course, there was also a bug in Windows Calc: 2.01 - 2.0 = 0 (if I remember correctly).

  • by mlyle ( 148697 ) on Sunday August 13, 2006 @10:57PM (#15900542)
    Apparently the author of the article didn't read the stories in RISKS that he cited. In particular, the 'pensioners being shortchanged' one talks about them not being paid interest on 'float'-- cash flow on transactions in progress. This has little to do with floating point numbers.

    Similarly, the spacecraft problem mentioned is one of an errant cast, not because of dilution of precision in floating point calculations.

    The author could really have picked his examples better, as mistakes in numerical programming happen often and are often of great import.
    • If we think back to the good old days of the first Gulf War and all that, we might remember the Patriot missile and what a dismal failure that was. Part of the problem there was that the missile's clock values were such that they would not convert to base 2 (and hence to float) accurately and so the tracking was off and lots of expensive misses happened. If you recall, lots of US soldiers died when a Scud that theoretically ought to have been shot down hit their barracks.

      As usual, it's not just one thing
      • by codegen ( 103601 ) on Monday August 14, 2006 @12:20AM (#15900774) Journal
        Part of the problem there was that the missile's clock values were such that they would not convert to base 2 (and hence to float) accurately and so the tracking was off

        Actually the problem was that they used a float to store the system time (time since power on) in the ground radar unit. It allowed the clock to be used in calculations without a conversion. A float will store an integer just fine (and accurately) until the number gets too large, and then the units part drops off the bottom of the precision and the increment operator no longer makes any sense. This was a design decision that made sense for the role for which the missile platform was originally designed. The Patriot was originally designed to be used in the European theater (if the Cold War ever turned hot) and as such would never remain in one location for more than a very few days. The clock is reset every time they move the battery (they power off the ground tracking radar when they move). The use in the Gulf War was in a strategic role (not tactical) which kept them continuously operating in a single location for long periods of time, and the shortcut they used came back to haunt them (as usual). If they had reset the system every few days, the problem would not have occurred.
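
        A toy version of that failure mode, sketched in C: a float "clock" ticked every tenth of a second for a simulated 100 hours drifts visibly.

        #include <stdio.h>

        int main(void)
        {
            float t = 0.0f;
            for (long i = 0; i < 100L * 3600L * 10L; i++)   /* 100 hours of 0.1 s ticks */
                t += 0.1f;
            printf("float clock: %.1f s (true elapsed time: %.1f s)\n",
                   t, 100.0 * 3600.0);      /* the two values disagree noticeably */
            return 0;
        }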

  • Not news. (Score:5, Insightful)

    by SJasperson ( 811166 ) on Sunday August 13, 2006 @10:58PM (#15900545)
    This is not a new problem. Or an unsolved one. Is there any modern programming language that does not supply a data type or library with exact decimal arithmetic support? Using a float to represent monetary amounts and expecting them to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went from all the addresses in New England. If you can't be arsed to choose the right data type get out of the business.
    • This happens more often than you might think. I used to work for a large American software corp. One of their products, which I had the pleasure of working on as part of one of two teams, had an accounting portion that used floating-point columns in our DB schema and in code. I was kind of disgusted and a little disturbed after finding that this software is used by some pretty large organizations to track assets, etc.

      This reminds me of something someone I knew said once: you don't really have to be
    • Using a float to represent monetary amounts and expecting them to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went from all the addresses in New England

      For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

      • For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

        In databases? Currency formats. Which are specifically designed not to lead to rounding errors. Some of them even allow you to specify the number of places after the decimal.

        In code? A numeric type designed for currency work (typically an add-in library). In a pinch you can use a 32bit integer and use the last 2 decimal digits as "cents", but you'll run the risk of overfl
        • In code? A numeric type designed for currency work (typically an add-in library). In a pinch you can use a 32bit integer and use the last 2 decimal digits as "cents", but you'll run the risk of overflow.

          In my case, only on the negative side :-)
      • For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

        Without the use of any libraries? Integers -- just use cents as the base unit of currency, and convert to dollars strictly on input and display.

        If you're dealing with amounts of cents that could possibly start overflowing even a 32-bit int (that is, billions of cents, or tens of millions of dollars), then the application's important enough to be worth the cost of further resea

        • Re:Not news. (Score:3, Informative)

          If you're dealing with amounts of cents that could possibly start overflowing even a 32-bit int (that is, billions of cents, or tens of millions of dollars), then the application's important enough to be worth the cost of further research on the matter.

          ...especially when you can find roughly 10 gazillion alternatives for about $1 worth of research time.

          Unfortunately, most of the obvious alternatives are either somewhat restrictive, or have relatively poor performance. For example, on a 64-bit machine

      • For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

        E.g. long long with cents as the unit. Gives you a range of roughly 92 000 000 000 000 000 full currency units either side of zero. Should be enough for most apps and gives you exact calculations. And takes only 64 bits, same as double.
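
        A minimal sketch in C (the formatting is illustrative; negative balances need a little more care):

        #include <stdio.h>
        #include <inttypes.h>

        int main(void)
        {
            int64_t balance = 0;                /* cents */
            for (int i = 0; i < 100; i++)
                balance += 1;                   /* one penny, one hundred times */
            printf("$%" PRId64 ".%02" PRId64 "\n",
                   balance / 100, balance % 100);   /* exactly $1.00 */
            return 0;
        }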

      • Re:Not news. (Score:2, Informative)

        by Duhavid ( 677874 )
        Binary Coded Decimal would be one.

        I looked for a page that described the advantages of BCD, but I could not find one. So I'll have a stab at it myself. Basically, while slower, BCD can maintain arbitrary precision. If you have monetary items and you have a good handle on the range of values, you can store and operate on these values without any rounding losses at all.
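
        For the curious, a sketch in C of adding two packed-BCD bytes (two decimal digits per byte; the carry out of the top digit is simply dropped in this sketch):

        #include <stdio.h>
        #include <stdint.h>

        static uint8_t bcd_add(uint8_t a, uint8_t b)
        {
            uint8_t lo = (a & 0x0F) + (b & 0x0F);
            uint8_t carry = 0;
            if (lo > 9) { lo -= 10; carry = 1; }    /* decimal-adjust the low digit */
            uint8_t hi = (a >> 4) + (b >> 4) + carry;
            if (hi > 9) hi -= 10;                   /* carry out ignored here */
            return (uint8_t)((hi << 4) | lo);
        }

        int main(void)
        {
            printf("%02X\n", bcd_add(0x37, 0x45));  /* decimal 37 + 45: prints 82 */
            return 0;
        }
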
      • For those of us who aren't programming geniuses- what would you use to store a monetary amount, besides a floating-point format?

        Decimal data types. In COBOL or PL/I (in which most of these applications are written) you use a PIC data type. For example,

        PIC 9999V999

        says the number has 4 integer digits and 3 fractional digits. It also may not hold a negative number. You add an S character to the front to allow negative numbers.

        The language runtime interprets the numbers and there is no approxim

    • Did anyone claim it is a new problem? Of course it will be new to some people and old hat to others. As soon as I saw the title I knew what it would be about, but there are always new programmers coming through the system who don't know this stuff.
    • by mcrbids ( 148650 ) on Sunday August 13, 2006 @11:34PM (#15900654) Journal
      Using a float to represent monetary amounts and expecting them to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went from all the addresses in New England.

      Hrrmm, well...

      That would explain our lack of customer response in New England...
    • Using a float to represent monetary amounts and expecting them to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went ...

      It's not stupid, it's just ignorant.

      Your comparison with integer zip codes is totally bogus: that's not an arithmetic error, it's just sloppy formatting. The variable contains a perfectly accurate value — you just have to remember to output it with a %05d format. (Of course, since you don't do arithmetic on zip co

  • science; business (Score:5, Insightful)

    by bcrowell ( 177657 ) on Sunday August 13, 2006 @11:04PM (#15900563) Homepage

    He talks about scientific applications, but actually very few scientific calculations are sensitive to rounding error. Remember, they sent astronauts to the moon using slide rules. Generally for scientific applications, you just don't want to roll your own crappy subroutines for stuff like matrix inversion; use routines written by people who know what they're doing. (And know the limitations of the algorithm you're using. For example, there are certain goofy matrices that will make a lot of matrix inversion algorithms blow chunks.)

    For business apps, the classic solution was to use BCD arithmetic. But today, is it more practical (and simple) just to use a language like Ruby, that has arbitrary-precision integers, so you can just store everything in units of cents? A lot of machines used to have special BCD instructions; do those exist on modern CPUs?

    • You'll be delighted to know your 80386 or later x86 supports packed and unpacked binary coded decimal. Incidentally, matrix inversion is a crappy way to solve linear systems; there are much better ways that don't cause tiny approximation errors to magnify many-fold.
    • But today, is it more practical (and simple) just to use a language like Ruby, that has arbitrary-precision integers, so you can just store everything in units of cents?

      Hmm.... if you use integers of any given finite precision, aren't you still subjecting yourself to round-off error? (e.g. ((int)4)/((int)3) == 1!!) On the other hand, if you use a string-based infinite-precision datatype, what happens when you try to compute a non-terminating number (e.g. 1.0/3.0)? Perhaps your program crashes after tr

  • by Null Nihils ( 965047 ) on Sunday August 13, 2006 @11:10PM (#15900583) Journal
    float (and its big brother double) is inaccurate. It's no surprise. A 32-bit float is but a single simple tool in a programming language. If anyone is surprised by how floats behave then they are, most likely, inexperienced.

    You don't start addressing a problem in software just by assuming Float or Double will magically fill every need. An experienced programmer needs to have a knowledge of how to use, and how not to use, the programming tools at hand. TFA about floating point numbers is very introductory (at the end it mentions that the next article will tell us how to "avoid the problem"... I assume it will go on to cover some basic idioms.) In a way it misses the point: Floating-point rounding is not a "problem". Floats and Doubles always do their job, but you have to know what that job is! The behaviour of floating point numbers should not be a big surprise to a seasoned coder.

    For example: You can't use float or double to store the numerical result of a 160-bit SHA-1 hash... you have to use the full 160 bits. (Duh, right?) So, if you use a mere 32 bits (float) or 64 bits (double) to store that number, you are going to sacrifice a lot of accuracy!
  • 1.125*59=66.375 in gnumeric. Yes, it rounds to 66.38. So it's not really a problem and you can beat the author to the punch by looking at the gnumeric source.

  • Numbers and bases (Score:5, Insightful)

    by Todd Knarr ( 15451 ) on Sunday August 13, 2006 @11:17PM (#15900600) Homepage

    We have the same problem in everyday numbers. Try representing 1/3 in any finite number of digits. You can't. The big thing about floating-point numbers that trips people up is that we're used to thinking in base 10. Floating-point numbers in computers typically aren't in base 10; they're in base 2. The rounding problem he describes is simply us getting confused and wondering why a fraction with an exact representation in base 10 doesn't have an exact representation in base 2. The obvious solution is the one he alludes to at the end: don't use base 2. Computers have had base-10 arithmetic in them for decades; in fact the x86 family has base-10 arithmetic instructions built in (the packed-BCD instructions). COBOL has used packed BCD since its beginning, which is why you don't find this sort of calculation error in ancient COBOL financial packages running on mainframes.

    • You know, your low user ID would imply you've been around a while, but you totally missed the point. Base 2 is not the only problem, and you don't need to switch and/or use BCD to avoid these problems. Read the article again (or for the first time), and if you still don't get it look up "decimal" or "fixed-point" types in your favorite strongly-typed programming language.
      • I think the GP poster has quite got the point, though. The problem isn't IEEE-754 floating point; the problem is people not getting it right, in other words, people not understanding the binary precision of the mantissa when brought back to decimal. In school we were taught not to trust IEEE-754 floats because they may represent 0.37 as 0.369999998, but the thing is that in real-life problems, well at least my real-life problems, that type of thing doesn't matter to me because my numbers wouldn't be par
    • Try representing 1/3 in any finite number of digits.

      0.3. All you need is base 9 :)
    • We have the same problem in everyday numbers. Try representing 1/3 in any finite number of digits. You can't.

      That one is real easy: don't use a "real" format, i.e. floats. Use a "rational" format, i.e. two integers, where the value is the one divided by the other. This is one of the standard formats in the GNU multiprecision library.

      Should be obvious really from standard school mathematics: float approximates R, but Q can be done exactly and is wherever needed, and at least somebody has elementary mathematics sk
    • Try representing 1/3 in any finite number of digits. You can't.

      "1/3"

      You can. I just did. So did you. In base-10, even. In fact, the answer is the same for base-4 or higher. Using only two digits, "1" and "3". Any rational number can be represented using a finite number of digits, using... (wait for it) a RATIO.

      (Represent one-third in base 2? Why, that would be "1/11". One-third in base 3 would be "1/10".)
  • by Animats ( 122034 ) on Sunday August 13, 2006 @11:22PM (#15900618) Homepage

    Due to the efforts of William Kahan [berkeley.edu] at U.C. Berkeley, IEEE 754 floating point, which is what we have today on almost everything, is far, far better than earlier implementations.

    Just for starters, IEEE floating point guarantees that, for integer values that fit in the mantissa, addition, subtraction, and multiplication will give the correct integer result. Some earlier FPUs would give results like 2+2 = 3.99999. IEEE 754 also guarantees exact equality for integer results; you're guaranteed that 6*9 == 9*6. Fixing that made spreadsheets acceptable to people who haven't studied numerical analysis.

    The "not a number" feature of IEEE floating point handles annoying cases, like division by zero. Underflow is handled well. Overflow works. 80-bit floating point is supported (except on PowerPC, which broke many engineering apps when Apple went to PowerPC.)

    Those of us who do serious number crunching have to deal with this all the time. It's a big deal for game physics engines, many of which have to run on the somewhat lame FPUs of game consoles.
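
    Those guarantees are easy to check, and so is where they run out (2^53 for a double's mantissa); a quick C sketch:

    #include <stdio.h>

    int main(void)
    {
        double limit = 9007199254740992.0;                      /* 2^53 */
        printf("6*9 == 9*6: %d\n", 6.0 * 9.0 == 9.0 * 6.0);     /* 1: exact */
        printf("2^53 + 1 == 2^53: %d\n", limit + 1.0 == limit); /* 1: out of mantissa */
        return 0;
    }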

  • They're convenient while programming, but they can certainly be a PITA to use properly. First, they don't compare properly (you can't test for equality), and if you have to do multiplatform programming transferring floats, they had better be stored in a standard format (which can have the nasty side-effect of slowing down your floating point arithmetic, since after each operation the unit has to convert back to IEEE format from machine native).

    I've seen programmers who never realized these facts and had them ask wh
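
    The usual workaround for the equality problem is a tolerance comparison; a hedged sketch in C (the tolerance is arbitrary here and should be chosen per application; compile with -lm):

    #include <stdio.h>
    #include <math.h>

    /* relative comparison: true when a and b differ by at most tol of their size */
    static int nearly_equal(double a, double b, double tol)
    {
        return fabs(a - b) <= tol * fmax(fabs(a), fabs(b));
    }

    int main(void)
    {
        double x = 0.1 + 0.2;
        printf("x == 0.3:     %d\n", x == 0.3);                    /* 0 */
        printf("nearly_equal: %d\n", nearly_equal(x, 0.3, 1e-12)); /* 1 */
        return 0;
    }
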
  • IEEE 754 specifies exact results. There is no room for interpretation. That means it is not the FPU we should be worried about but IEEE 754 itself. However, I find that IEEE 754 is quite well done.

    On the other hand, people that program with floats and do not know or understand IEEE 754 are asking for trouble. But that is true with every type of library. Knowledge and insight can only be replaced by more knowledge and greater insight. El-cheapo outsourcing to India or hiring people without solid CS skills as programme
    • P.S.: None of the described problems are in any way new or surprising. This is a very old discussion. Any decent CS or numerics course will explain what is going on and how to get around it. Also, there are good long-number libraries out there (e.g. GNU mp) that allow you to get past these problems as well, to any precision you like.

  • by jd ( 1658 ) <imipak@ y a hoo.com> on Sunday August 13, 2006 @11:30PM (#15900643) Homepage Journal
    One of the many many solutions:


    • Fixed-point numbers
    • Berkeley MP or GNU MP arbitrary-length floating point
    • Co-processors with truly massive internal registers (I refuse to use less than 80-bit)
    • Delayed calculation (i.e. actually process a calculation at the end, storing the inputs and operators until you absolutely need the value - eliminates intermediate rounding errors, and if the value is never needed, you don't waste the clock cycles)
    • Don't use real numbers - apply a scale factor or a transform such that ALL components of any scaled/transformed calculation are integers, then only transform back for display purposes


    The use of transforms for handling numerical calculations is an old trick. It is probably best-known in its use as a very quick way to multiply or divide using logarithms and a slide-rule, prior to the advent of widely-available scientific calculators and computers. Nonetheless, devices based on logarithmic calculations (such as the mechanical CURTA calculator) can wipe the floor with most floating-point maths units - this despite the fact that the CURTA dates back to the mid 1940s.

    • by philipgar ( 595691 ) <pcg2&lehigh,edu> on Monday August 14, 2006 @12:38AM (#15900826) Homepage
      Logarithmic number systems (LNS) for computers were first proposed by Marasa and Matula in 1973, as a "better" approximation of numbers than floating point. Their paper compared the cumulative error from different floating point standards with LNS standards. LNS offers some advantages over floating point; however, its performance degrades significantly as you add more bits of precision.

      LNS can be effective to around 24 bits of precision, and then the hardware requirements for the LNS unit's adder/subtracter become too overwhelming. This is because multiplications and divisions are fast on LNS units (with minimal hardware), since they require just an adder; handling addition and subtraction, however, is much more difficult. The simplest (naive) methods of making an adder and subtracter involve large ROM lookup tables. Fancier, more efficient units use smaller ROMs and small multipliers to help get better values (I don't remember all the details offhand). Sometimes they'll even trade precision for faster performance. This can result in chips with single-cycle multiplies and divides, but multi-cycle additions and subtractions. For low-precision calculations requiring many divides and multiplies, LNS processors can often achieve the best performance. However, for many applications an efficient LNS unit with sufficient precision just isn't practical.

      Phil
  • The article asks, "But do we ever stop to think what goes on inside that floating point unit and whether we can really trust it ?"

    The second part of the question can be easily answered. Compile the computer program in two ways. First, set the compiler to not use the floating-point unit (FPU). Just generate the instructions for explicitly doing the floating-point computations in software. Run the compiled code and save the results.

    Second, set the compiler to explicitly use the FPU. Generate FPU in

  • by swillden ( 191260 ) * <shawn-ds@willden.org> on Sunday August 13, 2006 @11:35PM (#15900660) Journal

    The author goes on and on about how floating point numbers are inaccurate, and unable to precisely represent real values, like this is something new, or even something different from the number approximations we normally use.

    The reason the examples the author cites can't be represented precisely is that floating point numbers are ultimately represented as base-2 fractions, and there are a bunch of finite-length base-10 fractions that don't have a non-repeating base-2 representation. Guess what? We have *exactly* the same problem with the base-10 fractions that everyone uses all the time. Show me how you write 1/3 as a decimal!

    The problem isn't that floating point numbers are inherently problematic, the problem is that we typically use them by converting base-10 numbers to them, doing a bunch of calculations and then converting them back to base 10. Floating point rounding isn't an unsolved problem -- floating point rounding works perfectly, and always has. It's just that the approximations you get when you round in base 2 don't match the approximations you get when you round in base 10.

    Bottom line: If you care about getting the same results you'd get in base 10, do your work in base 10. This is why financial applications should not use floating point numbers.

  • by emarkp ( 67813 ) <[moc.qdaor] [ta] [todhsals]> on Sunday August 13, 2006 @11:39PM (#15900674) Journal
    What Every Computer Scientist Should Know About Floating-Point Arithmetic (HTML [sun.com], PDF [loria.fr]).

    and

    When bad things happen to good numbers [petebecker.com] (as well as Becker's other floating-point columns on that same page)

  • average joe (Score:2, Interesting)

    by john_uy ( 187459 )
    pardon my ignorance, but why does the problem exist today? can't it be fixed? what is the actual effect on us (since, per the comments in this forum, the examples given in the article are flawed)? (links will be helpful.)

    when i use my calculator, it doesn't give rounded off numbers. i suspect lots of programs will have problems with rounding off but i don't seem to notice it. is it that insignificant?
    • Re:average joe (Score:2, Interesting)

      by Anonymous Coward
      If you think your calculator doesn't give rounded off numbers, I hope you're not working in science or engineering.
    • Re:average joe (Score:3, Interesting)

      by dcollins ( 135727 )
      "when i use my calculator, it doesn't give rounded off numbers."

      Not true.

      In the math class I teach, I do the following: have everyone take a calculator and do "2/3".
      Half of the calculators say this: "0.666666666" (rounded down).
      Half of the calculators say this: "0.666666667" (rounded up).

      In truth, an exact answer requires an infinite sequence of "6"'s. The calculator (or any computer) must decide whether to round up or down to fit it into its display space (or memory). You always have some round-off error -
  • 'Floating point numbers'
    as opposed to
    'Computer implementation of the storage and manipulation of floating point numbers'

    Only the latter might be suspect, depending on the implementation.

    Whatever happened to what used to be known as 'scientific notation' for what are also called 'real numbers'? Eg, you store the mantissa (eg "37") and the exponent (eg -2) and there is no approximation involved, although the mantissa might have a set maximum length, so you might have trouble storing, for example 1.00000000000
  • I remember an old question from grad school: Why might you add a bunch of floating point numbers starting with the smallest? The answer is that floating point accuracy in the fractional part goes down as the characteristic (the integer part) grows, so add the small numbers together to build up a larger value before you add in the larger numbers. If you add the big numbers first, adding the small numbers might not change the intermediate sum at all. Of course, real-world floating point error in numerical algorithms is much more su
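
    The effect shows up with nothing fancier than floats and a loop; the magnitudes below are chosen to make the loss obvious:

    #include <stdio.h>

    int main(void)
    {
        const int n = 10000000;
        float small_first = 0.0f, large_first = 1e8f;

        for (int i = 0; i < n; i++) small_first += 1.0f;  /* tiny values first... */
        small_first += 1e8f;                              /* ...then the big one */

        for (int i = 0; i < n; i++) large_first += 1.0f;  /* big value already in */

        printf("small first: %.1f\n", small_first);   /* 110000000.0 */
        printf("large first: %.1f\n", large_first);   /* stuck at 100000000.0 */
        return 0;
    }
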
  • Comp Sci 101 (Score:5, Informative)

    by syousef ( 465911 ) on Monday August 14, 2006 @12:01AM (#15900737) Journal
    Welcome to a very poor article on what's been taught in early Comp Sci for many many years.

    Any serious developer of business software knows all about this and avoids floating point at all costs for financial calculations. Scientists, however, do use floats, carefully, since the math they do is usually much more performance (speed) sensitive and the calculations are a little more complex than what tends to be done on the business side (i.e. _most_ business calcs are relatively simple).
  • by Opportunist ( 166417 ) on Monday August 14, 2006 @12:26AM (#15900796)
  • by Opportunist ( 166417 ) on Monday August 14, 2006 @12:26AM (#15900796)
    And I'm not sure it can be solved altogether. When you spend a little time meditating over IEEE 754, you notice a few flaws. The first and most obvious is, of course, that no matter how precise you want to make it, somewhere there's a cutoff. And, especially when you multiply floats, that error grows as well. But there's another problem. Two, actually.

    The first one is the one mentioned in the article, and something everyone who didn't sleep through his IT classes should know: computers calculate in binary, and converting floats between decimal and binary isn't possible without error. There is no way to represent 0.37 in binary, in IEEE 754, no matter how many bits you spend on the mantissa. Now, you can argue that if you make it "big enough", it doesn't matter anymore, since the error is well within the margin, and when you round to, say, five places after the decimal point, the error vanishes. True. But when you start calculating, when you multiply or, worse, exponentiate, the error grows in big leaps.

    Another, less obvious, problem is hidden underneath the way IEEE 754 works: your error grows as your numbers grow. This might seem obvious, but it is interesting how many people overlook it in everyday life. Since, according to the IEEE 754 standard, real numbers are stored as exponent and mantissa, if you're dealing with BIG numbers a fair deal of your mantissa is spent on the integer part of your number (the part before the decimal point), so you're losing precision. You can't reliably say that "a double is good for 5 digits after the dot, no matter what"; you have to take into account how many of those precious mantissa bits are spent before you even get to ponder what's left for your precision.

    This isn't so much a problem of processors. It's a problem of people understanding how their processors work.
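
    The growth is easy to see with nextafter(), which returns the adjacent representable double (compile with -lm):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double points[] = { 1.0, 1e6, 1e12, 1e15 };
        for (int i = 0; i < 4; i++) {
            double x = points[i];
            printf("near %.0e the next double is %g away\n",
                   x, nextafter(x, INFINITY) - x);
        }
        return 0;
    }
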
  • by Myria ( 562655 ) on Monday August 14, 2006 @12:33AM (#15900812)
    GIMPS [mersenne.org] looks for Mersenne primes. This is clearly an exact integer operation. However, for speed, they use Fast Fourier Transforms [wikipedia.org] to do the big squaring operation with floating point. Obviously, they need an exact result.

    The trick is to carefully calculate exactly how much error each operation can generate. It is possible to know exactly how many bits of your result contain valid information. If you need more accuracy, you can split the work into multiple operations. As long as the final accumulated error in the result is less than .5, you have the integer answer you need. Note that it's basically impossible to do this without using assembly language, because the order of operations and subexpression elimination definitely matter.

    Another interesting problem occurs with floating point results. You cannot expect the complete answer to be exactly identical on all machines. Even on the same machine, compiler settings affect the answer: x87 differs significantly from SSE. If you are doing something that needs bitwise identical results on all machines, you need to either implement it with integer math, or do what GIMPS does and do error tracking.

    Melissa
    • by ponos ( 122721 ) on Monday August 14, 2006 @05:16AM (#15901368)
      This is clearly an exact integer operation. However, for speed, they use Fast Fourier Transforms to do the big squaring operation with floating point. Obviously, they need an exact result.
      All serious bignum libraries use (or should use!) the FFT to multiply very big numbers. This has been studied extensively (see Knuth, The Art of Computer Programming, Vol. 2, for example) and is the fastest known way to multiply. The general idea is that after the transform you can multiply pointwise in O(N), which is much faster than the naive O(N*N) one would expect from a simplistic digit-by-digit approach.

      P.

  • rounding algorithms (Score:4, Informative)

    by trb ( 8509 ) on Monday August 14, 2006 @01:04AM (#15900871)
    If you're interested in rounding (and who isn't?) you might want to read An introduction to different rounding algorithms. [pldesignline.com]
  • by gatkinso ( 15975 ) on Monday August 14, 2006 @09:27AM (#15902090)
    ...THAT should keep me safe from thems nasty float errors!
  • by ChrisA90278 ( 905188 ) on Monday August 14, 2006 @11:54AM (#15903216)
    I still have this textbook I got in 1971. It's called "Computer Science: A First Course". It talks about this same exact problem of representation. It compared integer, floating, and decimal representations.

    Why would this count as "news"? Everyone who has to deal with this already knows about it.
