Java Double value = 0.01 changes to 0.009999999999999787


Solution 1

Using double for currency is a bad idea; see Why not use Double or Float to represent currency? I recommend using BigDecimal or doing every calculation in cents.
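As a minimal sketch of the BigDecimal approach, using the numbers from the question ($20 paid, $10.99 cost): note the String constructor, which is what makes the values exact.

```java
import java.math.BigDecimal;

public class BigDecimalChange {
    public static void main(String[] args) {
        // Use the String constructor, not BigDecimal(double),
        // so the value is exactly the decimal written here.
        BigDecimal cost = new BigDecimal("10.99");
        BigDecimal paid = new BigDecimal("20.00");
        BigDecimal change = paid.subtract(cost);
        System.out.println(change); // prints 9.01, exactly
    }
}
```

Had `new BigDecimal(20.00 - 10.99)` been used instead, the rounding error would already be baked into the value before BigDecimal ever saw it.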

Solution 2

0.01 does not have an exact representation in floating-point (and neither do 0.1 nor 0.2, for that matter).

You should probably do all your maths with integer types, representing the number of pennies.
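A sketch of the integer-pennies approach, again assuming the question's amounts; integer division and remainder are exact, so the lost penny reappears.

```java
public class PennyMath {
    public static void main(String[] args) {
        // All amounts in integer pennies: exact, no rounding error.
        int cost = 1099;          // $10.99
        int paid = 2000;          // $20.00
        int change = paid - cost; // 901 pennies, exactly

        int quarters = change / 25; // 36 -- integer division is exact
        change %= 25;               // 1 penny remains
        int pennies = change;       // 1 -- the penny the double version lost

        System.out.println(quarters + " quarters, " + pennies + " pennies");
    }
}
```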

Solution 3

doubles aren't kept in decimal internally, but in binary. Their storage format is equivalent to a whole-number mantissa multiplied by a power of two (I'm simplifying, but that's the basic idea). Unfortunately, there's no combination of those binary values that works out to exactly decimal 0.01, which is what the other answers mean when they say that floating-point numbers aren't 100% accurate, or that 0.01 doesn't have an exact representation in floating point.

There are various ways of dealing with this problem, some more complicated than others. The best solution in your case is probably to use ints everywhere and keep the values in cents.
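You can actually see the inexact stored value: the BigDecimal(double) constructor preserves the double's exact binary value rather than the decimal you typed, so printing it reveals what 0.01 really is inside a double.

```java
import java.math.BigDecimal;

public class ExactDouble {
    public static void main(String[] args) {
        // BigDecimal(double) keeps the double's exact binary value,
        // exposing the nearest-representable neighbour of 0.01.
        System.out.println(new BigDecimal(0.01));
        // prints a long decimal slightly above 0.01, not 0.01 itself
    }
}
```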

Solution 4

As the others already said, do not use doubles for financial calculations.

The paper What Every Computer Scientist Should Know About Floating-Point Arithmetic (http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html) is a must-read to understand floating-point math in computers.

Solution 5

Floating-point numbers are never 100% accurate (not quite true; see the comments below), so you should never compare them directly for equality. Also beware of integer conversion: casting a double to int truncates rather than rounds, so you lose precision that way too. The best way to do this would probably be to work in cents and convert to dollars later (1 dollar == 100 cents).
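The truncation point is exactly what bites the questioner's last step. A small sketch, using the leftover value 0.009999999999999787 from the question, comparing a plain (int) cast with Math.round:

```java
public class RoundToCents {
    public static void main(String[] args) {
        double change = 0.009999999999999787; // what the subtractions left behind

        int truncated = (int) (change * 100);         // 0 -- truncation drops the penny
        long rounded = Math.round(change * 100);      // 1 -- rounding recovers it

        System.out.println(truncated + " vs " + rounded); // prints 0 vs 1
    }
}
```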

Author: Alex

Updated on September 15, 2020

Comments

  • Alex
    Alex over 3 years

    Possible Duplicate:
    Why not use Double or Float to represent currency?

    I'm writing a basic command-line program in Java for my high school course. We're only working with variables right now. It's used to calculate the number of bills and coins of each denomination in your change after a purchase. This is my program:

    class Assign2c {
        public static void main(String[] args) {
            double cost = 10.990;
            int paid = 20;
            double change = paid - cost;
            int five, toonie, loonies, quarter, dime, nickel, penny;
    
            five = (int)(change / 5.0);
            change -= five * 5.0;
    
            toonie = (int)(change / 2.0);
            change -= toonie * 2.0;
    
            loonies = (int)change;
            change -= loonies;
    
            quarter = (int)(change / 0.25);
            change -= quarter * 0.25;
    
            dime = (int)(change / 0.1);
            change -= dime * 0.1;
    
            nickel = (int)(change / 0.05);
            change -= nickel * 0.05;
    
            penny = (int)(change * 100);
            change -= penny * 0.01;
    
            System.out.println("$5   :" + five);
            System.out.println("$2   :" + toonie);
            System.out.println("$1   :" + loonies);
            System.out.println("$0.25:" + quarter);
            System.out.println("$0.10:" + dime);
            System.out.println("$0.05:" + nickel);
            System.out.println("$0.01:" + penny);
        }
    }
    

    It should all work, but at the last step, when there's $0.01 left over, the number of pennies should be 1; instead, it's 0. After a few minutes of stepping through the code and printing the change value to the console, I've found that at the last step, when change should be 0.01, it is actually 0.009999999999999787. Why is this happening?

    • Carl Norum
      Carl Norum over 12 years
      You can't represent all decimal fractions correctly in binary. Use integers to do these operations, or handle the rounding yourself. This question (or a variant of it) has been asked here hundreds of times. Here's a good reference to check out: download.oracle.com/docs/cd/E19957-01/806-3568/…
    • Kevin
      Kevin over 12 years
      it's easier just to do mod :P
    • Martijn Courteaux
      Martijn Courteaux over 12 years
      Aaaand there we go again.....
    • mKorbel
      mKorbel over 12 years
      Even though I never used BigDecimal (because I found a difference from MS Excel, but that's only my issue), please read one thread from OTN: forums.oracle.com/forums/…
  • Oliver Charlesworth
    Oliver Charlesworth over 12 years
    It's not true to say that "floating point numbers are never 100% accurate".
  • Matt Ball
    Matt Ball over 12 years
    Doing all calculations in cents is almost always preferable to using BigDecimal, IMO.
  • Alex
    Alex over 12 years
    Wow. Had a feeling about this but I didn't know. I'll calculate this thing in cents then
  • Voo
    Voo over 12 years
    @Matt Ball That has the obvious problems of fixed-point arithmetic, i.e. you either lose accuracy for non-round numbers or need another solution anyhow. For intermediate computations, having an accuracy of only up to 1 cent sounds like a recipe for disaster.
  • joshx0rfz
    joshx0rfz over 12 years
    I guess I should say you can't reliably compare them?
  • cHao
    cHao over 12 years
    0.5 is 100% accurate. As are (or can be) most powers of 2, and most multiples thereof (as long as the mantissa can hold the numerator of the fraction). It's when you get to fractions where the denominator isn't a power of 2, or the numerator is too large to represent fully, that you run into rounding errors.
  • joshx0rfz
    joshx0rfz over 12 years
    Ahh yeah, you are correct. Thanks.
  • Java Ka Baby
    Java Ka Baby over 12 years
    +1 Nice idea, but not always applicable. IMHO, BigDecimal is the way to go when a breakdown into cents is not possible.
  • Oliver Charlesworth
    Oliver Charlesworth over 12 years
    @Voo: For the application the user is doing, accuracy of one cent is perfectly fine.
  • Voo
    Voo over 12 years
    @Oli Charlesworth You mean an application with no real use? ;) Sure, it may be fine, but it's always a good idea to know all the advantages and disadvantages of a possible solution (especially if the only reason you're doing it is to learn something new). And calculating in cents is certainly not "almost always preferable" to BigDecimal, IMO - not if accuracy is important (and depending on what calculations you do, the end result may vary by far more than 1 cent, obviously). The only downside to BigDecimal I can see is performance.