Why do I see a double variable initialized to some value like 21.4 as 21.399999618530273?
Solution 1
These accuracy problems are due to the internal representation of floating point numbers and there's not much you can do to avoid it.
By the way, printing these values at run time often still produces the expected result, at least with modern C++ implementations, because the default output precision rounds the stored value back to what you wrote. For most operations this isn't much of an issue.
Solution 2
I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:
See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
Solution 3
If you have a value like:
double theta = 21.4;
And you want to do:
if (theta == 21.4)
{
}
You have to be a bit clever: you need to check whether the value of theta is really close to 21.4, not whether it is exactly that value.
if (fabs(theta - 21.4) <= 1e-6)
{
}
Solution 4
This is partly platform-specific - and we don't know what platform you're using.
It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.
Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
Solution 5
Use a fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must cast explicitly if you wish to convert to floating point. If you do convert to floating point, you will reintroduce the instabilities that seem to bother you.
Alternately you can get over it and learn to work with the limited precision of floating point arithmetic. For example you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.
It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.
For mantissa-exponent representations, EPSILON must be computed to stay within the representable precision. For a number N, take Epsilon = N / 10^14 (a 64-bit double carries roughly 15-16 significant decimal digits).
System.Double.Epsilon is the smallest representable positive value for the Double type. It is too small for our purpose. Read Microsoft's advice on equality testing.
Comments
-
yesraaj about 4 years
double r = 11.631; double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273. How can I avoid this?
-
Martin York over 15 years: There are better techniques than BCD.
-
Admin over 15 years: It would have been nice to mention one or two of those techniques.
-
Jon Skeet over 15 years: Quick note (but not a contradiction) - if you use the System.Decimal type in .NET, be aware that that's still a floating point type. It's a floating decimal point, but still a floating point. Oh, and also beware of System.Double.Epsilon, as it's not what you might expect it to be :)
-
tloach over 15 years: It is something programmers should be aware of though, especially if they work with very large or very small numbers where accuracy may be important.
-
Konrad Rudolph over 15 years: Jon, the question was originally tagged as C++/VC6, so we actually knew the platform before someone decided that this information wasn't important and edited the tags.
-
Sklivvz over 15 years: Keith, actually none of your examples are irrational. Sqrt(2) is irrational, PI is irrational, but any integer divided by an integer is, by definition, rational.
-
Keith over 15 years: You're quite right - hence the single quotes. In math theory these are rational numbers; they just can't be expressed in the storage mechanism used.
-
SquareCog over 15 years: Dark -- that's not actually true. The space of representable values is much denser near 0, and much more sparse as you go out to infinity (for example, 2^24+1 can't be represented exactly using the IEEE floating point standard for 32-bit floats).
-
Peter Wone over 15 years: Exponentially sparser, in fact, because you're applying an exponent.
-
Alessandro Jacopson over 15 years: double theta = 21.4; bool b = theta == 21.4; // here b is always true
-
Reunanen over 15 years: One might actually prefer int(1000*x+.5) to make 21.4 appear as expected.
-
Nosredna almost 15 years: If you tried to do 3*1/3, you'd multiply the three by the one and have three. Then you'd divide three by three and no one should be mad. I'm assuming Joel meant to say 3*(1/3).
-
Peter Olson almost 13 years: @Nosredna It depends on whether the language you are using gives higher operator precedence to * or /.