Convert float to double loses precision but not via ToString


Solution 1

It's not a loss of precision; 0.3 is not representable in floating point. When the system converts to the string it rounds; if you print out enough significant digits you will get something that makes more sense.

To see it more clearly

float f = 0.3f;
double d1 = System.Convert.ToDouble(f);
double d2 = System.Convert.ToDouble(f.ToString("G20"));

string s = string.Format("d1 : {0} ; d2 : {1} ", d1, d2);

output

"d1 : 0.300000011920929 ; d2 : 0.300000012 "

Solution 2

You're not losing precision; you're upcasting to a more precise representation (double, 64 bits long) from a less precise representation (float, 32 bits long). What you get in the more precise representation (past a certain point) is just garbage. If you were to cast it back to a float FROM a double, you would have the exact same precision as you did before.

What happens here is that you've got 32 bits allocated for your float. You then upcast to a double, adding another 32 bits for representing your number (for a total of 64). Those new bits are the least significant (the farthest to the right of your decimal point), and have no bearing on the actual value since they were indeterminate before. As a result, those new bits have whatever values they happened to have when you did your upcast. They're just as indeterminate as they were before -- garbage, in other words.

When you downcast from a double to a float, it'll lop off those least-significant bits, leaving you with 0.300000 (7 digits of precision).
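
To illustrate that round trip with a quick sketch (the variable names are just illustrative), the float comes back unchanged after being widened and narrowed again:

float original = 0.3f;
double widened = original;        // widening conversion to double
float narrowed = (float)widened;  // narrowing back drops the extra fraction bits

Console.WriteLine(original == narrowed); // True
Console.WriteLine(narrowed);             // 0.3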

The mechanism for converting from a string to a float is different; the compiler needs to analyze the semantic meaning of the character string '0.3f' and figure out how that relates to a floating point value. It can't be done with bit-shifting like the float/double conversion -- thus, the value that you expect.
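
Applied to the question's code, the string route is really two conversions, and the rounding happens in the ToString step; a small sketch (the exact text ToString produces can vary slightly between .NET versions):

float f = 0.3f;
string s = f.ToString();               // "0.3" -- rounded to a short decimal form
double d = System.Convert.ToDouble(s); // parses "0.3", giving the double nearest to 0.3

Console.WriteLine(d); // 0.3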

For more info on how floating point numbers work, you may be interested in checking out this Wikipedia article on the IEEE 754-1985 standard (which has some handy pictures and a good explanation of the mechanics), and this Wikipedia article on the updates to the standard in 2008.

edit:

First, as @phoog pointed out below, upcasting from a float to a double isn't as simple as adding another 32 bits to the space reserved to record the number. In reality, you get an additional 3 bits for the exponent (for a total of 11) and an additional 29 bits for the fraction (for a total of 52). Add in the sign bit and you've got your total of 64 bits for the double.
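
If you want to see those widths for yourself, something like the following works (a sketch; it assumes a runtime where BitConverter.SingleToInt32Bits is available, such as .NET Core or later):

float f = 0.3f;
double d = f;

// float:  1 sign bit |  8 exponent bits | 23 fraction bits
// double: 1 sign bit | 11 exponent bits | 52 fraction bits
int floatBits = System.BitConverter.SingleToInt32Bits(f);
long doubleBits = System.BitConverter.DoubleToInt64Bits(d);

Console.WriteLine(System.Convert.ToString(floatBits, 2).PadLeft(32, '0'));
Console.WriteLine(System.Convert.ToString(doubleBits, 2).PadLeft(64, '0'));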

Additionally, suggesting that there are 'garbage bits' in those least significant locations is a gross generalization, and is probably not correct for C#. A bit of explanation, and the testing below, suggests to me that this is deterministic for C#/.NET, and probably the result of some specific mechanism in the conversion rather than reserving memory for additional precision.

Way back in the beforetimes, when your code would compile into a machine-language binary, compilers (C and C++ compilers, at least) would not add any CPU instructions to 'clear' or initialize the value in memory when you reserved space for a variable. So, unless the programmer explicitly initialized a variable to some value, the values of the bits that were reserved for that location would maintain whatever value they had before you reserved that memory.

In .NET land, your C# or other .NET language compiles into an intermediate language (CIL, Common Intermediate Language), which is then Just-In-Time compiled by the CLR to execute as native code. There may or may not be a variable initialization step added by either the C# compiler or the JIT compiler; I'm not sure.

Here's what I do know:

  • I tested this by casting the float to three different doubles. Each one of the results had the exact same value.
  • That value was exactly the same as @rerun's value above: double d1 = System.Convert.ToDouble(f); result: d1 : 0.300000011920929
  • I get the same result if I cast using double d2 = (double)f; Result: d2 : 0.300000011920929

With three of us getting the same values, it looks like the upcast value is deterministic (and not actually garbage bits), indicating that .NET is doing something the same way across all of our machines. It's still true to say that the additional digits are no more or less precise than they were before, because 0.3f isn't exactly equal to 0.3 -- it's equal to 0.3, up to seven digits of precision. We know nothing about the values of additional digits beyond those first seven.
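
For reference, here's a minimal sketch of that kind of test (the variable names are just illustrative); all three conversions produce bit-for-bit identical doubles:

float f = 0.3f;

double viaConvert = System.Convert.ToDouble(f); // @rerun's version
double viaCast = (double)f;                     // explicit cast
double viaImplicit = f;                         // implicit widening conversion

Console.WriteLine(System.BitConverter.DoubleToInt64Bits(viaConvert) == System.BitConverter.DoubleToInt64Bits(viaCast));  // True
Console.WriteLine(System.BitConverter.DoubleToInt64Bits(viaCast) == System.BitConverter.DoubleToInt64Bits(viaImplicit)); // True
Console.WriteLine(viaConvert); // 0.300000011920929 (more digits on newer runtimes)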

Solution 3

I use a decimal cast to get the correct result in this case, and in other similar cases:

float ff = 99.95f;
double dd = (double)(decimal)ff;
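
This works because .NET's float-to-decimal conversion rounds to seven significant digits (the precision a float actually carries), so the noise past that point is dropped before the value is widened to double. Continuing the snippet above (the exact digits printed for the raw double can vary by runtime):

Console.WriteLine((double)ff); // approximately 99.9499969482422
Console.WriteLine(dd);         // 99.95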

Comments

  • JoshG almost 2 years ago

    I have the following code:

    float f = 0.3f;
    double d1 = System.Convert.ToDouble(f);
    double d2 = System.Convert.ToDouble(f.ToString());
    

    The results are equivalent to:

    d1 = 0.30000001192092896;
    d2 = 0.3;
    

    I'm curious to find out why this is?