#define to double - different value?

Solution 1

Because -0.148759f is not a double, it's a float. Hence it's almost certainly the differing precision that is making the difference.

Either of these two variations should give you identical results:

#define THISVALUE -0.148759
double myDouble = -0.148759;  // Both are double literals: full precision.

#define THISVALUE -0.148759f
double myDouble = -0.148759f; // Both are float literals: same (reduced) precision on both sides.
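
To make the effect visible, here's a minimal sketch (the FLOATVALUE/DOUBLEVALUE macro names are just illustrative) that prints both at full precision:

#include <stdio.h>

#define FLOATVALUE  -0.148759f  /* float literal */
#define DOUBLEVALUE -0.148759   /* double literal */

int main(void)
{
    double fromFloat  = FLOATVALUE;   /* rounded to float first, then widened */
    double fromDouble = DOUBLEVALUE;  /* double all the way through */

    printf("%.17g\n", fromFloat);   /* prints -0.14875899255275726 */
    printf("%.17g\n", fromDouble);  /* prints -0.148759 */
    return 0;
}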

IEEE754 single-precision values (commonly used for float) have only 32 bits available to them, so they have limited range and precision compared to double-precision values (which have 64 bits).

As per the Wikipedia page on IEEE754, rough figures for range and precision are:

  • For singles, a range of roughly ±10^±38 with about 7 decimal digits of precision.
  • For doubles, a range of roughly ±10^±308 with about 15 decimal digits of precision.
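
If you want to check what your own implementation provides, the limits are exposed in <float.h>; note that FLT_DIG and DBL_DIG count the digits guaranteed to survive a round trip, so they read slightly lower than the rough figures above:

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("float:  %d digits, max %g\n", FLT_DIG, FLT_MAX);  /* typically 6, ~3.4e38 */
    printf("double: %d digits, max %g\n", DBL_DIG, DBL_MAX);  /* typically 15, ~1.8e308 */
    return 0;
}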

And, as an aside, there's nowhere near as much reason to use macros nowadays, either for functions or for objects. The former can be handled with the inline suggestion and a good compiler, the latter with const int (or const double in your case), without losing any information between compilation stages (things like names and type information).
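
For example, a macro-free sketch (the constant name here is just illustrative):

#include <stdio.h>

static const double kThisValue = -0.148759;  /* typed and scoped, unlike a macro */

int main(void)
{
    double tryingIt = kThisValue;  /* double to double: no precision surprise */
    printf("%.17g\n", tryingIt);   /* prints -0.148759 */
    return 0;
}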

Solution 2

You have a trailing f in the define:

#define THISVALUE -0.148759f
                           ^
                           |

This means that the literal in question has float precision instead of the default double precision that you need. Remove that character.
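
With the suffix removed, the assignment behaves as expected:

#define THISVALUE -0.148759   /* no f suffix: a double literal */

double tryingIt = THISVALUE;  /* holds -0.148759 at full double precision */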

Comments

  • Pugz over 3 years

    Here are two different ways I'm defining the same value. I want it to exist as a 64-bit (double-precision) floating-point number (aka a double).

    #define THISVALUE -0.148759f
    
    double myDouble = -0.148759;
    

    If I perform the following operation

    double tryingIt = THISVALUE;
    

    and I look at the value during debugging or print it, I can see that tryingIt is assigned the value -0.14875899255275726

    I understand that floating point is not exact, but this is just a crazy difference that really throws off my math. Directly assigning the double, as in the top code block, gives me a value of -0.14875900000000000 in the debugger, exactly what it should be.
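
    Printing the value at full precision shows the same thing outside the debugger:

    #include <stdio.h>

    #define THISVALUE -0.148759f

    int main(void)
    {
        double tryingIt = THISVALUE;
        printf("%.17g\n", tryingIt);  /* prints -0.14875899255275726 */
        return 0;
    }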

    Any thoughts on what's up?