Why can't Double be implicitly cast to Decimal?


Solution 1

If you convert from double to decimal, you can lose information - the number may be completely out of range, as the range of a double is much larger than the range of a decimal.
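
For instance, a double can hold values far beyond decimal's maximum of roughly 7.9 × 10^28; a minimal sketch:

    double big = 1e30;          // far beyond decimal's maximum (~7.9e28)
    // decimal bad = big;       // CS0029: no implicit conversion exists
    decimal d = (decimal)big;   // compiles, but throws OverflowException at runtime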

If you convert from decimal to double, you can lose information - for example, 0.1 is exactly representable in decimal but not in double, and decimal actually uses a lot more bits for precision than double does.
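
A quick sketch of the precision side: 17 significant digits fit comfortably in a decimal but not in a double:

    decimal exact = 1.0000000000000001m; // 17 significant digits: fine for decimal
    double lossy = (double)exact;        // the nearest double is exactly 1.0
    Console.WriteLine(lossy);            // prints 1; the final digit is gone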

Implicit conversions shouldn't lose information (the conversion from long to double might, but that's a different argument). If you're going to lose information, you should have to tell the compiler that you're aware of that, via an explicit cast.

That's why there aren't implicit conversions either way.

Solution 2

Decimal is more precise than double, so you would lose information converting from decimal to double. That's why you can only do it explicitly; the cast is there to protect you from losing information accidentally. See MSDN:

http://msdn.microsoft.com/en-us/library/678hzkk9%28v=VS.100%29.aspx

http://msdn.microsoft.com/en-us/library/364x0z75.aspx

Solution 3

You can explicitly cast in both directions: from double to decimal and from decimal to double.

You can't implicitly convert in either direction for a very good reason: the conversion may not be loss-less.

For example, the decimal number 1234567890123456789 cannot be exactly represented as a double. Likewise, the double value 10^32 is outside the range of decimal entirely (decimal's maximum is roughly 7.9 × 10^28).
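
Both failure modes are easy to reproduce; a minimal sketch:

    decimal d = 1234567890123456789m;
    double x = (double)d;               // the nearest double is 1234567890123456768
    Console.WriteLine(d == (decimal)x); // False: the low-order digits were lost

    double huge = 1e32;                 // representable as a double
    decimal oops = (decimal)huge;       // throws OverflowException: outside decimal's range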

To avoid losing information unintentionally, the implicit conversion is disallowed.


Comments

  • Maxim Gershkovich almost 2 years

    I don't understand the casting rules when it comes to decimal and double.

    It is legal to do this

    decimal dec = 10;
    double doub = (double) dec;
    

    What confuses me, however, is that decimal is a 16-byte datatype and double is 8 bytes, so isn't casting a double to a decimal a widening conversion? Shouldn't it therefore be allowed implicitly, with the example above disallowed?

    double doub = 3.2;
    decimal dec = doub; // CS0029: Cannot implicitly convert type 'double' to 'decimal'
    
  • Eric Lippert over 12 years
    You are of course correct, but I'll take this opportunity to point out that there are, unfortunately, a few built-in implicit conversions that lose information -- long to double, for example. None of the built-in implicit conversions lose magnitude, but some of them lose precision. We could have made the magnitude-preserving-but-precision-losing conversion from decimal to double also implicit, but chose not to.
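
    For instance, two distinct long values can collapse to the same double even though the conversion is implicit (a small illustration):

    long a = long.MaxValue;      // 9223372036854775807
    long b = a - 1;
    double da = a;               // implicit conversion: no cast required
    double db = b;
    Console.WriteLine(da == db); // True: both round to the same double, 2^63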
  • Eric Lippert over 12 years
    The reasoning here is not just that you can lose information; it is that the conversion is fundamentally a goofy thing to do. A double is intended to represent something like an imprecise physical quantity, like a scientific measurement. A decimal is intended to represent an exact quantity, like a stock price or a mortgage balance. If you are converting one to the other -- say, you are converting stock prices to double in order to use them with a statistical analysis library written to take doubles -- then you should be clear that you intend the precision-losing conversion.
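
    Sticking with that stock-price scenario, the explicit cast documents the intent at the call site (a sketch with made-up data):

    decimal[] prices = { 101.25m, 101.30m, 101.10m };            // illustrative values only
    double[] samples = Array.ConvertAll(prices, p => (double)p); // deliberate, visible precision loss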