Is a double really unsuitable for money?
Solution 1
Very, very unsuitable. Use decimal.
double x = 3.65, y = 0.05, z = 3.7;
Console.WriteLine((x + y) == z); // false
(example from Jon's page here - recommended reading ;-p)
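For contrast, here is the same arithmetic done with decimal literals (note the m suffix) — a minimal sketch showing that the comparison succeeds when base-10 fractions are stored exactly:

```csharp
using System;

class DecimalComparison
{
    static void Main()
    {
        // Same values as above, but as decimal literals.
        decimal x = 3.65m, y = 0.05m, z = 3.7m;
        // decimal equality compares numeric value, so 3.70m == 3.7m.
        Console.WriteLine((x + y) == z); // True
    }
}
```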
Solution 2
You will get odd errors effectively caused by rounding. In addition, comparisons with exact values are extremely tricky - you usually need to apply some sort of epsilon to check for the actual value being "near" a particular one.
Here's a concrete example:
using System;

class Test
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(y == 0.3); // Prints False
    }
}
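The "epsilon" comparison mentioned above can be sketched like this (the tolerance value and the helper name are chosen for illustration, not taken from any standard API):

```csharp
using System;

class EpsilonCompare
{
    // Tolerance picked for this example; a real application would
    // choose one appropriate to its scale of values.
    const double Epsilon = 1e-9;

    static bool NearlyEqual(double a, double b) =>
        Math.Abs(a - b) < Epsilon;

    static void Main()
    {
        double y = 0.1 + 0.1 + 0.1;
        Console.WriteLine(y == 0.3);            // False: exact comparison fails
        Console.WriteLine(NearlyEqual(y, 0.3)); // True: within tolerance
    }
}
```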
Solution 3
Yes it's unsuitable.
If I remember correctly, double has about 17 significant digits, so normally rounding errors take place far behind the decimal point. Most financial software uses 4 digits behind the decimal point; that leaves 13 digits to work with, so the maximum number you can use in single operations is still very much higher than the US national debt. But rounding errors will add up over time. If your software runs for a long time, you'll eventually start losing cents. Certain operations make this worse. For example, adding large amounts to small amounts causes a significant loss of precision.
You need fixed-point datatypes for money operations. Most people don't mind if you lose a cent here and there, but accountants aren't like most people.
edit
According to this site http://msdn.microsoft.com/en-us/library/678hzkk9.aspx doubles actually have 15 to 16 significant digits, not 17.
@Jon Skeet decimal is more suitable than double because of its higher precision: 28 or 29 significant digits. That means less chance of accumulated rounding errors becoming significant. Fixed-point datatypes (i.e. integers that represent cents or 100ths of a cent, as I've seen used) like Boojum mentions are actually better suited.
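The fixed-point approach mentioned above can be sketched by keeping all amounts as an integer count of cents and formatting only for display (variable names are hypothetical):

```csharp
using System;

class Cents
{
    static void Main()
    {
        // Store money as whole cents; arithmetic on integers is exact.
        long priceCents = 365; // $3.65
        long taxCents = 5;     // $0.05
        long totalCents = priceCents + taxCents;

        Console.WriteLine(totalCents == 370); // True

        // Format for display only; negative amounts would need extra care.
        Console.WriteLine($"{totalCents / 100}.{totalCents % 100:D2}"); // 3.70
    }
}
```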
Solution 4
Since decimal uses a scaling factor of multiples of 10, numbers like 0.1 can be represented exactly. In essence, the decimal type represents this as 1 / 10^1, whereas a double would represent it as 104857 / 2^20 (in reality, the nearest double to 0.1 is 3602879701896397 / 2^55).
A decimal can exactly represent any base-10 value with up to 28/29 significant digits (like 0.1). A double can't.
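The difference shows up directly when summing tenths; a short sketch:

```csharp
using System;

class TenthSums
{
    static void Main()
    {
        double d = 0.1;
        decimal m = 0.1m;

        Console.WriteLine(d + d + d == 0.3);  // False: 0.1 has no exact base-2 form
        Console.WriteLine(m + m + m == 0.3m); // True: 1/10 is exact in base 10

        // Printing all 17 digits reveals the stored double
        // is not exactly one tenth.
        Console.WriteLine(d.ToString("G17"));
    }
}
```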
Solution 5
My understanding is that most financial systems express currency using integers -- i.e., counting everything in cents.
IEEE double precision actually can represent all integers exactly in the range -2^53 through +2^53. (Hacker's Delight, pg. 262) If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division or more complex operations, however.
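The edge of that exact-integer range is easy to demonstrate: beyond 2^53 a double can no longer distinguish adjacent integers, as this sketch shows:

```csharp
using System;

class IntegerLimit
{
    static void Main()
    {
        double max = Math.Pow(2, 53); // 9007199254740992

        // 2^53 + 1 is not representable; it rounds back to 2^53.
        Console.WriteLine(max == max + 1); // True

        // Just below 2^53, adjacent integers are still distinct.
        Console.WriteLine(max - 1 == max); // False
    }
}
```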
Jon Gretar
I'm both a Mac and Windows user. I like programming web applications. My languages of choice are Python, Javascript and C#.
Updated on June 19, 2022

Comments
-
Jon Gretar almost 2 years
I always say that in C# a variable of type double is not suitable for money: all sorts of weird things can happen. But I can't seem to create an example that demonstrates some of these issues. Can anyone provide such an example?
(Edit: this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal.)
(Edit 2: I was specifically asking for some C# code, so I don't think this is language-agnostic only.)
-
Jon Skeet over 15 years
Darn it, if I'd known I had an example on my own page, I wouldn't have come up with a different one ;)
-
Jon Skeet over 15 years
Note that System.Decimal, the suggested type to use in .NET, is still a floating point type - but it's a floating decimal point rather than a floating binary point. That's more important than having fixed precision in most cases, I suspect.
-
Jon Skeet over 15 years
Decimal doesn't have 96 significant digits. It has 96 significant bits. Decimal has around 28 significant digits.
-
Adam Davis over 15 years
In which language are you speaking of the decimal type? Or do all languages that support this type support it in exactly the same way? Might want to specify.
-
Marc Gravell over 15 years
@Adam - this post originally had the C# tag, so we are talking about System.Decimal specifically.
-
Richard Poole over 15 years
Oops, well spotted Jon! Corrected. Adam, I'm talking C#, as per the question. Do any other languages have a type called decimal?
-
Jon Skeet over 15 years
If you're only going to use integers though, why not use an integer type to start with?
-
Steve Jessop over 15 years
Heh - int64_t can represent all integers exactly in the range -2^63 to +2^63-1. If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division, however.
-
Daniel Pryden over 14 years
Careful. Any floating-point representation will have rounding errors, decimal included. It's just that decimal will round in ways that are intuitive to humans (and generally appropriate for money), and binary floating point won't. But for non-financial number-crunching, double is often much, much better than decimal, even in C#.
-
awe over 14 years
@Richard: Well, all languages that are based on .NET do, since System.Decimal is not a C#-specific type; it is a .NET type.
-
Richard Poole over 14 years
@awe - I meant non-.NET languages. I'm ignorantly unaware of any that have a native base 10 floating point type, but I have no doubt they exist.
-
Jeffrey Hantin over 13 years
That's precisely the issue. Currency is nowadays typically decimal. Back before the US stock markets decimalized, however, binary fractions were in use (I started seeing 256ths and even 1024ths at one point) and so doubles would have been more appropriate than decimals for stock prices! Pre-decimalization pounds sterling would have been a real pain though at 960 farthings to the pound; that's neither decimal nor binary, but it certainly provides a generous variety of prime factors for easy fractions.
-
Gabe about 13 years
Even more important than just being a decimal floating point: with decimal, the expression x + 1 != x is always true. Also, it retains precision, so you can tell the difference between 1 and 1.0.
-
supercat almost 12 years
Some antiquated systems which are (alas?) still in use support double, but do not support any 64-bit integer type. I would suggest that performing calculations as double, scaled so that any semantically-required rounding will always be to whole units, is apt to be the most efficient approach.
-
supercat about 11 years
@Gabe: Those properties are only meaningful if one scales one's values so that a value of 1 represents the smallest currency unit. A Decimal value may lose precision to the right of the decimal point without indicating any problem.
-
MuertoExcobito almost 9 years
Linking an article you wrote that disagrees with decades of common practice and expert opinion that floating-point is unsuitable for representing financial transactions is going to need a little more backup than a single page.
-
vikingben over 7 years
If you are consuming a service that returns double currency values that you cannot control, are there gotchas to think about when converting them to decimal? Precision loss, etc.
-
Jon Skeet over 7 years
@vikingben: Absolutely - fundamentally, that's a broken way of doing things, and you need to work out how you're best to interpret the data.
-
user207421 almost 6 years
double has 15.9 significant decimal digits considering integer values only. The situation after the decimal point is value-dependent.
-
Lujun Weng about 2 years
Makes much more sense