Difference between decimal, float and double in .NET?


Solution 1

float and double are floating binary point types. In other words, they represent a number like this:

10001.10010110011

The binary number and the location of the binary point are both encoded within the value.

decimal is a floating decimal point type. In other words, it represents a number like this:

12345.65789

Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.

The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
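A quick sketch (my illustration, not part of the original answer) that makes both kinds of approximation visible; the "G17" format is one way to reveal a double's full stored value:

double d = 0.1;                        // nearest double to 0.1, not exactly 0.1
Console.WriteLine(d.ToString("G17"));  // 0.10000000000000001
Console.WriteLine(0.1 + 0.2 == 0.3);   // False - binary can't represent these exactly

decimal m = 1m / 3;                    // nearest decimal to 1/3, not exactly 1/3
Console.WriteLine(m);                  // 0.3333333333333333333333333333
Console.WriteLine(m * 3 == 1m);        // False - decimal can't represent 1/3 exactly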

As for what to use when:

  • For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

  • For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
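To make the financial case concrete, here is a minimal sketch (my example, not the answerer's): repeatedly adding ten cents drifts in binary floating point but stays exact in decimal:

double dSum = 0;
decimal mSum = 0;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;    // accumulates a tiny binary representation error each time
    mSum += 0.1m;   // 0.1m is exact in decimal
}
Console.WriteLine(dSum == 1.0);           // False
Console.WriteLine(dSum.ToString("G17"));  // 0.99999999999999989
Console.WriteLine(mSum == 1.0m);          // True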

Solution 2

Precision is the main difference.

float: ~7 significant digits (32 bits)

double: ~15-16 significant digits (64 bits)

decimal: 28-29 significant digits (128 bits)

Decimals have much higher precision and are usually used in financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double/float.

Decimals and floats/doubles cannot be compared without a cast, whereas floats and doubles can. Decimals also allow the encoding of trailing zeros.

float flt = 1F / 3;    // binary floating point, ~7 significant digits
double dbl = 1D / 3;   // binary floating point, ~15-16 significant digits
decimal dcm = 1M / 3;  // decimal floating point, 28-29 significant digits
// flt == dbl compiles (float widens to double), but dcm == dbl does not
// compile without an explicit cast.
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);

Result:

float: 0.3333333  
double: 0.333333333333333  
decimal: 0.3333333333333333333333333333
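The "up to 20x" figure varies with hardware and runtime; a crude micro-benchmark sketch along these lines (my code, not a rigorous test) shows the gap on most machines:

var sw = System.Diagnostics.Stopwatch.StartNew();
double d = 0;
for (int i = 0; i < 10_000_000; i++) d += 0.000001;  // hardware FPU arithmetic
sw.Stop();
Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

sw.Restart();
decimal m = 0;
for (int i = 0; i < 10_000_000; i++) m += 0.000001m; // software base-10 arithmetic
sw.Stop();
Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");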

Solution 3

+---------+----------------+---------+----------+---------------------------------------------------------+
| C#      | .Net Framework | Signed? | Bytes    | Possible Values                                         |
| Type    | (System) type  |         | Occupied |                                                         |
+---------+----------------+---------+----------+---------------------------------------------------------+
| sbyte   | System.SByte   | Yes     | 1        | -128 to 127                                             |
| short   | System.Int16   | Yes     | 2        | -32,768 to 32,767                                       |
| int     | System.Int32   | Yes     | 4        | -2,147,483,648 to 2,147,483,647                         |
| long    | System.Int64   | Yes     | 8        | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |
| byte    | System.Byte    | No      | 1        | 0 to 255                                                |
| ushort  | System.UInt16  | No      | 2        | 0 to 65,535                                             |
| uint    | System.UInt32  | No      | 4        | 0 to 4,294,967,295                                      |
| ulong   | System.UInt64  | No      | 8        | 0 to 18,446,744,073,709,551,615                         |
| float   | System.Single  | Yes     | 4        | Approximately ±1.5e-45 to ±3.4e38                       |
|         |                |         |          |  with ~6-9 significant figures                          |
| double  | System.Double  | Yes     | 8        | Approximately ±5.0e-324 to ±1.7e308                     |
|         |                |         |          |  with ~15-17 significant figures                        |
| decimal | System.Decimal | Yes     | 16       | Approximately ±1.0e-28 to ±7.9e28                       |
|         |                |         |          |  with 28-29 significant figures                         |
| char    | System.Char    | N/A     | 2        | Any Unicode character (16 bit)                          |
| bool    | System.Boolean | N/A     | 1 / 2    | true or false                                           |
+---------+----------------+---------+----------+---------------------------------------------------------+
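The "Bytes Occupied" column can be verified directly; sizeof is permitted on these built-in types without an unsafe context:

Console.WriteLine(sizeof(float));    // 4
Console.WriteLine(sizeof(double));   // 8
Console.WriteLine(sizeof(decimal));  // 16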

See the Microsoft documentation on the built-in numeric types for more information.

Solution 4

The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons:

  • A certain loss of precision is acceptable in many scientific calculations because of the practical limits of the physical problem or artifact being measured. Loss of precision is not acceptable in finance.
  • Decimal is much (much) slower than float and double for most operations, primarily because floating point operations are done in binary, whereas Decimal arithmetic is done in base 10 (i.e. floats and doubles are handled by FPU/SIMD hardware such as x87 and SSE, whereas decimals are calculated in software).
  • Decimal has a far smaller value range than double, despite supporting more digits of precision. Therefore, Decimal can't be used to represent many scientific values.
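A short sketch of the range point (my example, with an arbitrary magnitude): values that are routine for a double overflow a decimal when converted:

double big = 1e40;             // far below double.MaxValue (~1.7e308)
try
{
    decimal m = (decimal)big;  // decimal.MaxValue is only ~7.9e28
    Console.WriteLine(m);
}
catch (OverflowException)
{
    Console.WriteLine("1e40 does not fit in a decimal");  // this branch runs
}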

Solution 5

I won't reiterate the tons of good (and some bad) information already covered in other answers and comments, but I will answer your follow-up question with a tip:

When would someone use one of these?

Use decimal for counted values

Use float/double for measured values

Some examples:

  • money (do we count money or measure money?)

  • distance (do we count distance or measure distance? *)

  • scores (do we count scores or measure scores?)

We always count money and should never measure it. We usually measure distance. We often count scores.

* In some cases, what I would call nominal distance, we may indeed want to 'count' distance. For example, maybe we are dealing with country signs that show distances to cities, and we know that those distances never have more than one decimal digit (xxx.x km).
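A minimal sketch of this rule of thumb (my own illustration, with made-up values):

decimal price = 19.99m;      // counted: an exact number of cents
decimal total = price * 3;   // exactly 59.97, no drift
Console.WriteLine(total);

double marathonKm = 42.195;  // measured: inherently approximate
double paceMinPerKm = 5.2;   // also measured
Console.WriteLine(marathonKm * paceMinPerKm);  // an approximation is acceptable here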


Comments

  • PC Luddite
    PC Luddite almost 2 years

    What is the difference between decimal, float and double in .NET?

    When would someone use one of these?

  • BrainSlugs83
    BrainSlugs83 almost 13 years
    They sure can! They also have a couple of "magic" values such as Infinity, Negative Infinity, and NaN (not a number) which make it very useful for detecting vertical lines while computing slopes... Further, if you need to decide between calling float.TryParse, double.TryParse, and decimal.TryParse (to detect if a string is a number, for example), I recommend using double or float, as they will parse "Infinity", "-Infinity", and "NaN" properly, whereas decimal will not.
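    For example (a small sketch of that TryParse difference; the invariant culture is used here because the accepted symbols are culture-dependent):

    using System.Globalization;
    var inv = CultureInfo.InvariantCulture;
    Console.WriteLine(double.TryParse("NaN", NumberStyles.Float, inv, out _));       // True
    Console.WriteLine(double.TryParse("Infinity", NumberStyles.Float, inv, out _));  // True
    Console.WriteLine(decimal.TryParse("NaN", NumberStyles.Float, inv, out _));      // False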
  • Hammad Khan
    Hammad Khan over 12 years
    This answer needs to be corrected. Precision for Decimal is not 128 bits but infinite because the format is essentially different from float. @Skeet answer is the best. Example: 0.1 = 0.099999.... in float but in decimal it is 0.1, that is infinite precision. If you were to use 128 bits of precision like in floats, you would get 0.999999.... (up to 29 digits) but that is still not as precise as decimal 0.1
  • Erik P.
    Erik P. over 12 years
    @Thecrocodilehunter: sorry, but no. Decimal can represent all numbers that can be represented in decimal notation, but not 1/3 for example. 1.0m / 3.0m will evaluate to 0.33333333... with a large but finite number of 3s at the end. Multiplying it by 3 will not return an exact 1.0.
  • Hammad Khan
    Hammad Khan over 12 years
    This is a fault with the number itself (0.3333... in this case), not its decimal representation, where it is reproduced 100% faithfully. When you introduce an error into the number, nobody can remove it (not even decimal numbers). The only way to remove error from this number is to use 1/3, not 0.333. Some calculators might take 1/3 as a mid value but most of them don't. Try this: represent 0.3333 in floating point, you will end up with 0.3332999998..., which is not 0.3333 (you see the error). Now represent this in decimal: it is 0.3333 (exactly as it is, no error - 100% accurate).
  • Igby Largeman
    Igby Largeman over 12 years
    @Thecrocodilehunter: I think you're confusing accuracy and precision. They are different things in this context. Precision is the number of digits available to represent a number. The more precision, the less you need to round. No data type has infinite precision.
  • Hammad Khan
    Hammad Khan over 12 years
    @IgbyLargeman Precision and accuracy are used in the context of measuring a value with an instrument. In this case we are not talking about any instrument. We are only talking about representing a value faithfully in decimal vs floating point. Precision does not apply here, as we are not talking about consistency of measuring the same value over and over. But accuracy does. The accuracy of decimal on a number that is in its range is 100%, that is infinite accuracy.
  • Daniel Pryden
    Daniel Pryden over 12 years
    @Thecrocodilehunter: You're assuming that the value that is being measured is exactly 0.1 -- that is rarely the case in the real world! Any finite storage format will conflate an infinite number of possible values to a finite number of bit patterns. For example, float will conflate 0.1 and 0.1 + 1e-8, while decimal will conflate 0.1 and 0.1 + 1e-29. Sure, within a given range, certain values can be represented in any format with zero loss of accuracy (e.g. float can store any integer up to 1.6e7 with zero loss of accuracy) -- but that's still not infinite accuracy.
  • Hammad Khan
    Hammad Khan over 12 years
    @DanielPryden, OK, I believe decimal will represent 0.1 as 0.1, not as 0.1 + 1e-29. This is because the format is essentially different from float. That is why it is very slow but accurate. If you were right then decimal would be useless. Remember the main problem with float: the condition if (0.1 == 0.1) does not hold true when we think it should be true. In decimal it will ALWAYS be true because 0.1 will be 0.1 and nothing else. For example it will not be 0.99999999999999999999999999999.
  • Daniel Pryden
    Daniel Pryden over 12 years
    @Thecrocodilehunter: You missed my point. 0.1 is not a special value! The only thing that makes 0.1 "better" than 0.10000001 is because human beings like base 10. And even with a float value, if you initialize two values with 0.1 the same way, they will both be the same value. It's just that that value won't be exactly 0.1 -- it will be the closest value to 0.1 that can be exactly represented as a float. Sure, with binary floats, (1.0 / 10) * 10 != 1.0, but with decimal floats, (1.0 / 3) * 3 != 1.0 either. Neither is perfectly precise.
  • Hammad Khan
    Hammad Khan over 12 years
    @DanielPryden, with decimal numbers, it will be exactly 0.1. Of course it is not about 0.1 only. A large number of decimal numbers have this problem. The fact is that in decimal (0.1 == 0.1) will always be true. In float it may or may not be true because the actual binary value may not be exactly 0.1.
  • Daniel Pryden
    Daniel Pryden over 12 years
    @Thecrocodilehunter: You still don't understand. I don't know how to say this any more plainly: In C, if you do double a = 0.1; double b = 0.1; then a == b will be true. It's just that a and b will both not exactly equal 0.1. In C#, if you do decimal a = 1.0m / 3.0m; decimal b = 1.0m / 3.0m; then a == b will also be true. But in that case, neither of a nor b will exactly equal 1/3 -- they will both equal 0.3333.... In both cases, some accuracy is lost due to representation. You stubbornly say that decimal has "infinite" precision, which is false.
  • Daniel Pryden
    Daniel Pryden over 12 years
    @Thecrocodilehunter: Just in case you don't believe me, here's some sample code that shows that 0.1 == 0.1.
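    (The linked sample isn't reproduced here; a minimal equivalent of what it demonstrates might be:)

    double a = 0.1;
    double b = 0.1;
    Console.WriteLine(a == b);  // True - both hold the same nearest-to-0.1 double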
  • Chibueze Opata
    Chibueze Opata almost 12 years
    This should have been marked as the correct answer. Jon Skeet's answer's a bit confusing...
  • naveen jayanna
    naveen jayanna over 11 years
    Out of curiosity, what was the raw value of cellValue.ToString()? Decimal.TryParse("0.00006317592", out val) seems to work...
  • Brian
    Brian over 11 years
    @ChibuezeOpata: Skeet's answer discusses a completely separate difference which this answer completely ignores. Personally, I consider Skeet's answer to be more valuable, as his answer is more relevant in deciding which data type to use.
  • Chibueze Opata
    Chibueze Opata over 11 years
    @Brian They are both very valuable, and that is why I said ab initio that they are incomplete without each other. Concerning the question asked however, this answer simply goes straight to the point and tells you the essential differences. You can make almost all the deductions in Jon Skeet's answer from this one. :)
  • weston
    weston almost 11 years
    -1 Don't get me wrong, if true, it's very interesting but this is a separate question, it's certainly not an answer to this question.
  • svick
    svick almost 11 years
    @ChibuezeOpata No, you can't, because this answer doesn't even mention the decimal/binary distinction.
  • Erik Funkenbusch
    Erik Funkenbusch almost 11 years
    @DanielPryden - I know this is an old issue, but maybe I can help clarify. The issue here is that Decimal numbers are 100% accurate when representing numbers that are within the precision of the decimal format. That is, not the result of pi, or 1/3, or 2/3. That's irrelevant because those numbers require greater precision than decimal can represent. If you do a calculation on a decimal value that exceeds the precision, then all bets are off. With float/double numbers that ARE within the precision of the format are not always 100% accurate. .1 for example.
  • Daniel Pryden
    Daniel Pryden almost 11 years
    @MystereMan: what do you mean by "within the precision of the decimal format"? If the number you are measuring is exactly an integer raised to a power of ten, then absolutely use a decimal. Many numbers encountered in everyday life have this property (because they are discrete, not continuous, measurements), but many others do not. The correct data type for any purpose always depends on the purpose. Please don't mistake anything I'm saying here as implying that anyone should always use floats -- I'm merely arguing that one shouldn't blindly always use decimals instead.
  • Daniel Pryden
    Daniel Pryden almost 11 years
    @MystereMan: I think part of your confusion is betrayed by the phrase "within the precision of the format". I don't think that makes sense -- do you mean something like "within the representable range" instead? But even that doesn't prove anything: 0.1 is not any more "within the precision" of a double than 2^53+1 is, and both can be represented equally faithfully.
  • Dan Nissenbaum
    Dan Nissenbaum almost 11 years
    Pretty much every time the issue of precision of floating point representation (be it decimal or binary) comes up, there ensues a long conversation of comments at cross-purposes. Fundamentally this is due to the question of whether the exact value represented by the floating point representation corresponds to the same exact value in the real world. This cannot be known by looking at the representation of the number itself; it can only be known by the humans that use the representation.
  • David Mårtensson
    David Mårtensson almost 11 years
    Here is a small code example for C# (which this article is about) that visualizes the problem (using decimal & float): (0.1f == 1f/10) and (0.1m == 1m/10). The first will evaluate to false while the second will evaluate to true, even though both should evaluate to true. This is due to the fact that float cannot exactly store the value 0.1.
  • supercat
    supercat over 10 years
    @DavidMårtensson: Why should 0.1f not equal 1f/10? Should not both evaluate to 13421773/134217728?
  • David Mårtensson
    David Mårtensson over 10 years
    @supercat Because of how float works internally, 0.1f cannot be exactly represented in the internal binary format, and because calculations use more precision internally, 1f/10 will not land on the same rounded value as 0.1f; hence they will not be equal.
  • supercat
    supercat over 10 years
    @DavidMårtensson: The compile-time type of the expression 1f/10 is float. Are you saying that compilers are not required to round the result of the division to the nearest float before performing the comparison? I regard as somewhat broken the fact that one is allowed to directly compare a float to anything else, or a double to anything else other than 32-bit-or-smaller integers [I think a cast should be required] but I would consider severely broken a compiler that performed what was by the rules of the language a float/float comparison as though it were a float/double comparison.
  • supercat
    supercat over 10 years
    @DavidMårtensson: (Incidentally, what I'd like to see would be a language with both "loose" and "strict" 32- and 64-bit floating-point types, where the strict ones would not accept any implicit conversions and the "loose" one would be defined as extending operation results to double and would generally allow implicit down-conversions to float, but would disallow direct comparisons between 32-bit and 64-bit values. I would posit that while C# will have no qualm about double d1=f1*f2; it would be rare for the programmer to actually intend that d1 might hold a float-precision result.)
  • David Mårtensson
    David Mårtensson over 10 years
    The IEEE standard for binary floating point does not mandate strict decimal precision, see Mark Jones' answer below. It is not defined by the language. If you require strict rounding you should use the decimal datatype, which is a decimal floating point, as Jon Skeet points out in Mehrdad's answer below. The different types have different uses and different requirements. When calculating real-world values in physics, for example, your original numbers are probably less precise than your compiler, so the computational errors will usually have less impact than measurement errors.
  • Matt
    Matt about 10 years
    Yet another attempt to try to hit the nail on the head: Both float and double can exactly represent fractions of the form p/q where q is a power of 2. E.g. 0.5, 3.25, 1/256, etc. decimal however can exactly represent fractions of the form p/q where q is a power of 10 (ten). See this answer. Though it is correct that decimal has more significant digits, it is misleading to leave it at that; the representation is fundamentally different than float and double which lends decimal to precise decimal calculations.
  • supercat
    supercat almost 10 years
    @RogerLipscombe: I would consider double proper in accounting applications in those cases (and basically only those cases) where no integer type larger than 32 bits was available, and the double was being used as though it were a 53-bit integer type (e.g. to hold a whole number of pennies, or a whole number of hundredths of a cent). Not much use for such things nowadays, but many languages gained the ability to use double-precision floating-point values long before they gained 64-bit (or in some cases even 32-bit!) integer math.
  • Randall Sutton
    Randall Sutton almost 10 years
    Precision is not the main difference. Decimal being base 10 is the main difference.
  • Mingwei Samuel
    Mingwei Samuel almost 10 years
    float/double usually do not represent numbers as 101.101110; normally they are represented as something like 1101010 * 2^(01010010), i.e. with an exponent
  • Jon Skeet
    Jon Skeet almost 10 years
    @Hazzard: That's what the "and the location of the binary point" part of the answer means.
  • SergioL
    SergioL over 9 years
    Maybe because the Excel cell was returning a double and ToString() value was "6.31759E-05" therefore the decimal.Parse() didn't like the notation. I bet if you checked the return value of Decimal.TryParse() it would have been false.
  • supercat
    supercat over 9 years
    float.MaxValue+1 == float.MaxValue, just as decimal.MaxValue+0.1D == decimal.MaxValue. Perhaps you meant something like float.MaxValue*2?
  • GorkemHalulu
    GorkemHalulu over 9 years
    @supercat But it is not true that decimal.MaxValue + 1 == decimal.MaxValue
  • GorkemHalulu
    GorkemHalulu over 9 years
    @supercat decimal.MaxValue + 0.1m == decimal.MaxValue ok
  • supercat
    supercat over 9 years
    The System.Decimal throws an exception just before it becomes unable to distinguish whole units, but if an application is supposed to be dealing with e.g. dollars and cents, that could be too late.
  • Rama
    Rama over 9 years
    Your answer implies precision is the only difference between these data types. Given binary floating point arithmetic is typically implemented in hardware FPU, performance is a significant difference. This may be inconsequential for some applications, but is critical for others.
  • Brett Caswell
    Brett Caswell over 9 years
    I'm surprised it hasn't been said already: float is a C# alias keyword and isn't a .NET type; it's System.Single. Single and double are floating binary point types.
  • BrainSlugs83
    BrainSlugs83 about 9 years
    -1 while the main difference between float and double is precision, the main difference between float, double, and decimal is not. It's true that decimal does have a wider precision, but more importantly, it also stores the values in a decimal-centric format, as opposed to float and double, which store their values in binary-centric format. To give an example, the number ".75" in decimal is equivalent to ".11" in binary, because one half plus one fourth == three fourths. Naturally, some fractional decimal values (even within the ~7 digit range) can only be approximated by double and float.
  • BrainSlugs83
    BrainSlugs83 about 9 years
    @supercat double is never proper in accounting applications. Because Double can only approximate decimal values (even within the range of its own precision). This is because double stores the values in a base-2 (binary)-centric format.
  • BrainSlugs83
    BrainSlugs83 about 9 years
    You left out the biggest difference, which is the base used for the decimal type (decimal is stored as base 10, all other numeric types listed are base 2).
  • supercat
    supercat about 9 years
    @BrainSlugs83: Use of floating-point types to hold non-whole-number quantities would be improper, but it was historically very common for languages to have floating-point types that could precisely represent larger whole-number values than their integer types could represent. Perhaps the most extreme example was Turbo-87 whose only integer types were limited to -32768 to +32767, but whose Real could IIRC represent values up to 1.8E+19 with unit precision. I would think it would be much saner for an accounting application to use Real to represent a whole number of pennies than...
  • supercat
    supercat about 9 years
    ...for it to try to perform multi-precision math using a bunch of 16-bit values. For most other languages the difference wasn't that extreme, but for a long time it has been very common for languages not to have any integer type that went beyond 4E9 but have a double type which had unit accuracy up to 9E15. If one needs to store whole numbers which are bigger than the largest available integer type, using double is apt to be simpler and more efficient than trying to fudge multi-precision math, especially given that while processors have instructions to perform 16x16->32 or...
  • supercat
    supercat about 9 years
    ...32x32->64 multiplication, programming languages generally don't.
  • Robino
    Robino almost 9 years
    @weston Answers often complement other answers by filling in nuances they have missed. This answer highlights a difference in terms of parsing. It is very much an answer to the question!
  • phoog
    phoog almost 9 years
    @Matt decimal can exactly represent fractions of the form p/q when q is a power of 2 or a power of 5 (i.e., prime factors of 10). Consider 1/2 (0.5) and 1/5 (0.2), for example; neither denominator is a power of 10.
  • phoog
    phoog almost 9 years
    @hmd consider a floating-point base-3 system, where 1/10 (rather 1/101) is an infinitely repeating fraction: 0.00220022.... However, 1/3 is not; it is 0.1. Consider Matt's comment: Fractions that can be exactly represented in a given base are those that use the prime factors of the base. Decimal does not have infinite precision; it has 28 decimal digits of precision. If it truly had infinite precision, you would be able to represent half of 0.0000000037252902984619140625m. But you can't; dividing that by 2 gives 0.0000000018626451492309570312m instead of 0.00000000186264514923095703125
  • Matt
    Matt almost 9 years
    @phoog There is no requirement that p/q be in simplest form. In your examples, 1/2=5/10 and 1/5=2/10 and therefore have exact decimal representations. Another example is 1/20=0.05 in which the denominator is neither a power of 2, 5 or 10. You said "decimal can exactly represent fractions of the form p/q when q is a power of 2 or a power of 5". Though technically correct, this is actually more restrictive than what I said because 1/10, for example, cannot be written in the form p/q where q is a power of 2 or 5.
  • phoog
    phoog almost 9 years
    @hmd In addition to the values, like 0.1, that can be represented as decimal but not as double, there are some values that can be exactly represented as double but not decimal. Consider the fraction 1 / 2^31. The decimal representation is truncated, while the double representation is exact. The .NET string representation of the double is not exact, but the in-memory bit representation is exact. Jon Skeet has a class that will convert any double to the exact decimal string representation, which can be quite long: csharpindepth.com/Articles/General/FloatingPoint.aspx
  • phoog
    phoog almost 9 years
    @Matt I also oversimplified. The real requirement is that after reducing the fraction to its simplest form, q is the product of a power of 2 and a power of 5; that is, q's unique prime factors must be the same as or a subset of the unique prime factors of the base. You can of course equivalently recast that as all of q's prime factors being either a prime factor of the base or a divisor of p.
  • deegee
    deegee almost 9 years
    The value ranges for the Single and Double are not depicted correctly in the above image or the source forum post. Since we can't easily superscript the text here, use the caret character: Single should be 10^-45 and 10^38, and Double should be 10^-324 and 10^308. Also, MSDN has the float with a range of -3.4x10^38 to +3.4x10^38. Search MSDN for System.Single and System.Double in case of link changes. Single: msdn.microsoft.com/en-us/library/b1e65aza.aspx Double: msdn.microsoft.com/en-us/library/678hzkk9.aspx
  • BenKoshy
    BenKoshy over 8 years
    wait....isn't a decimal represented in 1s and 0s eventually? I thought computers could only work in binary form. so then a decimal is eventually a binary type isn't it?
  • Jon Skeet
    Jon Skeet over 8 years
    @BKSpurgeon: Well, only in the same way that you can say that everything is a binary type, at which point it becomes a fairly useless definition. Decimal is a decimal type in that it's a number represented as an integer significand and a scale, such that the result is significand * 10^scale, whereas float and double are significand * 2^scale. You take a number written in decimal, and move the decimal point far enough to the right that you've got an integer to work out the significand and the scale. For float/double you'd start with a number written in binary.
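    A sketch of that representation (my example; the scale byte sits in bits 16-23 of the flags element returned by decimal.GetBits):

    int[] bits = decimal.GetBits(12345.65789m);  // stored as 1234565789 * 10^-5
    int scale = (bits[3] >> 16) & 0xFF;
    Console.WriteLine($"{bits[0]} with scale {scale}");  // 1234565789 with scale 5
    Console.WriteLine(1.0m);   // 1.0  (scale 1)
    Console.WriteLine(1.00m);  // 1.00 (scale 2) - trailing zeros are preserved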
  • Ehsan
    Ehsan about 8 years
    The other aspect is conversion between these data types: single and double use "fuzzy" comparison; conversion from double to single loses precision; conversion from single to double creates inaccuracy; conversion to/from decimal introduces rounding errors between bases. Create a team convention (style guide) for the data types you use, and watch for conversions.
  • James Moore
    James Moore about 8 years
    If you're doing financial calculations, you absolutely have to roll your own datatypes or find a good library that matches your exact needs. Accuracy in a financial setting is defined by (human) standards bodies and they have very specific localized (both in time and geography) rules about how to do calculations. Things like correct rounding aren't captured in the simple numeric datatypes in .Net. The ability to do calculations is only a very small part of the puzzle.
  • David
    David over 7 years
    Another difference: float 32-bit; double 64-bit; and decimal 128-bit.
  • Drew Noakes
    Drew Noakes over 7 years
    Compilation only fails if you attempt to divide a literal decimal by zero (CS0020), and the same is true of integral literals. However if a runtime decimal value is divided by zero, you'll get an exception not a compile error.
  • Winter
    Winter about 7 years
    @BrainSlugs83 However, you might not want to parse "Infinity" or "NaN" depending on the context. Seems like a good exploit for user input if the developer isn't rigorous enough.
  • BrainSlugs83
    BrainSlugs83 over 6 years
    The "point something" you mentioned is generally referred to as "the fractional part" of a number. "Floating point" does not mean "a number with a point something on the end"; but instead "Floating Point" distinguishes the type of number, as opposed to a "Fixed Point" number (which can also store a fractional value); the difference is whether the precision is fixed, or floating. -- Floating point numbers give you a much bigger dynamic range of values (Min and Max), at the cost of precision, whereas a fixed point numbers give you a constant amount of precision at the cost of range.
  • BrainSlugs83
    BrainSlugs83 over 6 years
    Er... decimal.Parse("0.00006317592") works -- you've got something else going on. -- Possibly scientific notation?
  • BrainSlugs83
    BrainSlugs83 over 6 years
    The difference is more than just precision. -- decimal is actually stored in decimal format (as opposed to base 2; so it won't lose or round digits due to conversion between the two numeric systems); additionally, decimal has no concept of special values such as NaN, -0, ∞, or -∞.
  • BrainSlugs83
    BrainSlugs83 over 6 years
    Pretty much all modern systems, even cell phones, have hardware support for double; and if your game has even simple physics, you will notice a big difference between double and float. (For example, calculating the velocity / friction in a simple Asteroids clone, doubles allow acceleration to flow much more fluidly than float. -- Seems like it shouldn't matter, but it totally does.)
  • yoyo
    yoyo over 6 years
    Doubles are also double the size of floats, meaning you need to chew through twice as much data, which hurts your cache performance. As always, measure and proceed accordingly.
  • Mark Dickinson
    Mark Dickinson over 6 years
    What does this answer add that isn't already covered in the existing answers? BTW, your "or" in the "decimal" line is incorrect: the slash in the web page that you're copying from indicates division rather than an alternative.
  • Mark Dickinson
    Mark Dickinson over 6 years
    And I'd dispute strongly that precision is the main difference. The main difference is the base: decimal floating-point versus binary floating-point. That difference is what makes Decimal suitable for financial applications, and it's the main criterion to use when deciding between Decimal and Double. It's rare that Double precision isn't enough for scientific applications, for example (and Decimal is often unsuitable for scientific applications because of its limited range).
  • user1477332
    user1477332 over 5 years
    Decimal is 128 bits ... means it occupies 16 bytes not 12
  • John Henckel
    John Henckel about 5 years
    I really like this answer, especially the question "do we count or measure money?" However, other than money, I can't think of anything that is "counted" that is not simply integer. I have seen some applications that use decimal simply because double has too few significant digits. In other words, decimal might be used because C# does not have a quadruple type en.wikipedia.org/wiki/Quadruple-precision_floating-point_format
  • Andrzej Gis
    Andrzej Gis almost 5 years
    @JonSkeet For floats/doubles we get: Console.WriteLine(0.1 + 0.2 == 0.3); // false. If I get it right, it's not equal because of the conversion from decimal notation we use in code to the binary notation used in memory. Can we do it the other way though? Initialize decimal variables with binary notation in code and then get a similar mismatch?
  • Jon Skeet
    Jon Skeet almost 5 years
    @AndrzejGis: No, because every binary value is exactly representable in decimal. (Basically because 2 is a factor of 10.)
  • Prabu
    Prabu over 4 years
    @JonSkeet What are some examples of more artefacts of nature? Would you consider the speed (mph) or consumption (litres/minute) or latitude/longitude good candidates for a double?
  • Jon Skeet
    Jon Skeet over 4 years
    @Prabu: Yes, those feel pretty natural to me.
  • Robert McKee
    Robert McKee over 4 years
    decimal.Parse("0.00006317592") works, but decimal.Parse(0.00006317592.ToString()) does not as @SergioL suggested. 0.00006317592.ToString() becomes 6.317592E-05 and decimal.Parse does not like that.
  • Jose Henrique
    Jose Henrique about 4 years
    Better answer!! =))
  • awe
    awe over 3 years
    @ChibuezeOpata: Jon Skeet's answer might be a bit confusing, but it has infinite accuracy....
  • Twisted Code
    Twisted Code over 3 years
    as I tried to suggest by editing (before I had enough reputation to comment; sorry if my suggested edit wasted your or anyone else's time, by the way), I feel the note about performance at the end of your second bullet point should either not be there at all, or be expanded (as I tried to do) into its own bullet point. It seems out of place as is.
  • Jon Skeet
    Jon Skeet over 3 years
    @TwistedCode: I can't see your suggested edit now, but I'm reasonably comfortable with it being there.
  • Andrea Leganza
    Andrea Leganza over 3 years
    Does anyone know why these types have different ranges of significant digits?
  • Reid Moffat
    Reid Moffat about 2 years
    The question was asking for the difference and advantages/disadvantages of each