Why would you use float over double, or double over long double?
Solution 1
In nearly all processors, "smaller" floating point numbers take the same number of clock cycles or fewer to execute. Sometimes the difference isn't very big (or is nothing); other times it can be literally twice the number of cycles for double vs. float.
Of course, memory footprint, which affects cache usage, will also be a factor. float takes half the size of double, and long double is bigger yet.
Edit: Another side-effect of smaller size is that the processor's SIMD extensions (3DNow!, SSE, AVX in x86; similar extensions are available in several other architectures) may either only work with float, or can process twice as many float values as double per instruction (and as far as I know, no SIMD instructions are available for long double on any processor). So using float instead of double may improve performance by processing twice as much data in one go. End edit.
So, assuming 6-7 digits of precision is good enough for what you need, and a range of ±10^±38 is sufficient, then float should be used. If you need either more digits in the number or a bigger range, move to double, and if that's not good enough, use long double. But for most things, double should be perfectly adequate.
Obviously, using "the right size" becomes more important when you have either lots of calculations or lots of data to work with. If there are 5 variables and you just use each a couple of times in a program that does a million other things, who cares? But if you are doing fluid dynamics calculations for how well a Formula 1 car performs at 200 mph, you probably have several tens of millions of data points, each of which needs to be recalculated dozens of times per second of the car's travel; then using up even a few extra clock cycles per calculation will make the whole simulation noticeably slower.
Solution 2
There are two costs to using float, the obvious one of its limited range and precision, and, less obviously, the more difficult analysis those limitations impose.
It is often relatively easy to determine that double is sufficient, even in cases where it would take significant numerical analysis effort to show that float is sufficient. That saves development cost, and risk of incorrect results if the more difficult analysis is not done correctly.
Float's biggest advantage on many processors is its reduced memory footprint. That translates into more numbers per cache line, and more memory bandwidth in terms of numbers transferred per second. Any gain in compute performance is usually relatively slight - indeed, popular processors do all floating point arithmetic in one format that is wider than double.
It seems best to use double unless two conditions are met - there are enough numbers for their memory footprint to be a significant performance issue, and the developers can show that float is precise enough.
Solution 3
You might be interested in the answers posted here: Should I use double or float?
But it boils down to memory footprint vs the amount of precision you need for a given situation. In a physics engine, you might care more about precision, so it would make more sense to use a double or long double.
Bottom line: You should only use as much precision as you need for a given algorithm.
Solution 4
The basic principle here would be don't use more than you need.
The first consideration is memory use; you probably realized that already. If you are making only one double, it's no big deal, but if you create a billion, then you have used twice as much memory as you had to.
Next is processor utilization: many processors can operate on multiple smaller values at once. Extending that point, SSE instructions let you use packed data to perform several floating point operations in a single instruction, which in an idealized case can double the speed of your program.
Lastly is readability: when someone is reading your code, a float signals immediately that you are not going over a certain range. IMO, sometimes the right-precision number will just flow better in the code.
Solution 5
A float uses less memory than a double, so if you don't need your number to be the size of a double, you might as well use a float since it will take up less memory.
Just like you wouldn't use a bus to drive yourself and a friend to the beach... you would be far better off going in a 2 seater car.
The same applies for a double over a long double... only reserve as much memory as you are going to need. Otherwise with more complex code you run the risk of using too much memory and having processes slow down or crash.
floatfil
Updated on February 07, 2020
Comments
-
floatfil over 4 years
I'm still a beginner at programming and I always have more questions than our book or internet searches can answer (unless I missed something). So I apologize in advance if this was answered but I couldn't find it.
I understand that float has a smaller range than double, making it less precise, and from what I understand, long double is even more precise(?). So my question is: why would you want to use a variable that is less precise in the first place? Does it have something to do with different platforms, different OS versions, different compilers? Or are there specific moments in programming where it's strategically more advantageous to use a float over a double/long double?
Thanks everyone!
-
chris almost 11 years: You don't lose precision when using functions that take a narrower form, at least (or at least you risk doing so and get a warning).
-
Captain Obvlious almost 11 years: Sometimes precision is less important than memory footprint.
-
aaronman almost 11 years: Doesn't really answer the question, just agrees with his statement.
-
floatfil almost 11 years: Thanks a lot, this really helped me to understand the point of those data types. I just wish our books would specify why they use double in the beginner books (now I know) over float, since those small programs never even came close to reaching the bounds of either data type. I'm just terrible at memorizing without fully understanding certain things.
-
aaronman almost 11 years: Float ops aren't necessarily faster than doubles; generally both take one cycle for mult.
-
aaronman almost 11 years: Also, you should probably have just marked the question as a dupe instead of posting the dupe as an answer.
-
Mats Petersson almost 11 years: @aaronman: Thanks. I try to take comments into account when they are sensible/correct. It's part of "learning" if nothing else. (Although in this case more of a "keep it simple"; rather than enumerate all possible cases of equal and faster and slower variations, I just made one sentence that I thought was sufficiently good...)
-
aaronman almost 11 years: @EricPostpischil I didn't say the latency was always the same; as for SIMD, see my answer.
-
aaronman almost 11 years: @EricPostpischil you're probably correct; not gonna start an argument with a lead SE at Apple. How would you amend my comment?
-
Eric Postpischil almost 11 years: @aaronman: Well, comments on answers are generally for improving the answers. So I actually prefer to delete them after they have been addressed, to reduce clutter. If the answer should still be updated, I would enter a new comment describing the suggested update clearly, then delete the old comment. In this case, I do not think the answer needs to be specific about cycles consumed, since the subject is general guidance about floating-point execution time. For general purposes, it suffices to know that single might be faster than double, by slight amounts or great, depending on the situation.
-
aaronman almost 11 years: @EricPostpischil I meant how to improve the answer, because he already acted on my comment. All I meant by my comment is that generally FLOPs will take equal amounts of time on doubles and floats unless you use SIMD instructions.
-
MSalters almost 11 years: @aaronman: Perhaps for float*float, but sin(float) often is significantly faster.
-
aka.nice almost 11 years: Don't use more than you need: this is a valid point of view, but as underlined in Patricia's answer, how far you can degrade accuracy can be a difficult question, so you have to trade some costs (development vs runtime)... No premature optimization is an equally valid point of view.
-
aaronman almost 11 years: @aka.nice I'm not sure what the point of this comment is; the question is why would you use float, not why would you use double.
-
aaronman almost 11 years: @MSalters I think I was misinterpreted; my point was that some instructions are equal in speed, so if those instructions are the bulk of your code it might not matter.
-
aka.nice almost 11 years: Yes, sure, your points are valid reasons for using float, but they must be balanced against other trade-offs; that's what I tried to say.
-
aaronman almost 11 years: @aka.nice I understand; the question didn't ask for the other side, implying they already knew that side, so I assumed it was not necessary to reiterate.
-
Ruslan about 5 years: May be worth noting that long double is often not more precise than double: e.g. on MS compilers it's the same 64-bit data type as double (although distinct from the POV of function overloading etc.).