PI and accuracy of a floating-point number
Solution 1
#include <stdio.h>

#define E_PI 3.1415926535897932384626433832795028841971693993751058209749445923078164062

int main(int argc, char** argv)
{
    long double pild = E_PI;
    double pid = pild;
    float pif = pid;
    printf("%s\n%1.80f\n%1.80f\n%1.80Lf\n",
           "3.14159265358979323846264338327950288419716939937510582097494459230781640628620899",
           pif, pid, pild);
    return 0;
}
Results:
[quassnoi #] gcc --version
gcc (GCC) 4.3.2 20081105 (Red Hat 4.3.2-7)
[quassnoi #] ./test
3.14159265358979323846264338327950288419716939937510582097494459230781640628620899
3.14159274101257324218750000000000000000000000000000000000000000000000000000000000
^
3.14159265358979311599796346854418516159057617187500000000000000000000000000000000
^
3.14159265358979311599796346854418516159057617187500000000000000000000000000000000
^
0000000001111111
1234567890123456
Solution 2
When I examined Quassnoi's answer, it seemed suspicious to me that long double and double would end up with the same accuracy, so I dug in a little. If I compiled his code with clang, I got the same results he did. However, I found that if I added the long double suffix and used a literal to initialize the long double, it provided more precision. Here is my version of his code:
#include <stdio.h>

int main(int argc, char** argv)
{
    long double pild = 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899L;
    double pid = pild;
    float pif = pid;
    printf("%s\n%1.80f\n%1.80f\n%1.80Lf\n",
           "3.14159265358979323846264338327950288419716939937510582097494459230781640628620899",
           pif, pid, pild);
    return 0;
}
And the results:
3.14159265358979323846264338327950288419716939937510582097494459230781640628620899
3.14159274101257324218750000000000000000000000000000000000000000000000000000000000
^
3.14159265358979311599796346854418516159057617187500000000000000000000000000000000
^
3.14159265358979323851280895940618620443274267017841339111328125000000000000000000
^
Solution 3
6 places and 14 places. One significant digit sits before the decimal point (the 3), and the last place, although stored, can't be counted as a precise digit.
And sorry, but I don't know what "extended" means without more context. Do you mean C#'s decimal?
Solution 4
Accuracy of a floating-point type is not related to pi or to any specific number. It only depends on how many digits are stored in memory for that specific type.
In the case of IEEE-754, float uses 23 bits of mantissa, so it is accurate to 23+1 bits of precision, or ~7 digits of precision in decimal. Regardless of π, e, 1.1, 9.87e9... all of them are stored with exactly 24 significant bits in a float. Similarly double (53 bits of mantissa) can store 15~17 decimal digits of precision.
Solution 5
Print and count, baby, print and count. (Or read the specs.)
Admin
Updated on October 15, 2021

Comments
-
rmeador, about 15 years: interesting test... unfortunately, I bet it's all sorts of system dependent :P
-
Quassnoi, about 15 years: Sure, that's why I put gcc --version there
-
Quassnoi, about 15 years: I used math.h only for the M_PI constant; I think it should be the same in every version, it's pi, after all :) Anyway, I updated the code not to use math.h
-
Admin, about 15 years: Please see "An Informal Description of IEEE754": cse.ttu.edu.tw/~jmchen/NM/refs/story754.pdf
-
fredoverflow, almost 12 years: @Hrushikesh The link is dead :( But I have found a working link.
-
thephred, about 10 years: This appears to be compiler and architecture dependent, however: en.wikipedia.org/wiki/Long_double
-
Madcowswe, about 9 years: This test is invalid for the extended-precision result, because your #define literal for pi is in double precision. You need it to be an extended-precision literal. See this.
-
phuclv, about 7 years: the E_PI must have an L suffix to get long double precision, otherwise it'll be stuck at double precision
-
phuclv, about 5 years: __sinpi() and __cospi() are definitely not standard functions. It's easy to see, as they have the __ prefix. Searching for them mostly returns results for macOS and iOS. This question said they were added by Apple (Implementation of sinpi() and cospi() using standard C math library), and the man page also says it's in OSX
-
Cal-linux, about 5 years: Your logic / conclusion is actually incorrect. It is related to the specific value: the binary representation of floating-point numbers has a fixed number of bits for the mantissa, but depending on the exponent, some of those bits are used to represent the integer portion or the decimal portion. An example that helps visualize this: store pi in a double and it will be accurate up to the 15th decimal (at least for the gcc that comes with Ubuntu 18, running on an Intel Core i5; I believe it's mapped to IEEE-754). Store 1000*pi, and it will be accurate up to the 12th decimal.
-
phuclv, about 5 years: @Cal-linux you're mistaking the precision of a type for the error after doing operations. If you do 1000*pi and get a slightly less accurate result, that doesn't mean the precision was reduced. You got it wrong because you don't understand what "significand" is, which isn't counted after the radix point. In fact 1000*pi loses only 1 digit of precision and is still correct to the 15th digit of the significand, not the 12th. You're also confusing 'precision' with 'accuracy'.
-
phuclv, about 5 years: and if you have the exact 1000pi constant instead of computing it through a multiplication at runtime, you'll still get exactly 53 bits of precision
-
Cal-linux, about 5 years: you're still getting it wrong. It is a well-known aspect of floating point that the accuracy/error of the representation is unevenly distributed across the range; you can distinguish between 0.1 and 0.1000001, but not between 10^50 and (0.0000001 + 10^50). FP stores a value as x times 2^y, where x uses a given number of bits to represent a value between 1 and 2 (or was it between 0 and 1? I forget now), and y has a range given by the number of bits assigned to it. If y is large, the accuracy of x is mostly consumed by the integer part.
-
Cal-linux, about 5 years: As for the exact 1000pi as a constant: you may get the same 53 bits of precision, but that's not what the thread is about. You get the same 16 correct decimal digits at the beginning, but now some of those 16 are used for the integer part, 3141. The decimal places are correct up to the 89793, exactly as with pi; except that in pi, that 3 in 89793 is the 15th decimal, whereas in 1000pi, it is the 12th decimal!
-
phuclv, about 5 years: @Cal-linux I'm well aware that the error and the distance between consecutive values scale according to the exponent, but it's irrelevant here. And the OP didn't ask about the decimal numbers after 1000pi
-
Cal-linux, about 5 years: "And the OP didn't ask about the decimal numbers after 1000pi" -- no, but it is directly relevant; the OP asked how many decimal places of pi are correctly represented by an FP. You argued that the actual value has no relevance, which is incorrect: for larger values, you get a smaller number of decimal places that are correctly represented. 1000pi is just an example to illustrate this; I'm still focusing, as the OP requested, on the number of decimal places, which is what your argument gets wrong.
-
Olof Forshell, almost 5 years: For the fraction part of a floating point, it is mostly incorrect to use the term decimal digits. It is correct sometimes, such as for 0.25, which is exactly representable in base 10 (as we are all familiar with) and in base 2 (2^-2). 0.1 is exact in base 10, but (because it can't be exactly represented) it will be an approximation in base 2, i.e. in the fraction part of an IEEE-754 floating-point number. 1/3 is an example of a number that cannot be exactly represented in either base.