What does "real*8" mean?


Solution 1

The 8 refers to the number of bytes that the data type uses.

Along the same lines, a 32-bit integer is integer*4.

A quick search found this guide to Fortran data types, which includes:

The "real*4" statement specifies the variable names to be single precision 4-byte real numbers, which have about 7 digits of accuracy and a magnitude range of 10^-38 to 10^+38. The "real" statement is the same as the "real*4" statement on nearly all 32-bit computers.

and

The "real*8" statement specifies the variable names to be double precision 8-byte real numbers, which have about 15 digits of accuracy and a magnitude range of 10^-308 to 10^+308. The "double precision" statement is the same as the "real*8" statement on nearly all 32-bit computers.
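As a minimal sketch, the two quoted declarations look like this in practice (star notation is a non-standard but widely supported extension; the printed digits illustrate the roughly 7 vs. 15 digits of accuracy):

```fortran
! Sketch: 4-byte vs. 8-byte reals using the legacy star notation.
program star_notation
  implicit none
  real*4 :: s   ! single precision, ~7 significant digits
  real*8 :: d   ! double precision, ~15 significant digits
  s = 1.0 / 3.0
  d = 1.0d0 / 3.0d0
  print *, 's =', s
  print *, 'd =', d
end program star_notation
```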

Solution 2

There are now at least 4 ways to specify precision in Fortran.

As already answered, real*8 specifies the number of bytes. It is somewhat obsolete but should be safe.

The new way is with "kinds". One should use the intrinsic function selected_real_kind to obtain a kind with the precision you need. Hard-coding a specific numeric kind value is risky because different compilers use different values.
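A sketch of the kind-based approach, requesting a kind by precision rather than by a hard-coded number (real64 requires a Fortran 2008 compiler):

```fortran
! Sketch: portable double precision via kinds instead of real*8.
program kinds_demo
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  ! Ask for at least 15 decimal digits and exponent range 307:
  integer, parameter :: dp = selected_real_kind(15, 307)
  real(kind=dp)     :: x   ! portable double precision
  real(kind=real64) :: y   ! explicit 64-bit real (Fortran 2008)
  x = 1.0_dp / 3.0_dp
  y = 1.0_real64 / 3.0_real64
  print *, x, y
end program kinds_demo
```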

Yet another way is to use the named types of the ISO_C_Binding. This question discusses the kind system for integers -- it is very similar for reals.
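A sketch of the ISO_C_Binding route, which is mainly useful when the reals must match C's float/double in mixed-language code:

```fortran
! Sketch: real kinds that interoperate with C types.
program c_kinds_demo
  use, intrinsic :: iso_c_binding, only: c_float, c_double
  implicit none
  real(kind=c_float)  :: f   ! matches C "float"
  real(kind=c_double) :: d   ! matches C "double"
  f = 1.0_c_float / 3.0_c_float
  d = 1.0_c_double / 3.0_c_double
  print *, f, d
end program c_kinds_demo
```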

Solution 3

The star notation (as TYPE*n is called) is a non-standard Fortran construct if used with TYPE other than CHARACTER.

If applied to the character type, it declares a string of n characters (e.g. character*5 is a string of length 5).

If applied to any other type, it specifies the storage size in bytes. This should be avoided at all costs in Fortran 90+, which introduces the concept of type KIND. Specifying storage size directly creates non-portable applications.
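A short sketch contrasting the one standard use of star notation (character length) with a kind-based replacement for real*8:

```fortran
! Sketch: star notation on characters vs. KIND on reals.
program star_vs_kind
  implicit none
  character*10 old_style             ! legacy: string of length 10
  character(len=10) :: new_style     ! modern equivalent
  integer, parameter :: dp = kind(1.0d0)
  real(kind=dp) :: x                 ! portable replacement for real*8
  old_style = 'legacy'
  new_style = 'modern'
  x = 2.0_dp
  print *, old_style, new_style, x
end program star_vs_kind
```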

Author by

Andrew

Updated on September 07, 2020

Comments

  • Andrew, over 3 years ago

    The manual of a program written in Fortran 90 says, "All real variables and parameters are specified in 64-bit precision (i.e. real*8)."

    According to Wikipedia, single precision corresponds to 32-bit precision, whereas double precision corresponds to 64-bit precision, so apparently the program uses double precision.

    But what does real*8 mean?

    I thought that the 8 meant that 8 digits follow the decimal point. However, Wikipedia seems to say that single precision typically provides 6-9 digits whereas double precision typically provides 15-17 digits. Does this mean that the statement "64-bit precision" is inconsistent with real*8?