What’s the correct way to use printf to print a clock_t?
Solution 1
There seems to be no perfect way. The root of the problem is that clock_t can be either an integer or a floating-point type.
clock_t can be a floating point type
As Bastien Léonard mentions for POSIX (go upvote him), C99 N1256 draft 7.23.1/3 also says that:
[clock_t is] arithmetic types capable of representing times
and 6.2.5/18:
Integer and floating types are collectively called arithmetic types.
and the standard defines arithmetic type as either integers or floating point types.
If you will divide by CLOCKS_PER_SEC, use long double
The return value of clock() is implementation-defined, and the only way to get standard meaning out of it is to divide by CLOCKS_PER_SEC to find the number of seconds:
clock_t t0 = clock();
/* Work. */
clock_t t1 = clock();
printf("%Lf", (long double)(t1 - t0) / CLOCKS_PER_SEC);
This is good enough, although not perfect, for the two following reasons:
- there seems to be no analogue to intmax_t for floating-point types: How to get the largest precision floating point data type of implementation and its printf specifier? So if a larger floating-point type comes out tomorrow, it could be used and break your implementation.
- if clock_t is an integer, the cast to a floating-point type is well defined to use the nearest representable value. You may lose precision, but it would not matter much compared to the absolute value, and would only happen for huge amounts of time, e.g. long double on x86 is the 80-bit float with a 64-bit significand, which is millions of years in seconds.
Go upvote lemonad, who said something similar.
If you suppose it is an integer, use %ju and uintmax_t
Although unsigned long long
is currently the largest standard integer type possible:
- a larger one could come out in the future
- the standard already explicitly allows larger implementation defined types (kudos to @FUZxxl) and
clock_t
could be one of them
so it is best to typecast to the largest unsigned integer type possible:
#include <stdint.h>
printf("%ju", (uintmax_t)(clock_t)1);
uintmax_t
is guaranteed to have the size of the largest possible integer size on the machine.
uintmax_t
and its printf specifier %ju
were introduced in c99 and gcc for example implements them.
As a bonus, this solves once and for all the question of how to reliably printf
integer types (which is unfortunately not the necessarily the case for clock_t
).
What could go wrong if clock_t were a double and you cast it to an integer:
- if the value is too large to fit into the integer type, undefined behavior
- if it is much smaller than 1, it gets rounded to 0 and you won't see anything
Since those consequences are much harsher than those of the integer-to-float conversion, using floating point is likely the better idea.
On glibc 2.21 it is an integer
The manual says that using double is a better idea:
On GNU/Linux and GNU/Hurd systems, clock_t is equivalent to long int and CLOCKS_PER_SEC is an integer value. But in other systems, both clock_t and the macro CLOCKS_PER_SEC can be either integer or floating-point types. Casting CPU time values to double, as in the example above, makes sure that operations such as arithmetic and printing work properly and consistently no matter what the underlying representation is.
In glibc 2.21:
- clock_t is long int:
  - time/time.h sets it to __clock_t
  - bits/types.h sets it to __CLOCK_T_TYPE
  - bits/typesizes.h sets it to __SLONGWORD_TYPE
  - bits/types.h sets it to long int
- clock() in Linux is implemented with sys_clock_gettime:
  - sysdeps/unix/sysv/linux/clock.c calls __clock_gettime
  - sysdeps/unix/clock_gettime.c calls SYSDEP_GETTIME_CPU
  - sysdeps/unix/sysv/linux/clock_gettime.c calls SYSCALL_GETTIME, which finally makes an inline system call
man clock_gettime tells us that it returns a struct timespec, which in GCC contains long int fields. So the underlying implementation really returns integers.
See also
- How to print types of unknown size like ino_t?
- How to use printf to display off_t, nlink_t, size_t and other special types?
Solution 2
As far as I know, the way you're doing it is the best. Except that clock_t may be a real (floating-point) type:
time_t and clock_t shall be integer or real-floating types.
http://www.opengroup.org/onlinepubs/009695399/basedefs/sys/types.h.html
Solution 3
It's probably because clock ticks are not a very well-defined unit. You can convert it to seconds and print it as a double:
time_in_seconds = (double)time_in_clock_ticks / (double)CLOCKS_PER_SEC;
printf("%g seconds", time_in_seconds);
The CLOCKS_PER_SEC macro expands to an expression representing the number of clock ticks in a second.
Solution 4
The C standard has to accommodate a wide variety of architectures, which makes it impossible to give any further guarantees beyond the fact that the internal clock type is arithmetic.
In most cases, you're interested in time intervals, so I'd convert the difference in clock ticks to milliseconds. An unsigned long is large enough to represent an interval of nearly 50 days even if it is 32-bit, so it should be large enough for most cases:
clock_t start;
clock_t end;
unsigned long millis = (end - start) * 1000 / CLOCKS_PER_SEC;
Solution 5
One way is by using the gettimeofday
function. One can find the difference using this function:
unsigned long diff(struct timeval second, struct timeval first)
{
    struct timeval lapsed;
    unsigned long t;
    /* Borrow one second so the microsecond subtraction cannot go negative. */
    if (first.tv_usec > second.tv_usec) {
        second.tv_usec += 1000000;
        second.tv_sec--;
    }
    lapsed.tv_usec = second.tv_usec - first.tv_usec;
    lapsed.tv_sec = second.tv_sec - first.tv_sec;
    t = lapsed.tv_sec * 1000000 + lapsed.tv_usec;
    printf("%ld,%ld - %ld,%ld = %ld,%ld\n",
           (long)second.tv_sec, (long)second.tv_usec,
           (long)first.tv_sec, (long)first.tv_usec,
           (long)lapsed.tv_sec, (long)lapsed.tv_usec);
    return t;
}
Updated on February 08, 2020

Comments
- Spidey about 4 years: I'm currently using an explicit cast to unsigned long long and using %llu to print it, but since size_t has the %zu specifier, why doesn't clock_t have one? There isn't even a macro for it. Maybe I can assume that on an x64 system (OS and CPU) size_t is 8 bytes in length (and even in this case, they have provided %zu), but what about clock_t?
- Spidey almost 15 years: Very well stated; I'd forgotten about it possibly being a floating-point type.
- Spidey almost 15 years: That's exactly why I can't understand why a macro for the clock_t printing format isn't specified in any header.
- Christoph almost 15 years: @Spidey: but what should the output format be if you can't make any guesses about the representation? Remember, it's not specified whether clock_t will be an integer or a floating-point value; if you want to do anything useful, you have to relate it to CLOCKS_PER_SEC, and that's beyond the domain of printf().
- Ciro Santilli OurBigBook.com almost 11 years: Is there a way to get the floating-point type with the largest possible precision and its specifier, like can be done for integers with uintmax_t and %ju? That would be the optimal way to go.
- Ciro Santilli OurBigBook.com almost 11 years: @cirosantilli: it seems not.
- Victor over 10 years: %ju? This prints exactly "ju".
- Ciro Santilli OurBigBook.com over 10 years: @Victor: Are you compiling with gcc -std=c99 (or telling your compiler to use C99)? What is your compiler version? I have just tested it, and the following works for me under gcc --version equal to gcc (Ubuntu/Linaro 4.7.3-1ubuntu1) 4.7.3: printf("printf uintmax_t = %ju\n", (uintmax_t)1);.
- Victor over 10 years: I am using Visual Studio 2010 :)
- Ciro Santilli OurBigBook.com over 10 years: It seems that C99 is not supported by VS2010. I have also read that MS has no plans to implement it in their compilers (correct me if wrong). I'd stick with C++ for Windows programming.
- Admin almost 9 years: On Windows, mingw-gcc uses the Microsoft library and thus has the same limitation: it does not recognize the newer format specifiers. Still true in May 2015.
- fuz almost 9 years: unsigned long long int is not the largest possible integer type. Platforms may provide custom integer types of larger size.
- Ciro Santilli OurBigBook.com almost 9 years: @FUZxxl I didn't know that the C standard explicitly allows that; I will search for the quote. But in my head I meant "defined by default in the C standard". With that addition, would it be correct? As you say, there are already extensions for 128-bit in GCC (stackoverflow.com/questions/5381882/…) which I did not know about.
- fuz almost 9 years: Even then you're not quite correct. Not much is said about the sizes of the types defined in standard header files. For instance, a size_t may be larger than a long long int just fine.
- Ciro Santilli OurBigBook.com almost 9 years: @FUZxxl thanks, you have taught me something new today! After reading the standard, I have updated the answer to say "standard integer type", which is a term clearly defined in C99, and linked to a more precise explanation at stackoverflow.com/a/30322474/895245.
- Ciro Santilli OurBigBook.com almost 9 years: I would rather 1) use long double, as it may have more precision, and 2) do a single typecast after the division: (long double)(time_in_clock_ticks / CLOCKS_PER_SEC), to round only once.
- JohnH about 5 years: POSIX.1-2008 marks gettimeofday() as obsolete, which can mean certain preprocessor directives are needed to get a compiler to accept it if you also use things like -D_XOPEN_SOURCE=600 and so on.