C++/POSIX: what is the most efficient way to get a millisecond timestamp?

Solution 1

It's clear that your example spends most of its CPU time in the timer_nowtime() function. You are polling, and the busy loop eats your CPU time. You could swap the timer function for a faster alternative and thereby squeeze in more loop iterations, but the loop would still spend most of its CPU time in that function! You will not reduce CPU usage merely by changing your timer function.

You may change your loop and introduce wait times, but only if that makes sense in your application, e.g.:

start = timer_nowtime();
while( i2c_CheckBit(dev) ) {
    now  = timer_nowtime();
    diff = now - start;
    if( diff >= I2C_TIMEOUT ) break;   // give up once the timeout has expired
    if( diff > SOME_THRESHOLD )        // past the initial busy-wait phase: sleep instead of spinning
        usleep( 1000 * std::max( 1L, (long)I2C_TIMEOUT - (long)diff - SOME_SMALL_NUMBER_1_TO_10_MAYBE ) );
}

As for the timer: I think gettimeofday() would be a good choice; it has high precision and is available on most (all?) Unices.
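
For example, a minimal sketch of a millisecond wrapper around gettimeofday() could look like this (the function name is just illustrative):

#include <stddef.h>
#include <sys/time.h>

// Sketch: current wall-clock time in milliseconds since the epoch, via gettimeofday().
// On a 32-bit unsigned long this wraps roughly every 49 days, but differences stay usable within that window.
unsigned long nowtime_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (unsigned long)tv.tv_sec * 1000UL + tv.tv_usec / 1000UL;
}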

Solution 2

Note that your two examples are not equivalent; times(2) measures CPU time consumption, whereas clock_gettime(CLOCK_REALTIME, ...) measures wall-clock time.

That being said, wallclock timers such as clock_gettime or the older gettimeofday are usually much faster than CPU timers such as times() or getrusage().

And, as already noted, your timing function uses up so much CPU time because you do little else than poll it. If that is a problem, wait a little between polls, e.g. by calling nanosleep() or clock_nanosleep().
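
For instance, a sketch of a short pause between polls (the 1 ms value is just illustrative) might look like this:

#include <stddef.h>
#include <time.h>

// Sketch: sleep roughly 1 ms between polls instead of spinning.
static void poll_pause(void)
{
    struct timespec ts = { 0, 1000000L };  // 0 s + 1,000,000 ns = 1 ms
    nanosleep(&ts, NULL);
}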

I think your best option is to use clock_gettime(), but with CLOCK_MONOTONIC instead of CLOCK_REALTIME, so the result is not disturbed when the system clock is adjusted.
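
A monotonic millisecond timestamp could then look roughly like this sketch (the function name is only illustrative; CLOCK_MONOTONIC is not affected by settimeofday() or NTP steps):

#include <time.h>

// Sketch: milliseconds from CLOCK_MONOTONIC, suitable for measuring intervals.
unsigned long nowtime_mono_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (unsigned long)ts.tv_sec * 1000UL + ts.tv_nsec / 1000000UL;
}

On older glibc versions you may need to link with -lrt for clock_gettime().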

Solution 3

If your program only executes on recent Intel/AMD processors, but not too recent ones (clock throttling is not handled very well), the RDTSC assembly instruction is the best way to get a timestamp. The resolution is close to the actual clock of the processor, and it is independent of the operating system (which also means it is biased by interrupts; you can't have your cake and eat it too).

This page has an example in C.
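
Since the linked page isn't reproduced here, a minimal sketch of reading the counter with GCC/Clang inline assembly on x86/x86-64 could look like this; converting ticks to milliseconds still requires calibrating the TSC frequency separately:

#include <stdint.h>

// Sketch: read the processor's time-stamp counter (x86/x86-64, GCC/Clang inline asm).
static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}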

Solution 4

I have been happy with this on Linux:

inline double getFractionalSeconds(void) {
   struct timeval tv;   // see gettimeofday(2)
   gettimeofday(&tv, NULL);
   double t = (double) tv.tv_sec + (double) 1e-6 * tv.tv_usec; 
   // return seconds.microseconds since epoch 
   return(t);
}

Comments

  • Maus, almost 2 years ago

    I'm using an open-source library for I2C bus operations. This library frequently calls a function to obtain the current timestamp with millisecond resolution.

    Example Call:

    nowtime = timer_nowtime();
    while ((i2c_CheckBit(dev) == true) && ((timer_nowtime() - nowtime) < I2C_TIMEOUT));
    

    The application using this I2C library consumes a lot of CPU time. I found that the running program spends most of its time in the function timer_nowtime().

    The original function:

    unsigned long timer_nowtime(void) {        
        static bool usetimer = false;
        static unsigned long long inittime;
        struct tms cputime;
    
        if (usetimer == false)
        {
            inittime  = (unsigned long long)times(&cputime);
            usetimer = true;
        }
    
        return (unsigned long)((times(&cputime) - inittime)*1000UL/sysconf(_SC_CLK_TCK));
    }
    

    My aim now is to improve the efficiency of this function. I tried it this way:

    struct timespec systemtime;
    
    clock_gettime(CLOCK_REALTIME, &systemtime);
    //convert the to milliseconds timestamp
    // incorrect way, because (1 / 1000000UL) always returns 0 -> thanks Pace
    //return (unsigned long) ( (systemtime.tv_sec * 1000UL) + (systemtime.tv_nsec
    //              * (1 / 1000000UL)));
    return (unsigned long) ((systemtime.tv_sec * 1000UL)
                + (systemtime.tv_nsec / 1000000UL));
    

    Unfortunately, I can't declare this function inline (no clue why).

    Which way is more efficient to obtain the current timestamp with ms resolution? I'm sure there is a more performant way to do so. Any suggestions?

    thanks.