C++ windows time


Solution 1

The "canonical" answer was given by unwind:

One popular way is using the QueryPerformanceCounter() call.

There are, however, a few problems with this method:

  1. It's intended for measuring time intervals, not absolute time. This means you have to write code to establish the "epoch time" from which you will measure precise intervals. This is called calibration.
  2. As you calibrate your clock, you also need to periodically adjust it so it never gets too far out of sync with your system clock (this is called drift).
  3. QueryPerformanceCounter is not implemented in user space; this means a context switch is needed to call the kernel side of the implementation, and that is relatively expensive (around 0.7 microseconds). This seems to be required to support legacy hardware.

Not all is lost, though. Points 1 and 2 are something you can do with a bit of coding, and 3 can be replaced with a direct call to RDTSC (available in newer versions of Visual C++ via the __rdtsc() intrinsic), as long as you know the accurate CPU clock frequency. On older CPUs such a call was susceptible to changes in the CPU's internal clock speed, but on all newer Intel and AMD CPUs it is guaranteed to give fairly accurate results and won't be affected by changes in CPU clock (e.g. power-saving features).

Let's get started with point 1. Here is the data structure that holds the calibration data:

struct init
{
  long long stamp; // last adjustment time
  long long epoch; // last sync time as FILETIME
  long long start; // counter ticks to match epoch
  long long freq;  // counter frequency (ticks per 10ms)

  void sync(int sleep);
};

init                  data_[2] = {};
const init* volatile  init_ = &data_[0];

Here is the code for the initial calibration; it has to be given time (in milliseconds) to wait for the clock to move. I've found that 500 milliseconds gives pretty good results (the shorter the time, the less accurate the calibration). For the purpose of calibration we are going to use QueryPerformanceCounter() and friends. You only need to call it for data_[0], since data_[1] will be updated by the periodic clock adjustment (below).

void init::sync(int sleep)
{
  LARGE_INTEGER t1, t2, p1, p2, r1, r2, f;
  int cpu[4] = {};

  // prepare for rdtsc calibration - affinity and priority
  SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
  SetThreadAffinityMask(GetCurrentThread(), 2);
  Sleep(10);

  // frequency for time measurement during calibration
  QueryPerformanceFrequency(&f);

  // for explanation why RDTSC is safe on modern CPUs, look for "Constant TSC" and "Invariant TSC" in
  // Intel(R) 64 and IA-32 Architectures Software Developer’s Manual (document 253668.pdf)

  __cpuid(cpu, 0); // flush CPU pipeline
  r1.QuadPart = __rdtsc();
  __cpuid(cpu, 0);
  QueryPerformanceCounter(&p1);

  // sleep some time; it doesn't matter that it's not accurate.
  Sleep(sleep);

  // wait for the system clock to move, so we have exact epoch
  GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
  do
  {
    Sleep(0);
    GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
    __cpuid(cpu, 0); // flush CPU pipeline
    r2.QuadPart = __rdtsc();
  } while(t2.QuadPart == t1.QuadPart);

  // measure how much time has passed exactly, using more expensive QPC
  __cpuid(cpu, 0);
  QueryPerformanceCounter(&p2);

  stamp = t2.QuadPart;
  epoch = t2.QuadPart;
  start = r2.QuadPart;

  // calculate counter ticks per 10ms
  freq = f.QuadPart * (r2.QuadPart-r1.QuadPart) / 100 / (p2.QuadPart-p1.QuadPart);

  SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
  SetThreadAffinityMask(GetCurrentThread(), 0xFF);
}

With good calibration data you can calculate exact time from cheap RDTSC (I measured the call and calculation to be ~25 nanoseconds on my machine). There are three things to note:

  1. The return type is binary compatible with the FILETIME structure and is precise to 100ns, unlike GetSystemTimeAsFileTime (which increments in intervals of 10-30ms or so, or 1 millisecond at best).

  2. In order to avoid expensive integer-to-double-to-integer conversions, the whole calculation is performed in 64-bit integers. Even though these can hold huge numbers, there is a real risk of integer overflow, and so start must be brought forward periodically to avoid it. This is done in the clock adjustment.

  3. We are making a copy of the calibration data, because it might be updated during our call by the clock adjustment running in another thread.

Here is the code to read current time with high precision. Return value is binary compatible with FILETIME, i.e. number of 100-nanosecond intervals since Jan 1, 1601.

long long now()
{
  // must make a copy
  const init* it = init_;
  // __cpuid(cpu, 0) - no need to flush CPU pipeline here
  const long long p = __rdtsc();
  // time passed from epoch in counter ticks
  long long d = (p - it->start);
  if (d > 0x80000000000ll)
  {
    // approaching integer overflow, must adjust now
    adjust();
  }
  // convert 10ms to 100ns periods
  d *= 100000ll;
  d /= it->freq;
  // and add to epoch, so we have proper FILETIME
  d += it->epoch;
  return d;
}

For clock adjustment, we need to capture the exact time (as provided by the system clock) and compare it against our clock; this gives us the drift value. Next we use a simple formula to calculate an "adjusted" CPU frequency, to make our clock meet the system clock at the time of the next adjustment. Thus it is important that adjustments are made at regular intervals; I've found that it works well when called at 15-minute intervals. I use CreateTimerQueueTimer, called once at program startup to schedule the adjustment calls (not demonstrated here).

The slight problem with capturing an accurate system time (for the purpose of calculating drift) is that we need to wait for the system clock to move, and that can take up to 30 milliseconds or so (which is a long time). If the adjustment were not performed, we would risk integer overflow inside now(), not to mention uncorrected drift from the system clock. There is built-in protection against overflow in now(), but we really don't want to trigger it synchronously in a thread which happened to call now() at the wrong moment.

Here is the code for the periodic clock adjustment; the clock drift is r->epoch - r->stamp:

void adjust()
{
  // must make a copy
  const init* it = init_;
  init* r = (init_ == &data_[0] ? &data_[1] : &data_[0]);
  LARGE_INTEGER t1, t2;

  // wait for the system clock to move, so we have exact time to compare against
  GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
  long long p = 0;
  int cpu[4] = {};
  do
  {
    Sleep(0);
    GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
    __cpuid(cpu, 0); // flush CPU pipeline
    p = __rdtsc();
  } while (t2.QuadPart == t1.QuadPart);

  long long d = (p - it->start);
  // convert 10ms to 100ns periods
  d *= 100000ll;
  d /= it->freq;

  r->start = p;
  r->epoch = d + it->epoch;
  r->stamp = t2.QuadPart;

  const long long dt1 = t2.QuadPart - it->epoch;
  const long long dt2 = t2.QuadPart - it->stamp;
  const double s1 = (double) d / dt1;
  const double s2 = (double) d / dt2;

  r->freq = (long long) (it->freq * (s1 + s2 - 1) + 0.5);

  InterlockedExchangePointer((volatile PVOID*) &init_, r);

  // if you have log output, here is good point to log calibration results
}

Lastly, two utility functions. One converts a FILETIME (including the output of now()) to SYSTEMTIME, returning the microseconds in a separate int. The other returns the counter frequency, so your program can use __rdtsc() directly for accurate measurement of time intervals (with nanosecond precision).

void convert(SYSTEMTIME& s, int &us, long long f)
{
  LARGE_INTEGER i;
  i.QuadPart = f;
  FileTimeToSystemTime((FILETIME*) (&i.u), &s);
  s.wMilliseconds = 0;
  LARGE_INTEGER t;
  SystemTimeToFileTime(&s, (FILETIME*) (&t.u));
  us = (int) ((i.QuadPart - t.QuadPart) / 10);
}

long long frequency()
{
  // must make a copy
  const init* it = init_;
  return it->freq * 100;
}

Well, of course, none of the above is more accurate than your system clock, which is unlikely to be more accurate than a few hundred milliseconds. The purpose of a precise clock (as opposed to an accurate one), as implemented above, is to provide a single measure which can be used for both:

  1. cheap and very accurate measurement of time intervals (not wall time),
  2. a much less accurate, but monotonic and consistent with the above, measure of wall time.

I think it does this pretty well. An example use is logging, where one can use the timestamps not only to find the time of events, but also to reason about internal program timings, latency (in microseconds), etc.

I leave the plumbing (call to initial calibration, scheduling adjustment) as an exercise for gentle readers.

Solution 2

You can use the Boost Date_Time library.

You can use boost::posix_time::hours, boost::posix_time::minutes, boost::posix_time::seconds, boost::posix_time::millisec, boost::posix_time::nanosec

http://www.boost.org/doc/libs/1_39_0/doc/html/date_time.html

Solution 3

One popular way is using the QueryPerformanceCounter() call. This is useful if you need high-precision timing, such as for measuring durations on the order of microseconds. I believe this is implemented using the RDTSC machine instruction.

There might be issues, though, such as the counter frequency varying with power saving, and synchronization between multiple cores. See the Wikipedia article on the Time Stamp Counter for details on these issues.

Solution 4

Take a look at the Windows APIs GetSystemTime() / GetLocalTime() or GetSystemTimeAsFileTime().

GetSystemTimeAsFileTime() expresses time in 100-nanosecond intervals, that is, 1/10 of a microsecond. All of these functions provide the current time within millisecond accuracy.

EDIT:

Keep in mind that on most Windows systems the system time is only updated about every 1 millisecond. So even though the representation has microsecond resolution, the time you acquire will not actually be precise to the microsecond.

Solution 5

Take a look at this: http://www.decompile.com/cpp/faq/windows_timer_api.htm

Author: Boris Raznikov

Updated on July 24, 2022

Comments

  • Boris Raznikov, almost 2 years ago:

    I have a problem using time: I want to get microseconds on Windows using C++, but I can't find a way.