How to measure function running time in Qt?


Solution 1

You could also try to use the QElapsedTimer:

QElapsedTimer timer;
timer.start();

slowOperation1();

qDebug() << "The slow operation took" << timer.elapsed() << "milliseconds";
qDebug() << "The slow operation took" << timer.nsecsElapsed() << "nanoseconds";

Documentation of QElapsedTimer

Solution 2

clock() isn't suitable for measuring the wall-clock time spent in a function. It returns the number of ticks of processor time the program has consumed, i.e. it only counts time while your program is actually running on the CPU; blocking I/O operations and sleeps are not included. If you put a sleep in your code, your process gives up the CPU, and that time is not counted by clock(). To measure wall-clock time you have to use time(), gettimeofday(), or the more precise rdtsc assembly instruction.

Look at these questions:

clock() accuracy

Why is CLOCKS_PER_SEC not the actual number of clocks per second?

In the Qt sources, you can see that Qt uses gettimeofday() to implement QTime::currentTime() under Unix: https://github.com/radekp/qt/blob/master/src/corelib/tools/qdatetime.cpp, line 1854.

Author by Bobur

Updated on June 21, 2022

Comments

  • Bobur, almost 2 years ago

    I am calling argon2 - memory intensive hashing function in Qt and measuring its running time:

    ...
    QTime start = QTime::currentTime();
    // call hashing function
    QTime finish = QTime::currentTime();
    time = start.msecsTo(finish) / 1000.0;
    ...
    

    In argon2 library's test case, time is measured in another way:

    ...
    clock_t start = clock();
    // call hashing function
    clock_t finish = clock();
    time = ((double)finish - start) / CLOCKS_PER_SEC;
    ...
    

    I am calling the function exactly as they call it in their test case, but I am getting a number twice as big (twice as slow). Why? How do I measure a function's running time in Qt? What does clock() actually measure?

    Environment: VirtualBox, Ubuntu 14.04 64-bit, Qt 5.2.1, Qt Creator 3.0.1.

  • dtech, over 7 years ago
    That's largely untrue. It was true a long time ago when CPUs ran at a fixed frequency. Today they don't, and clock() is implemented in a different way, and is fairly accurate if you can settle with millisecond resolution.
  • e.jahandar, over 7 years ago
    @ddriver The C standard says clock() "returns the implementation's best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation." stackoverflow.com/a/9871772/4490542
  • Bobur, over 7 years ago
    @e.jahandar If I run two functions for, say, 1 second each, the first CPU-intensive (lots of computation) and the second memory-bound (lots of memory block reads/writes), and measure their time with clock(), do you mean that clock() will show a longer time for the CPU-intensive function than for the second one, even if the actual running time is the same?
  • Bobur, over 7 years ago
    @e.jahandar One more thing: you suggested I have to use time() or gettimeofday(), but I am using QTime::currentTime(), which (as you mentioned) uses gettimeofday(). Then it seems I'm doing the right thing, isn't it?
  • Bobur, over 7 years ago
    And a typo at the very end: it's line 1845.
  • e.jahandar, over 7 years ago
    @Bobur As you said, clock() doesn't count non-CPU operations, like IO operations; memory is different, since memory operations such as cache misses are counted. QTime::currentTime() is the right thing.