How many CPU cycles are needed for each assembly instruction?

Solution 1

Modern CPUs are complex beasts, using pipelining, superscalar execution, and out-of-order execution among other techniques which make performance analysis difficult... but not impossible!

While you can no longer simply add together the latencies of a stream of instructions to get the total runtime, you can still get an (often) highly accurate analysis of the behavior of some piece of code (especially a loop), as described below and in other linked resources.

Instruction Timings

First, you need the actual timings. These vary by CPU architecture, but the best resource currently for x86 timings is Agner Fog's instruction tables. Covering no fewer than thirty different microarchitectures, these tables list the instruction latency, which is the minimum/typical time that an instruction takes from inputs ready to output available. In Agner's words:

Latency: This is the delay that the instruction generates in a dependency chain. The numbers are minimum values. Cache misses, misalignment, and exceptions may increase the clock counts considerably. Where hyperthreading is enabled, the use of the same execution units in the other thread leads to inferior performance. Denormal numbers, NAN's and infinity do not increase the latency. The time unit used is core clock cycles, not the reference clock cycles given by the time stamp counter.

So, for example, the add instruction has a latency of one cycle, so a series of dependent add instructions, as shown, will have a latency of 1 cycle per add:

add eax, eax
add eax, eax
add eax, eax
add eax, eax  # total latency of 4 cycles for these 4 adds

Note that this doesn't mean that add instructions will only take 1 cycle each. For example, if the add instructions were not dependent, it is possible that on modern chips all 4 add instructions can execute independently in the same cycle:

add eax, eax
add ebx, ebx
add ecx, ecx
add edx, edx # these 4 instructions might all execute in parallel, in a single cycle

Agner provides a metric which captures some of this potential parallelism, called reciprocal throughput:

Reciprocal throughput: The average number of core clock cycles per instruction for a series of independent instructions of the same kind in the same thread.

For add this is listed as 0.25, meaning that up to 4 add instructions can execute every cycle (giving a reciprocal throughput of 1 / 4 = 0.25).

The reciprocal throughput number also gives a hint at the pipelining capability of an instruction. For example, on most recent x86 chips, the common forms of the imul instruction have a latency of 3 cycles, and internally only one execution unit can handle them (unlike add which usually has four add-capable units). Yet the observed throughput for a long series of independent imul instructions is 1/cycle, not 1 every 3 cycles as you might expect given the latency of 3. The reason is that the imul unit is pipelined: it can start a new imul every cycle, even while the previous multiplication hasn't completed.

This means a series of independent imul instructions can run at up to 1 per cycle, but a series of dependent imul instructions will run at only 1 every 3 cycles (since the next imul can't start until the result from the prior one is ready).
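
If you want to see this distinction on your own machine, a tiny microbenchmark makes it concrete. The sketch below is mine, not Agner's: it assumes GCC or Clang on x86-64, uses __rdtsc for timing (which counts reference ticks, not core clock cycles), and the iteration count is arbitrary. The dependent loop should come out roughly 3x slower per imul than the independent one:

// Sketch: measuring imul latency vs throughput (assumes GCC/Clang on x86-64).
// __rdtsc counts reference-clock ticks, not core cycles, so treat the absolute
// numbers loosely; the ~3:1 ratio is the interesting part.
#include <cstdio>
#include <x86intrin.h>   // __rdtsc

int main() {
    const long iters = 100000000;

    // Dependent chain: each imul waits for the previous result (latency-bound).
    unsigned long long t0 = __rdtsc();
    long x = 1;
    for (long i = 0; i < iters; ++i)
        asm volatile("imul %0, %0" : "+r"(x));
    double dep = double(__rdtsc() - t0) / iters;

    // Independent imuls: four separate chains, so they pipeline (throughput-bound).
    t0 = __rdtsc();
    long a = 1, b = 1, c = 1, d = 1;
    for (long i = 0; i < iters; ++i)
        asm volatile("imul %0, %0\n\t imul %1, %1\n\t imul %2, %2\n\t imul %3, %3"
                     : "+r"(a), "+r"(b), "+r"(c), "+r"(d));
    double indep = double(__rdtsc() - t0) / iters / 4;

    std::printf("dependent:   ~%.2f ticks per imul\n", dep);
    std::printf("independent: ~%.2f ticks per imul\n", indep);
    return int(x + a + b + c + d) & 1;   // keep the results live
}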

So with this information, you can start to see how to analyze instruction timings on modern CPUs.

Detailed Analysis

Still, the above is only scratching the surface. You now have multiple ways of looking at a series of instructions (latency or throughput) and it may not be clear which to use.

Furthermore, there are other limits not captured by the above numbers, such as the fact that certain instructions compete for the same resources within the CPU, and restrictions in other parts of the CPU pipeline (such as instruction decoding) which may result in a lower overall throughput than you'd calculate just by looking at latency and throughput. Beyond that, you have factors "beyond the ALUs" such as memory access and branch prediction: entire topics unto themselves - you can mostly model these well, but it takes work. For example, here's a recent post where the answer covers in some detail most of the relevant factors.

Covering all the details would increase the size of this already long answer by a factor of 10 or more, so I'll just point you to the best resources. Agner Fog has an Optimizing Assembly guide that covers in detail the precise analysis of a loop with a dozen or so instructions. See "12.7 An example of analysis for bottlenecks in vector loops" which starts on page 95 in the current version of the PDF.

The basic idea is that you create a table, with one row per instruction and mark the execution resources each uses. This lets you see any throughput bottlenecks. In addition, you need to examine the loop for carried dependencies, to see if any of those limit the throughput (see "12.16 Analyzing dependencies" for a complex case).
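
To give a flavor of what such a table looks like, here is a hedged sketch for a trivial sum loop. The port assignments in the comments are what I'd expect on a Skylake-like core (check Agner's tables for the CPU you actually care about), and the function name and the GCC-style inline asm are just illustrative:

// Sketch of a per-instruction resource table for a trivial sum loop
// (assumed Skylake-like port assignments; GCC/Clang extended asm;
// assumes a non-empty array of 8-byte elements).
long sum_asm(const long* p, const long* end) {
    long acc = 0;
    asm("0:\n\t"
        "add (%[p]), %[acc]\n\t" // 1 fused uop: load -> p2/p3, add -> p0/p1/p5/p6
        "add $8, %[p]\n\t"       // 1 uop: p0/p1/p5/p6
        "cmp %[end], %[p]\n\t"   // cmp+jne macro-fuse into 1 uop: p6 (taken branch)
        "jne 0b"
        : [acc] "+r"(acc), [p] "+r"(p)
        : [end] "r"(end)
        : "cc", "memory");
    // Per iteration: ~3 fused-domain uops (well under the 4-wide front end),
    // 1 load uop vs. 2 load ports, and no single ALU port oversubscribed.
    // The loop-carried chains (acc and the pointer) are both 1-cycle adds,
    // so the expected steady-state throughput is roughly 1 element per cycle.
    return acc;
}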

If you don't want to do it by hand, Intel has released the Intel Architecture Code Analyzer, which is a tool that automates this analysis. It currently hasn't been updated beyond Skylake, but the results are still largely reasonable for Kaby Lake since the microarchitecture hasn't changed much and therefore the timings remain comparable. This answer goes into a lot of detail and provides example output, and the user's guide isn't half bad (although it is out of date with respect to the newest versions).
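
For what it's worth, using IACA mostly amounts to wrapping the region you care about in its markers and pointing the tool at the compiled object file. A rough sketch follows (IACA_START/IACA_END come from the iacaMarks.h header that ships with the tool; the loop body and the command line are placeholders, and the exact flags vary between IACA versions):

// Sketch: marking a loop for IACA analysis (assumes iacaMarks.h from the IACA
// download is on the include path; the loop body is just a placeholder).
// Note: the markers are magic byte sequences that clobber ebx, so build this
// way only for analysis, not for a real run.
#include "iacaMarks.h"

long sum(const long* p, long n) {
    long acc = 0;
    for (long i = 0; i < n; ++i) {
        IACA_START           // start-of-analyzed-region marker inside the loop
        acc += p[i];
    }
    IACA_END                 // end-of-analyzed-region marker after the loop
    return acc;
}

// Then point the tool at the compiled object file, something like:
//   iaca -arch SKL sum.o
// (flag names differ between IACA versions; see the user's guide)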

Other sources

Agner usually provides timings for new architectures shortly after they are released, but you can also check out instlatx64 for similarly organized timings in the InstLatX86 and InstLatX64 results. The results cover a lot of interesting old chips, and new chips usually show up fairly quickly. The results are mostly consistent with Agner's, with a few exceptions here and there. You can also find memory latency and other values on this page.

You can even get the timing results directly from Intel in their Intel 64 and IA-32 Architectures Optimization Reference Manual, in Appendix C: INSTRUCTION LATENCY AND THROUGHPUT. Personally I prefer Agner's tables because they are more complete, often arrive before the Intel manual is updated, and are easier to use since they come in spreadsheet and PDF versions.

Finally, the x86 tag wiki has a wealth of resources on x86 optimization, including links to other examples of how to do a cycle-accurate analysis of code sequences.

If you want a deeper look into the type of "dataflow analysis" described above, I would recommend A Whirlwind Introduction to Data Flow Graphs.

Solution 2

Given pipelining, out-of-order processing, microcode, multi-core processors, etc., there's no guarantee that a particular section of assembly code will take exactly x CPU cycles / clock cycles / whatever cycles.

If such a reference exists, it will only be able to provide broad generalizations given a particular architecture, and depending on how the microcode is implemented you may find that the Pentium M is different from the Core 2 Duo, which is different from the AMD dual core, etc.

Note that this article was updated in 2000, and written earlier. Even the Pentium 4 is hard to pin down regarding instruction timing - the PIII, PII, and the original Pentium were easier, and the texts referenced were probably based on those earlier processors, which had more well-defined instruction timing.

These days people generally use statistical analysis for code timing estimation.

Solution 3

What the other answers say about it being impossible to accurately predict the performance of code running on a modern CPU is true, but that doesn't mean the latencies are unknown, or that knowing them is useless.

The exact latencies for Intel's and AMD's processors are listed in Agner Fog's instruction tables. See also the Intel® 64 and IA-32 Architectures Optimization Reference Manual, and Instruction latencies and throughput for AMD and Intel x86 processors (from Can Berk Güder's now-deleted link-only answer). AMD also has PDF manuals on their own website with their official values.

For (micro-)optimizing tight loops, knowing the latencies for each instruction can help a lot in manually trying to schedule your code. The programmer can make a lot of optimizations that the compiler can't (because the compiler can't guarantee it won't change the meaning of the program).

Of course, this still requires you to know a lot of other details about the CPU, such as how deeply pipelined it is, how many instructions it can issue per cycle, the number of execution units, and so on. And of course, these numbers vary for different CPUs. But you can often come up with a reasonable average that more or less works for all CPUs.
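
As one concrete illustration of the kind of hand scheduling this enables (my own sketch, not something from the manuals): a floating-point reduction is limited by the latency of the dependent adds, and the compiler usually won't split that chain for you because doing so changes the rounding order, but you can do it by hand:

// Sketch: hiding addition latency with multiple accumulators.
// The compiler won't normally do this for float (it changes rounding order),
// which is exactly the kind of latency-aware optimization described above.
#include <cstddef>

// One accumulator: every add waits on the previous one, so the loop runs at
// roughly one element per add *latency* (3-4 cycles on typical x86 cores).
float sum_naive(const float* p, std::size_t n) {
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i) s += p[i];
    return s;
}

// Four accumulators: four independent dependency chains, so the adds overlap
// and the loop approaches the add *throughput* instead.
float sum_unrolled(const float* p, std::size_t n) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += p[i + 0];
        s1 += p[i + 1];
        s2 += p[i + 2];
        s3 += p[i + 3];
    }
    for (; i < n; ++i) s0 += p[i];   // leftover elements
    return (s0 + s1) + (s2 + s3);
}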

It's worth noting, though, that it is a lot of work to optimize even a few lines of code at this level. And it is easy to make something that turns out to be a pessimization. Modern CPUs are hugely complicated, and they try extremely hard to get good performance out of bad code. But there are also cases they're unable to handle efficiently, or where you think you're being clever and writing efficient code, and it turns out to slow the CPU down.

Edit: Looking in Intel's optimization manual, table C-13: the first column is the instruction type, then there are a number of latency columns, one for each CPUID. The CPUID indicates which processor family the numbers apply to, and is explained elsewhere in the document. The latency specifies how many cycles it takes before the result of the instruction is available, so this is the number you're looking for.

The throughput columns show how many of this type of instructions can be executed per cycle.

Looking up xchg in this table, we see that depending on the CPU family, it takes 1-3 cycles, and a mov takes 0.5-1. These are for the register-to-register forms of the instructions, not for a lock xchg with memory, which is a lot slower. More importantly, it has hugely variable latency and impact on surrounding code (much slower when there's contention with another core), so looking only at the best case is a mistake. (I haven't looked up what each CPUID means, but I assume the 0.5 values are for the Pentium 4, which ran some components of the chip at double speed, allowing it to do things in half cycles.)

I don't really see what you plan to use this information for, but if you know the exact CPU family the code is running on, then adding up the latencies tells you the minimum number of cycles required to execute this sequence of instructions.

Solution 4

Measuring and counting CPU-cycles does not make sense on the x86 anymore.

First off, ask yourself: for which CPU are you counting cycles? A Core 2? An Athlon? A Pentium M? An Atom? All of these CPUs execute x86 code, but they all have different execution times. Execution time even varies between different steppings of the same CPU.

The last x86 where cycle-counting made sense was the Pentium Pro.

Also consider that inside the CPU most instructions are translated into microcode and executed out of order by an internal execution engine that does not even remotely look like an x86. The performance of a single CPU instruction depends on how many resources are available in that internal execution engine.

So the time for an instruction depends not only on the instruction itself but also on the surrounding code.

Anyway: you can estimate the throughput, resource usage, and latency of instructions for different processors. The relevant information can be found on the Intel and AMD sites.

Agner Fog has a very nice summary on his web site. See the instruction tables for latency, throughput, and uop counts. See the microarchitecture PDF to learn how to interpret those.

http://www.agner.org/optimize

But note that xchg-with-memory does not have predictable performance, even if you look at only one CPU model. Even in the no-contention case, with the cache line already hot in L1D cache, being a full memory barrier means its impact depends a lot on loads and stores to other addresses in the surrounding code.


Btw, since your example code is a basic building block for a lock-free data structure: have you considered using the compiler's built-in functions? On Win32 you can include intrin.h and use functions such as _InterlockedExchange.

That'll give you better execution time because the compiler can inline the instructions; inline assembler always forces the compiler to disable optimizations around the asm code.
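
For reference, here is a rough sketch of what the intrinsic version of the questioner's TestAndSet could look like. intrin.h and _InterlockedExchange are the real MSVC names mentioned above; the spinlock wrappers around them are just illustrative:

// Sketch: the same test-and-set using the compiler intrinsic instead of
// inline asm, so the compiler can inline it and optimize around it (MSVC/Win32).
#include <intrin.h>

inline long TestAndSet(long volatile* pTarget, long newValue) {
    // _InterlockedExchange compiles to an xchg with memory (which is
    // implicitly locked, so an explicit lock prefix is redundant)
    // and returns the previous value of *pTarget.
    return _InterlockedExchange(pTarget, newValue);
}

inline void SpinAcquire(long volatile* pLock) {
    while (TestAndSet(pLock, 1) != 0) {
        _mm_pause();   // optional: hint to the CPU that we're spinning
    }
}

inline void SpinRelease(long volatile* pLock) {
    _InterlockedExchange(pLock, 0);
}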

Solution 5

lock xchg eax, dword ptr [edx]

Note that the lock prefix locks memory for the memory fetch across all cores; this can take 100 cycles on some multi-core chips, and a cache line will also need to be flushed. It will also stall the pipeline. So I wouldn't worry about the rest.

So optimal performance comes back to tuning your algorithm's critical regions.

Note that on a single core you can optimize this by removing the lock, but it is needed for multi-core.


Comments

  • George2
    George2 almost 2 years

    I heard there is an Intel book online which describes the CPU cycles needed for a specific assembly instruction, but I cannot find it (after trying hard). Could anyone show me how to find the CPU cycles, please?

    Here is an example: in the code below, mov/lock is 1 CPU cycle, and xchg is 3 CPU cycles.

    // This part is Platform dependent!
    #ifdef WIN32
    inline int CPP_SpinLock::TestAndSet(int* pTargetAddress, int nValue)
    {
        __asm
        {
            mov edx, dword ptr [pTargetAddress]
            mov eax, nValue
            lock xchg eax, dword ptr [edx]
        }
        // mov = 1 CPU cycle
        // lock = 1 CPU cycle
        // xchg = 3 CPU cycles
    }
    
    #endif // WIN32
    

    BTW: here is the URL for the code I posted: http://www.codeproject.com/KB/threads/spinlocks.aspx

  • josesuero
    josesuero about 15 years
    Technically not entirely accurate. Each instruction does have a fixed duration/latency, as specified in Can Berk Güder's answer. For the reasons you point out, this alone is only part of the story though. Knowing the latency of each instruction doesn't tell you when it gets scheduled.
  • Adam Davis
    Adam Davis about 15 years
    @jalf - True, but as you agree the ultimate effect of these things means that there's no easy way to calculate the real timing of the instruction in advance of running them.
  • George2
    George2 about 15 years
    @Adam, 1. I only need to know the CPU cycles for a specific assembly instruction, not a "section of assembly code". 2. After learning your reply, I think for the same assembly instruction like mov, it will have different CPU cycles on different architectures, correct? (continued)
  • George2
    George2 about 15 years
    (continued) 3. Is there any reference material that shows, for a specific model of Intel processor, how many CPU cycles are needed for a specific assembly instruction?
  • George2
    George2 about 15 years
    @jalf, could you guide me on how to find how many CPU cycles are needed for an instruction like mov/xchg? I looked in the documents recommended by Can, but find it confusing to work out what exactly each column in the tables means. Thanks.
  • George2
    George2 about 15 years
    @Nils, I think you mean the overall elapsed time for an instruction varies depending on system resource status and scheduling. But I think once the instruction is executing, it will execute in a fixed number of CPU cycles for a specific architecture, correct?
  • George2
    George2 about 15 years
    @Nils, the code sample is just for my learning purposes, to learn about spin locks; for real programming practice I will definitely use the interlocked functions.
  • George2
    George2 about 15 years
    @jalf, could you guide me on how to find how many CPU cycles are needed for an instruction like mov/xchg? I looked in the Intel documents mentioned by others, but find it confusing to work out what exactly each column in the tables means. Thanks.
  • George2
    George2 about 15 years
    BTW: on agner.org where is the information shows CPU cycle needed for an assembly instruction? I looked some time in this site, but find nothing. Could you give 1-2 links please? :-)
  • josesuero
    josesuero about 15 years
    The latency columns show you how many cycles it takes from when the instruction is initiated until the result of it is available. Intel subdivides this into different CPUIDs, to show the values for various families of CPUs. xchg is listed as 1-3 cycles depending on the CPU, and mov as 0.5-1.
  • josesuero
    josesuero about 15 years
    Edited my post to add these details
  • Adam Davis
    Adam Davis about 15 years
    @George2 - 1. Unfortunately, due to pipelining, out-of-order execution, and caching, a given piece of assembly code will not always take the same amount of time. Even the documents provided by Can Berk Güder are statistical analyses.
  • Adam Davis
    Adam Davis about 15 years
    2. Yes, the same instruction will have different cycles on different architectures. Can Berk Güder's documents show this quite clearly - the Pentium M executes some instructions in fewer cycles than the newer Core 2, but it runs more slowly and has a shorter pipeline, so the throughput is lower.
  • Adam Davis
    Adam Davis about 15 years
    3. Can Berk Güder has provided good guides for this. If, after reading all this, you still don't understand how to use them, or why there's no simple way to determine the CPU cycle count, then there's not much more help we can provide. Your best bet is to run some tests.
  • Женя Борісевіч
    Женя Борісевіч about 12 years
    Does not answer the question.
  • Женя Борісевіч
    Женя Борісевіч about 12 years
    Does not answer the question.
  • Adam Davis
    Adam Davis about 12 years
    @Justicle The correct answer, for most programmers, is "run some tests and use statistical analysis". The answer for a processor architecture expert would fill a book. If you have specific critique for my answer, I'd welcome it. If you have a better answer, please post it. If you're trying to figure this stuff out yourself, consider posting a new question and we'll endeavor to help you understand the complexities of instruction timing.
  • Женя Борісевіч
    Женя Борісевіч about 12 years
    @AdamDavis stackoverflow.com/a/692727/94239 answers the question concisely as asked. The Intel guides do break down performance by model of processor (if you bother to look). Your answer is unhelpful to the learning environment of SO because it essentially says "don't even try".
  • Adam Davis
    Adam Davis about 12 years
    @Justicle I disagree. That answer provides the manuals one would look in to find the information, but it does not provide the information, or more importantly enough information to understand how to read the manual and find the information. I welcome you to read the manuals and provide the number of clock cycles those instructions will take on one of the processors in the Core line - your choice - and ignore the rest of the processors. If it's as simple as you say, and my answer is wrong, then you should be able to do so easily and quickly. Prove me wrong by providing an exact answer.
  • Peter Cordes
    Peter Cordes almost 7 years
    Last sentence is bogus: "then adding up the latency tells you the minimum number of cycles required to execute this sequence of instructions." No, because the two mov loads can run in parallel. Adding up latencies only works within a single dep chain, assuming no resource conflicts (execution ports being stolen by other instructions, delaying the critical path).
  • Ross Ridge
    Ross Ridge almost 7 years
    @PeterCordes It's even worse in the example case because the XCHG instruction (with the redundant LOCK prefix) has huge unknown latency that makes any minimum based on charts pretty bogus.
  • BeeOnRope
    BeeOnRope almost 7 years
    This answer is far too pessimistic. The overall idea that you can't just add together the number of cycles to get a total latency is correct, but that doesn't mean you just throw up your hands and say that modern CPUs are a black box. You just need to use a somewhat more complex model where instructions are nodes in a dependency graph, which have a latency and some throughput constraints shared with other instructions. Agner's guides go over it in detail (and he has the numbers for each instruction) and Intel's IACA implements the concept in software. Additional caveats apply.
  • BeeOnRope
    BeeOnRope almost 7 years
    Counting and adding up instruction timings is valid, it just requires a more complex model than the past. In fact, for many loops without external factors such as L1 misses such counting can get you cycle accurate results, or nearly so.
  • Adam Davis
    Adam Davis almost 7 years
    @BeeOnRope Awesome! I look forward to your answer to this question, and will eagerly upvote it!
  • BeeOnRope
    BeeOnRope almost 7 years
    Well it's easier just to criticize your answer than try to cover the deep topic of cycle-level analysis of x86 code in an answer, but I gave it a shot at least.
  • Adam Davis
    Adam Davis almost 7 years
    @BeeOnRope that's a great answer! Thanks for putting in the time and effort. I'm going to recommend the op accept yours.
  • Peter Cordes
    Peter Cordes almost 7 years
    not 0.33 as you might expect if one imul completed every cycle. I think you didn't finish editing this. non-pipelined would be 3c inverse throughput. But if one completed every cycle, that would make the throughput 1.
  • BeeOnRope
    BeeOnRope almost 7 years
    @PeterCordes - thanks for the note, I've edited it.
  • Peter Cordes
    Peter Cordes almost 7 years
    You don't say anything about ports. Instruction throughput numbers are not very useful if you don't know which ones compete for the same ports as others. The three main things to count are: latency of the critical path, fused-domain uop count, and total uops for each port. I added a section on that to a recent perf-analysis answer.
  • BeeOnRope
    BeeOnRope almost 7 years
    @PeterCordes It is intended to be covered by "certain instructions compete for the same execution units within the CPU", which uses "execution unit" to broadly cover all the capacity/specialization restrictions on scheduling such as ports, ALU/EUs, (those two being mostly interchangeable on recent archs), instruction-specific restrictions (lea for example). As I point out immediately after that, explaining how to do a full end-to-end analysis taking in all the factors would be very long and mostly just repeat other material that has already been prepared, some of which I link to.
  • Iwillnotexist Idonotexist
    Iwillnotexist Idonotexist almost 7 years
    @PeterCordes The LLVM guys apparently recently got intimate details from Intel about Sandy Bridge uop latencies and up, and the encoded knowledge will end up in LLVM's scheduler. We should watch this space: reviews.llvm.org/rL307529 "Also note that this patch will be followed by additional patches for the remaining target architectures HSW, IVB, BDW, SKL and SKX."
  • BeeOnRope
    BeeOnRope almost 7 years
    @IwillnotexistIdonotexist - I will watch it, but if it's just "each instruction latency, number of uOPs and used ports" then I hate to inform them that Agner has been providing that all along. Indeed, looking at the changes it seems that that's pretty much what it is: just bringing the machine model approximately up to date with what has been published since forever. It also looks like they are introducing some bugs, like around line 137 here they restrict a lot of operations to port 5 (e.g., logical ops) when the old definition seemed right.
  • Peter Cordes
    Peter Cordes almost 7 years
    @IwillnotexistIdonotexist: Cool, but it's weird that the first message makes a big deal out of adc m, r. Didn't LLVM know that it was 4 fused / 6 unfused uops already? It's been in Agner's stuff for years. (It is an interesting case, though. As @krazyglew explains, the uops from a single instruction can't depend on seeing the same TLB translation for the load and the store, so it has to do an extra ALU uop to get flags right in corner cases.)
  • BeeOnRope
    BeeOnRope almost 7 years
    @PeterCordes - based on my scan of their changes they are simply encoding what was available in Agner's stuff since forever. No real inside "scoop" (although perhaps there are some interesting finds if one were to go over it line-by-line).
  • Peter Cordes
    Peter Cordes almost 7 years
    @BeeOnRope: yeah, X86SchedSandyBridge.td:137 looks wrong for integer. (My browser expands the collapsed stuff after trying to go to the #label, but I think that's the line137 you were talking about.) It's only FP booleans that are limited to p5 in pre-SKL. If I'm reading it correctly, it seems to have vector integer shifts running on p5, too (when Agner says p0). Maybe someone looked at pslldq (which is a shuffle). IDK, it just makes no sense.
  • BeeOnRope
    BeeOnRope almost 7 years
    @PeterCordes yeah I was talking about the file X86SchedSandyBridge.td, line 136/7 - the stuff below the // Vector integer operations. comment. I didn't figure out how to actually link to a particular line. It seems wrong for shuffles which also had two ports in SnB, right?
  • Peter Cordes
    Peter Cordes almost 7 years
    Oh yeah, SnB runs integer shuffles (which don't have a 256b version) on 2 ports. Hmm, later in the same file, there's a lot of new lines, including ... (instregex "PSLLDri")>; in a port0 group. So I think it's sane after all.
  • Iwillnotexist Idonotexist
    Iwillnotexist Idonotexist almost 7 years
    @PeterCordes What I'm really hoping for is that some comment or commit message will leak out details we might not know, especially for the upcoming patches for later-generation architecture. But yes it's possible there won't be any real scoop beyond Agner Fog's crazy good work to begin with.
  • BeeOnRope
    BeeOnRope almost 7 years
    What I'd most like to see is some detailed modeling of branch prediction and detailed (i.e., how port binding works) uop scheduling. I'm not crossing my fingers though.
  • Iwillnotexist Idonotexist
    Iwillnotexist Idonotexist over 6 years
    @PeterCordes and BeeOnRope: Behold, the LLVM scheduler for Haswell was updated. It even gives breakdowns of how many uops each instruction generates and the set of ports those uops can be issued to.
  • BeeOnRope
    BeeOnRope over 6 years
    @IwillnotexistIdonotexist - huh. It's hard to tell exactly what changed since the diff is very large, but at least before they also had such port/uop breakdown in that file. Perhaps it has simply been made more accurate now. It would be nice to see the raw data shared by the "intel architects".
  • Dan Nissenbaum
    Dan Nissenbaum over 6 years
    Is there a comparable StackOverflow tag for x64?
  • BeeOnRope
    BeeOnRope over 6 years
    The x64 tag is x86-64, although I just use the x86 tag even for 64-bit code. @dan
  • maxschlepzig
    maxschlepzig about 4 years
    Regarding other sources: there is also uops.info/table.html - this interactive page certainly has a nice user interface