Why are complicated memcpy/memset superior?

Solution 1

One thing I'd like to add to the other answers is that rep movs is not actually slow on all modern processors. For instance,

Usually, the REP MOVS instruction has a large overhead for choosing and setting up the right method. Therefore, it is not optimal for small blocks of data. For large blocks of data, it may be quite efficient when certain conditions for alignment etc. are met. These conditions depend on the specific CPU (see page 143). On Intel Nehalem and Sandy Bridge processors, this is the fastest method for moving large blocks of data, even if the data are unaligned.

[Highlighting is mine.] Reference: Agner Fog, Optimizing subroutines in assembly language: An optimization guide for x86 platforms, p. 156 (see also section 16.10, p. 143) [version of 2011-06-08].
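
For illustration, here is a minimal sketch of how a rep movsb-based copy can be expressed from C (GCC/Clang inline assembly for x86-64; the helper name is mine, not a library API). On CPUs where REP MOVS is well tuned, the processor picks the copy strategy internally:

    #include <stddef.h>

    /* Hypothetical helper: copy n bytes with REP MOVSB and let the CPU
     * choose its internal strategy. RDI, RSI, and RCX are the implicit
     * operands of REP MOVSB; the "memory" clobber tells the compiler
     * that memory is both read and written. */
    static void *repmovsb_copy(void *dst, const void *src, size_t n)
    {
        void *d = dst;
        __asm__ volatile("rep movsb"
                         : "+D"(d), "+S"(src), "+c"(n)
                         :
                         : "memory");
        return dst;
    }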

Solution 2

Cost.

(Note that memcpy is coming to ARM, see below.)

The cost of optimizing memcpy in your C library is fairly minimal, maybe a few weeks of developer time here and there. You'll have to make a new version every several years or so when processor features change enough to warrant a rewrite. For example, GNU's glibc and Apple's libSystem both have a memcpy which is specifically optimized for SSE3.

The cost of optimizing in hardware is much higher. Not only is it more expensive in terms of developer costs (designing a CPU is vastly more difficult than writing user-space assembly code), but it would increase the transistor count of the processor. That could have a number of negative effects:

  • Increased power consumption
  • Increased unit cost
  • Increased latency for certain CPU subsystems
  • Lower maximum clock speed

In theory, it could have an overall negative impact on both performance and unit cost.

Maxim: Don't do it in hardware if the software solution is good enough.

But, memcpy is coming to ARM. As processors get larger and larger, the incremental cost of adding more instructions gets lower and lower, relative to the existing cost of a core. From Arm A-Profile Architecture Developments 2021:

To address these concerns the 2021 extensions introduce new instructions specifically targeting the memcpy() and memset() family of functions.

The key concerns mentioned are that the complicated software implementations of memcpy, while fast, may need to be rewritten for different microarchitectures in order to get better performance. They also need to account for alignment and size in different ways. Having a fast hardware implementation means that memcpy can be inlined and get good performance across different microarchitectures.
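
As a sketch of what that looks like (assuming an AArch64 compiler targeting FEAT_MOPS, e.g. -march=armv8.8-a; the wrapper name is mine, not a library API), the copy is split into prologue/main/epilogue instructions that the hardware sizes for the current microarchitecture:

    #include <stddef.h>

    /* Hypothetical wrapper around the Armv8.8-A CPYF* instructions
     * (forward-only memcpy). CPYFP/CPYFM/CPYFE must be issued as a
     * triple using the same registers; the CPU decides internally how
     * much of the copy each of the three performs. */
    static void *mops_memcpy(void *dst, const void *src, size_t n)
    {
        void *d = dst;
        __asm__ volatile(
            "cpyfp [%0]!, [%1]!, %2!\n\t"
            "cpyfm [%0]!, [%1]!, %2!\n\t"
            "cpyfe [%0]!, [%1]!, %2!"
            : "+r"(d), "+r"(src), "+r"(n)
            :
            : "memory");
        return dst;
    }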

Note: The bug you've cited is not really a bug in glibc w.r.t. the C specification. It's more complicated. Basically, the glibc folks say that memcpy behaves exactly as advertised in the standard, and some other folks are complaining that memcpy should be aliased to memmove.
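
For context, the difference is only observable with overlapping buffers, where the standard makes memcpy undefined and requires memmove. This illustrative snippet shows the kind of case that broke:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[] = "abcdef";
        /* The regions [buf, buf+4) and [buf+2, buf+6) overlap, so memcpy
         * is undefined behavior here per the C standard; memmove is
         * required. Code that happened to work with a forward-copying
         * memcpy broke when glibc started copying backwards. */
        memmove(buf + 2, buf, 4);  /* prints "ababcd" */
        printf("%s\n", buf);
        return 0;
    }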

Time for a story: It reminds me of a complaint that a Mac game developer had when he ran his game on a 603 processor instead of a 601 (this is from the 1990s). The 601 had hardware support for unaligned loads and stores with minimal performance penalty. The 603 simply generated an exception; by offloading to the kernel I imagine the load/store unit could be made much simpler, possibly making the processor faster and cheaper in the process. The Mac OS nanokernel handled the exception by performing the required load/store operation and returning control to the process.

But this developer had a custom blitting routine to write pixels to the screen which did unaligned loads and stores. Game performance was fine on the 601 but abominable on the 603. Most other developers didn't notice if they used Apple's blitting function, since Apple could just reimplement it for newer processors.

The moral of the story is that better performance comes both from software and hardware improvements.

In general, the trend seems to be in the opposite direction from the kind of hardware optimizations mentioned. While in x86 it's easy to write memcpy in assembly, some newer architectures offload even more work to software. Of particular note are the VLIW architectures: Intel IA64 (Itanium), the TI TMS320C64x DSPs, and the Transmeta Efficeon are examples. With VLIW, assembly programming gets much more complicated: you have to explicitly select which execution units get which commands and which commands can be done at the same time, something that a modern x86 will do for you (unless it's an Atom). So writing memcpy suddenly gets much, much harder.

These architectural tricks allow you to cut a huge chunk of hardware out of your microprocessors while retaining the performance benefits of a superscalar design. Imagine having a chip with a footprint closer to an Atom but performance closer to a Xeon. I suspect the difficulty of programming these devices is the major factor impeding their wider adoption.

Solution 3

General Purpose vs. Specialized

One factor is that those instructions (rep prefix/string instructions) are general purpose: they'll handle any alignment and any number of bytes or words, and they have certain behavior relative to the cache and/or the state of registers, i.e. well-defined side effects that can't be changed.

The specialized memory copy may only work for certain alignments, sizes, and may have different behavior vs. the cache.

The hand-written assembly (either in the library or one developers may implement themselves) may outperform the string instruction implementation for the special cases where it is used. Compilers will often have several memcpy implementations for special cases, and then the developer may have a "very special" case where they roll their own.

It doesn't make sense to do this specialization at the hardware level. Too much complexity (= cost).

The law of diminishing returns

Another way to think about it is that when new features are introduced, e.g. SSE, the designers make architectural changes to support these features, e.g. a wider or higher-bandwidth memory interface, changes to the pipeline, new execution units, etc. The designer is unlikely at this point to go back to the "legacy" portion of the design to try and bring it up to speed with the latest features; that would be somewhat counter-productive. If you follow this philosophy, you may ask why we need SIMD in the first place: can't the designer just make the narrow instructions work as fast as SIMD for those cases where someone uses SIMD? The answer is usually that it's not worth it, because it is easier to throw in a new execution unit or new instructions.

Solution 4

Once upon a time rep movsb was the optimal solution.

The original IBM PC had an 8088 processor with an 8-bit data bus and no caches. Back then, the fastest program was generally the one with the fewest instruction bytes. Having special instructions helped.

Nowadays, the fastest program is the one that can use as many CPU features as possible in parallel. Strange as it might seem at first, having code with many simple instructions can actually run faster than a single do-it-all instruction.
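
As an illustration (a sketch under simplifying assumptions: SSE2, 16-byte-aligned pointers, size a multiple of 64 bytes; real library code adds alignment and tail handling), an unrolled loop of simple loads and stores gives the CPU independent operations that it can overlap:

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stddef.h>

    /* Hypothetical helper: copy n bytes (n a multiple of 64, both
     * pointers 16-byte aligned) using four independent 16-byte loads
     * and stores per iteration. The independent operations can execute
     * in parallel on a superscalar core. */
    static void sse2_copy64(void *dst, const void *src, size_t n)
    {
        __m128i *d = (__m128i *)dst;
        const __m128i *s = (const __m128i *)src;
        for (size_t i = 0; i < n / 64; ++i) {
            __m128i r0 = _mm_load_si128(s + 0);
            __m128i r1 = _mm_load_si128(s + 1);
            __m128i r2 = _mm_load_si128(s + 2);
            __m128i r3 = _mm_load_si128(s + 3);
            _mm_store_si128(d + 0, r0);
            _mm_store_si128(d + 1, r1);
            _mm_store_si128(d + 2, r2);
            _mm_store_si128(d + 3, r3);
            s += 4;
            d += 4;
        }
    }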

Intel and AMD keep the old instructions around mainly for backward compatibility.

Solution 5

In embedded systems, it's common to have specialized hardware that does memcpy/memset. It's not normally done as a special CPU instruction, rather it's a DMA peripheral that sits on the memory bus. You write a couple of registers to tell it the addresses, and HW does the rest. It doesn't really warrant a special CPU instruction since it's really just a memory interface issue that doesn't really need to involve the CPU.
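
As a sketch (the base address, register layout, and control bits below are invented for illustration; a real part's datasheet defines the actual interface), driving such a DMA engine from C looks roughly like this:

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA engine. */
    #define DMA_BASE 0x40020000u

    typedef struct {
        volatile uint32_t src;   /* source address */
        volatile uint32_t dst;   /* destination address */
        volatile uint32_t len;   /* transfer length in bytes */
        volatile uint32_t ctrl;  /* bit 0: start, bit 1: busy */
    } dma_regs_t;

    #define DMA ((dma_regs_t *)DMA_BASE)

    static void dma_memcpy(uint32_t dst, uint32_t src, uint32_t len)
    {
        DMA->src  = src;
        DMA->dst  = dst;
        DMA->len  = len;
        DMA->ctrl = 1u;              /* kick off the transfer */
        while (DMA->ctrl & 2u) { }   /* spin until hardware clears busy */
    }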


Comments

  • Yakov Galka
    Yakov Galka almost 2 years

When debugging, I frequently stepped into the handwritten assembly implementation of memcpy and memset. These are usually implemented with streaming instructions (if available), loop unrolling, alignment optimization, etc. I also recently encountered this 'bug' due to memcpy optimization in glibc.

    The question is: why can't the hardware manufacturers (Intel, AMD) optimize the specific case of

    rep stos
    

    and

    rep movs
    

to be recognized as such, and perform the fastest fill and copy possible on their own architecture?

    • Billy ONeal
      Billy ONeal over 12 years
      Cop out answer: Because they just don't, and as a result, nobody writes code like that, and therefore there's no reason for them to do so... (cycle continues)
    • Yakov Galka
      Yakov Galka over 12 years
      @BillyONeal: I don't think so. For every new feature they add, there's still no code written that uses it.
    • Billy ONeal
      Billy ONeal over 12 years
      Yes, but when a new feature is added, it gets added to show better performance in one area or another. Optimizing this doesn't make sense for CPU vendors because compilers won't emit code like it.
    • Yakov Galka
      Yakov Galka over 12 years
      @BillyONeal: in fact they do. The intrinsic version of memcpy on MSVC expands to exactly this code.
    • Mike Kwan
      Mike Kwan over 12 years
      CPU manufacturers started making a move towards a more RISC architecture. For that reason, software moves with it. I can't tell you why they made the move in that direction.
    • Guy Sirton
      Guy Sirton over 12 years
      @ybungalobill: MSVC has different intrinsic versions depending on the usage (e.g. is the size a constant multiple of 16 bytes) and the compile options.
    • user1703401
      user1703401 over 12 years
      It doesn't matter much these days, the memory bus bandwidth is the real throttle.
    • Guy Sirton
      Guy Sirton over 12 years
@ybungalobill: The question title doesn't exactly match what you're asking in the body... Are you asking why the complicated implementations are superior, or why chip designers can't make the string instructions work as fast as the complicated implementations? These are related but slightly different questions.
    • Guy Sirton
      Guy Sirton over 12 years
      @HansPassant: I think the streaming loads/stores and prefetch instructions are still useful as they allow better control over the cache. Because of the memory bandwidth being a bottleneck you want to have control over what is in the cache to avoid needing to go out again to memory... So you may want to copy without polluting the cache.
    • Yakov Galka
      Yakov Galka over 12 years
      @GuySirton: the problem here is that when the CPU encounters a rep stos it exactly knows what memory will be affected and what the side effects will be. It can do a 100% correct prediction. SIMD is more general, and thus such prediction can't be done. For example when the memcpy enters the internal loop, the CPU can't know for sure that it will need the block of exactly N bytes starting at P, accessed sequentially once and only once, and nothing more.
    • Peter Cordes
      Peter Cordes over 8 years
Intel Ivybridge and later do have optimized rep stos and rep movs. Intel calls this "Fast String" support, or ERMSB (Enhanced Rep Movs/StosB). See Intel's optimization manual (linked from stackoverflow.com/tags/x86/info) for more info. This is still a good question of why previous CPUs' microcoded implementations of rep movs weren't as fast as an SSE loop doing 16B loads/stores.
  • Yakov Galka
    Yakov Galka over 12 years
I have the following problems with this answer: "those instructions are general purpose" - so is the memcpy in the library. It can work with any alignment, etc... "certain behavior relative to the cache and or state of registers" - the semantics of "rep stos" are much simpler than the SIMD-based memcpy. You know the side effects even before you execute the command: it's just esi += ecx, edi += ecx, ecx = 0. Nothing else is changed, as opposed to the SIMD version where you actually utilize almost everything on the CPU.
  • Yakov Galka
    Yakov Galka over 12 years
    "The specialized memory copy..." I don't talk about the specialized. "The hand written assembly may outpeform the library implementation ..." what? The library implementation is the handwritten.
  • Guy Sirton
    Guy Sirton over 12 years
    @ybungalobill: Some people write their own mem copy functions and then there are specialized library implementations. It wasn't clear to me which ones you were talking about. I'll clarify my answer.
  • Voo
    Voo over 12 years
    Good answer. Or how I'd summarize it: "Don't do things in hardware, if you can do them just as efficiently in software". About the glibc thingy: Torvalds is clearly the practical one here - users don't care why something is not working. And I somehow doubt that memmove has a noticeable performance hit when compared to memcpy these days.. will have to test that.
  • Dietrich Epp
    Dietrich Epp over 12 years
    @ybungalobill: I was simply adding context, since even though you might know the story of glibc memcpy, visitors to the site might not.
  • Guy Sirton
    Guy Sirton over 12 years
@ybungalobill: my point is that the side effects and legacy requirements may prevent a high speed implementation. esi += ecx and all the logic around it may make the copy slower vs. instructions that don't require this. SIMD units are actually more of a standalone mini-processor; they have their own registers etc., so they don't interact as closely with the state of the rest of the processor. (processor dependent, YMMV, but SSE and NEON are pretty much like that).
  • Dietrich Epp
    Dietrich Epp over 12 years
    @Voo: Note that Adobe fixed the flash plugin and very few people seem to mind these days that memcpy still isn't memmove. I'm not trying to take sides.
  • Krazy Glew
    Krazy Glew over 10 years
    REP MOVS uses a cache protocol feature that is not available to regular code. Basically like SSE streaming stores, but in a manner that is compatible with normal memory ordering rules, etc. // The "large overhead for choosing and setting up the right method" is mainly due to the lack of microcode branch prediction. I have long wished that I had implemented REP MOVS using a hardware state machine rather than microcode, which could have completely eliminated the overhead.
  • Krazy Glew
    Krazy Glew over 10 years
@GuySirton: I like your point about "(not) going back to the legacy portion of the design". It is very accurate. // It is overall true that a well tuned REP MOVS can beat a well tuned memcopy or memmove - especially if the decision overhead can be reduced - if only because STA_WC is a feature not available to normal programmers. But... somebody at Intel would have to budget for the effort to tune REP MOVS on every processor generation. Whereas, if you don't retune REP MOVS, then the user has to retune - but later. // i.e. separate memcopy is JIT - don't retune until problem found.
  • Krazy Glew
    Krazy Glew over 10 years
    Tune memcopy today, and your tuned version can be deployed to every existing PC in the market. Tune REP MOVS, and it can only be deployed to future CPUs. (Could use microcode patch, but that is mainly used for security holes. Plus, ucode patch has overhead.)
  • Krazy Glew
    Krazy Glew over 10 years
    By the way, I have long said that one of the things that hardware can do better/faster than software is complex multiway branches.
  • Peter Cordes
    Peter Cordes over 8 years
    @KrazyGlew: I think you're talking about the IvB and later Fast String ops feature, aka ERMSB (Enhanced Rep Movs/StosB). Even then, there's a feature bit you can disable in case you need to run binaries that depend on the ordering of writes within a single rep movs, instead of the now-recommended method of doing a separate store to a synchronization flag after a memset/memcpy.
  • Krazy Glew
    Krazy Glew over 8 years
    @PeterCordes: intel x86 have had "fast strings" since the Pentium Pro (P6) in 1996, which I supervised. The P6 fast strings took REP MOVSB and larger, and implemented them with 64 bit microcode loads and stores and a no-RFO cache protocol. They did not violate memory ordering, unlike ERMSB in iVB.
  • Krazy Glew
    Krazy Glew over 8 years
    Btw: I supervised fast strings on P6, certainly not the entire P6.
  • Krazy Glew
    Krazy Glew over 8 years
Rewriting a comment that had an auto-correct typo: @PeterCordes: the big weaknesses of doing fast strings in microcode were (a) microcode branch mispredictions, and (b) the microcode fell out of tune with every generation, getting slower and slower until somebody got around to fixing it. Just like a library memcopy falls out of tune. I suppose that it is possible that one of the missed opportunities was to use 128-bit loads and stores when they became available, and so on.
  • Peter Cordes
    Peter Cordes over 8 years
@KrazyGlew: tyvm for that insight into what happens under the hood. I hadn't realized pre-IvB string ops avoided RFO. Also interesting to hear that microcoded instructions don't always get tweaked for each CPU generation. I'm not a hardware designer myself, but I casually follow it the way some people follow sports teams. I've seen your name come up on realworldtech.com. (reposting to re-order comments into correct order after your repost. I did eventually decipher that autocorrect, but it was confusing at first :P)
  • Krazy Glew
    Krazy Glew over 8 years
    In retrospect, I should have written a self-tuning infrastructure, to get reasonably good microcode on every generation. But that would not have helped use new, wider, loads and stores, when they became available. // The Linux kernel seems to have such an autotuning infrastructure, that is run on boot. // Overall, however, I advocate hardware state machines that can smoothly transition between modes, without incurring branch mispredictions. // It is debatable whether good microcode branch prediction would obviate this.
  • supercat
    supercat over 6 years
    @KrazyGlew: It seems strange to me that "rep movs" wouldn't be a constant focus of hardware optimization. Given the amount of other logic on a modern x86 CPU, the amount required to ensure that "rep movs" was never far from being optimal would seem pretty small. If user code wanting a fast memcpy has to lead off with logic to select the optimal approach, it will be difficult for hardware to completely optimize away such tests. If user code can simply use rep movs_ and let hardware worry about the optimal approach, hardware won't have to optimize away such tests.
  • Krazy Glew
    Krazy Glew over 6 years
    @supercat: it is possible that Intel's latest x86 CPUs have made the investment so that REP MOVSx is optimal in all circumstances. But I doubt it.
  • Krazy Glew
    Krazy Glew over 6 years
    @supercat: I think HW state machines are required, possibly at several levels. Ideally, there would be a special bus transaction, "move from A to B", so that data only crosses any particular data bus once - possibly with lookaside caching. I used to have an agenda for block memory operations on a web page - P6 fast strings was just the first few steps.
  • Krazy Glew
    Krazy Glew over 6 years
    If hardware/firmware tunes REP MOVS, then every app on the new processor that uses REP MOVS will benefit. Whereas any software that has tuned memcopy by itself will not benefit from the tuned memcopy. If such software used "SIMD" loads and stores, ideally 512-bit full cache line memops possibly with vector masks, then it may benefit from future hardware optimization to those.
  • Krazy Glew
    Krazy Glew over 6 years
If you have a machine with binary translation/optimization, the optimizer may be able to recognize old versions of memcopy and translate them into new versions.
  • supercat
    supercat over 6 years
    @KrazyGlew: Even if HW doesn't use the absolutely-optimal sequence of bus operations, I would think it should be possible to keep it close enough to the optimal instruction sequence that hardware would support that it would be reasonable for programmers to use "rep movs" across the board.
  • Krazy Glew
    Krazy Glew over 6 years
@supercat: I agree. Otherwise I would not have done "fast strings" in the first place. But it was not maintained for a while. That may have changed - one can only hope.
  • Yakov Galka
    Yakov Galka over 4 years
    @KrazyGlew: I wish you could write all that as an answer from somebody who actually worked on it. Other answers here are just speculations.
  • Yakov Galka
    Yakov Galka over 2 years
    @DietrichEpp: you may want to rewrite the answer. On the one hand you say that ARM adds an optimized memcpy, on the other that 'the trend is in the opposite direction'. Itanium is long dead, and @PhiS answer tells that Intel indeed implemented this optimization a long time ago. To me it seems that everybody accepted that CPU support for fast memcpy/memset is a desired feature, and that memcpy implementations were lagging behind the hardware due to the added complexity of choosing the right path based on the runtime architecture.
  • Dietrich Epp
    Dietrich Epp over 2 years
    @YakovGalka: I've already incorporated that into the answer, and explained the reasoning. I generally do rewrites for clarity once an answer has 100 upvotes, as a rule of thumb--if it's reaching that many people, then rewriting it is worth the effort. This answer isn't reaching so many people--only 31 total votes (up and down) over 8 years.