Very fast memcpy for image processing?

41,257

Solution 1

The SSE-Code posted by hapalibashi is the way to go.
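hapalibashi's code isn't reproduced here, but a copy routine in that spirit can be sketched with SSE2 intrinsics (this is an illustrative sketch, not the original post): aligned 128-bit loads plus non-temporal stores, which bypass the cache so a large image copy doesn't evict useful data.

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative SSE2 bulk copy (not hapalibashi's original code).
   Uses non-temporal (streaming) stores for the aligned bulk of the
   buffer and falls back to plain memcpy otherwise. */
static void sse2_copy(void *dst, const void *src, size_t n)
{
    char *d = dst;
    const char *s = src;

    /* Streaming stores require 16-byte alignment; otherwise punt. */
    if (((uintptr_t)d | (uintptr_t)s) & 15) {
        memcpy(d, s, n);
        return;
    }
    while (n >= 64) {  /* one 64-byte cache line per iteration */
        __m128i a = _mm_load_si128((const __m128i *)(s + 0));
        __m128i b = _mm_load_si128((const __m128i *)(s + 16));
        __m128i c = _mm_load_si128((const __m128i *)(s + 32));
        __m128i e = _mm_load_si128((const __m128i *)(s + 48));
        _mm_stream_si128((__m128i *)(d + 0),  a);
        _mm_stream_si128((__m128i *)(d + 16), b);
        _mm_stream_si128((__m128i *)(d + 32), c);
        _mm_stream_si128((__m128i *)(d + 48), e);
        s += 64; d += 64; n -= 64;
    }
    _mm_sfence();       /* order the streaming stores before continuing */
    memcpy(d, s, n);    /* copy the tail */
}
```

Compile with `gcc -O2 -msse2` (SSE2 is baseline on x86-64). The streaming stores only pay off for buffers larger than the cache; for small copies plain memcpy wins.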

If you need even more performance and don't shy away from the long and winding road of writing a device driver: all important platforms nowadays have a DMA controller that is capable of doing a copy job faster than, and in parallel with, anything CPU code could do.

That involves writing a driver, though. No big OS that I'm aware of exposes this functionality to user-side code, because of the security risks.

However, it may be worth it (if you need the performance) since no code on earth could outperform a piece of hardware that is designed to do such a job.

Solution 2

This question is four years old now and I'm a little surprised nobody has mentioned memory bandwidth yet. CPU-Z reports that my machine has PC3-10700 RAM. That means the RAM has a peak bandwidth (aka transfer rate, throughput, etc.) of 10700 MBytes/sec. The CPU in my machine is an i5-2430M, with a peak turbo frequency of 3 GHz.

Theoretically, with an infinitely fast CPU and my RAM, memcpy could go at 5300 MBytes/sec, i.e. half of 10700, because memcpy has to read from and then write to RAM. (Edit: as v.oddou pointed out, this is a simplistic approximation.)

On the other hand, imagine we had infinitely fast RAM and a realistic CPU: what could we achieve? Let's use my 3 GHz CPU as an example. If it could do a 32-bit read and a 32-bit write each cycle, then it could transfer 3e9 * 4 = 12000 MBytes/sec. That seems easily within reach for a modern CPU. Already, we can see that the code running on the CPU isn't really the bottleneck. This is one of the reasons modern machines have data caches.

We can measure what the CPU can really do by benchmarking memcpy when we know the data is cached. Doing this accurately is fiddly. I made a simple app that wrote random numbers into an array, memcpy'd them to another array, then checksummed the copied data. I stepped through the code in the debugger to make sure the clever compiler had not removed the copy. Altering the size of the array alters the cache behaviour: small arrays fit in the cache, big ones much less so. I got the following results:

  • 40 KByte arrays: 16000 MBytes/sec
  • 400 KByte arrays: 11000 MBytes/sec
  • 4000 KByte arrays: 3100 MBytes/sec

Obviously, my CPU can read and write more than 32 bits per cycle, since 16000 is more than the 12000 I calculated theoretically above. This means the CPU is even less of a bottleneck than I already thought. I used Visual Studio 2005, and stepping into the standard memcpy implementation, I can see that it uses the movdqa instruction on my machine, which moves 128 bits at a time.

The nice code hapalibashi posted achieves 4200 MBytes/sec on my machine - about 40% faster than the VS 2005 implementation. I guess it is faster because it uses the prefetch instruction to improve cache performance.

In summary, the code running on the CPU isn't the bottleneck and tuning that code will only make small improvements.

Solution 3

At any optimisation level of -O1 or above, GCC will use builtin definitions for functions like memcpy - with the right -march parameter (-march=pentium4 for the set of features you mention) it should generate pretty optimal architecture-specific inline code.

I'd benchmark it and see what comes out.
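One way to check what GCC actually emits is a tiny test file compiled to assembly (the file and function names here are just examples):

```c
#include <string.h>

/* Compile with:  gcc -O2 -march=pentium4 -S copytest.c
   and inspect copytest.s -- for calls with a known size, GCC
   expands its memcpy builtin inline rather than calling libc. */
void copy_block(unsigned char *dst, const unsigned char *src)
{
    memcpy(dst, src, 64);   /* known size: candidate for inline expansion */
}

/* Calling memcpy through a volatile function pointer defeats the
   builtin, which makes it easy to compare the inline expansion
   against the library version. */
void *(*volatile memcpy_ptr)(void *, const void *, size_t) = memcpy;

void copy_block_lib(unsigned char *dst, const unsigned char *src)
{
    memcpy_ptr(dst, src, 64);
}
```

Diffing the generated assembly for the two functions shows exactly what the builtin buys you on a given -march.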

Solution 4

If you're targeting Intel processors specifically, you might benefit from IPP. If you know it will run on an Nvidia GPU, perhaps you could use CUDA. In both cases it may be better to look wider than optimising memcpy(): both provide opportunities for improving your algorithm at a higher level. They are, however, both reliant on specific hardware.

Solution 5

If you're on Windows, use the DirectX APIs, which have GPU-optimised routines for graphics handling. (How fast could it be? Your CPU isn't loaded; do something else while the GPU munches the data.)

If you want to be OS agnostic, try OpenGL.

Do not fiddle with assembler: it is all too likely that you'll fail miserably to outperform software engineers who have been writing these libraries proficiently for 10+ years.

Author: horseyguy (C++ programmer)

Updated on November 19, 2020

Comments

  • horseyguy over 3 years

    I am doing image processing in C that requires copying large chunks of data around memory - the source and destination never overlap.

    What is the absolute fastest way to do this on the x86 platform using GCC (where SSE, SSE2 but NOT SSE3 are available)?

    I expect the solution will either use assembly or GCC intrinsics?

    I found the following link but have no idea whether it's the best way to go about it (the author also says it has a few bugs): http://coding.derkeiler.com/Archive/Assembler/comp.lang.asm.x86/2006-02/msg00123.html

    EDIT: note that a copy is necessary, I cannot get around having to copy the data (I could explain why but I'll spare you the explanation :))

  • horseyguy over 14 years
    I need it to be performed in MEMORY, that is, it cannot happen on the GPU. :) Also, I don't intend to outperform the library functions myself (hence why I ask the question here), but I'm sure there is somebody on Stack Overflow who can outperform the libs :) Further, library writers are typically restricted by portability requirements; as I stated, I only care about the x86 platform, so perhaps further x86-specific optimizations are possible.
  • peterchen over 14 years
    +1 since it's good first advice to be given - even though it does not apply in banister's case.
  • Andrew Bainbridge over 10 years
    I've just posted an answer that talks about the bandwidth of RAM. If what I say is true, then I don't think the DMA engine could achieve much beyond what the CPU can achieve. Have I missed something?
  • Andrew Bainbridge over 10 years
    I'm not sure it is good advice. A typical modern machine has about the same memory bandwidth for the CPU and GPU. For example, many popular laptops use Intel HD graphics, which uses the same RAM as the CPU. The CPU can already saturate the memory bus. For memcpy, I'd expect similar performance on the CPU or GPU.
  • v.oddou over 10 years
    Your thinking process is good. However, you're not accounting for how RAM is marketed: those are all quad-pumped figures, which don't correspond to the speed of a single channel. And it is also the speed before the bus; there are management overheads as well in the NUMA model that Core i7s/Opterons have.
  • Peter Cordes over 3 years
    Can you point out any specific DMA engine that might be found in a modern x86 system that can copy memory faster than a CPU core can using SSE or AVX? PCIe 3.0 with an x16 link is only capable of 15.75 GB/s, vs. dual-channel DDR4 2133 MT/s (e.g. a Skylake CPU from 2015) giving a theoretical bandwidth of 34GB/s. So any such DMA engine would need to be attached to the CPU more closely than that. Note that the memory controllers are built-in to the CPU, so any off-chip DMA engine has to get to memory via the CPU, on modern x86.
  • Peter Cordes over 3 years
    A single core of an Intel desktop/laptop chip can come close to saturating DRAM bandwidth (unlike a many-core Xeon). Why is Skylake so much better than Broadwell-E for single-threaded memory throughput? / Enhanced REP MOVSB for memcpy