Performance difference between IPC shared memory and threads memory


Solution 1

1) shmat() maps the local process virtual memory to the shared segment. This translation has to be performed for each shared memory address and can represent a significant cost, relative to the number of shm accesses. In a multi-threaded application there is no extra translation required: all VM addresses are converted to physical addresses, as in a regular process that does not access shared memory.

There is no overhead compared to regular memory access, aside from the initial cost of setting up the shared pages - populating the page table in the process that calls shmat() - which in most flavours of Linux is one page-table entry (4 or 8 bytes) per 4 KB of shared memory.

For all practical purposes, the cost is the same whether the pages are allocated shared or private to the same process.
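As a concrete illustration (the key, size and flags below are assumptions for the sketch, not something from the answer): the one-time cost is paid in shmget()/shmat(), and after that the pointer is used like any other memory.

```c
/* Minimal sketch of attaching a System V shared memory segment.
 * IPC_PRIVATE and the 4 KB size are illustrative choices. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* shmat() is where the page-table entries for this process get set up. */
    char *p = shmat(shmid, NULL, 0);
    if (p == (void *)-1) { perror("shmat"); return 1; }

    /* From here on, access is an ordinary load/store through the MMU;
     * there is no extra per-access translation. */
    strcpy(p, "hello");
    printf("%s\n", p);

    shmdt(p);
    shmctl(shmid, IPC_RMID, NULL);  /* mark the segment for removal */
    return 0;
}
```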

2) The shared memory segment must be maintained somehow by the kernel. I do not know what that 'somehow' means in terms of performance, but for example, when all processes attached to the shm are taken down, the shm segment is still up and can eventually be re-accessed by newly started processes. There must be at least some degree of overhead related to the things the kernel needs to check during the lifetime of the shm segment.

Whether shared or not, each page of memory has a "struct page" attached to it, with some data about the page. One of the items is a reference count. When a page is given out to a process [whether it is through "shmat" or some other mechanism], the reference count is incremented. When it is freed through some means, the reference count is decremented. If the decremented count is zero, the page is actually freed - otherwise "nothing more happens to it".

The overhead is basically zero compared to any other allocated memory. The same mechanism is used for pages for other purposes anyway - say, for example, you have a page that is also in use by the kernel and your process dies: the kernel needs to know not to free that page until it has been released by the kernel as well as by the user process.

The same thing happens when a process is forked. When a process is forked, the entire page table of the parent process is essentially copied into the child process, and all pages are made read-only. Whenever a write happens, the kernel takes a fault, which leads to that page being copied - so there are now two copies of that page, and the process doing the writing can modify its copy without affecting the other process. Once the child (or parent) process dies, all pages still owned by BOTH processes [such as the code space that never gets written, and probably a bunch of common data that never got touched, etc.] obviously can't be freed until BOTH processes are "dead". So again, the reference-counted pages come in useful here: we only count down the ref-count on each page, and when the ref-count is zero - that is, when all processes using that page have freed it - the page is actually returned as a "useful page".
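A tiny sketch of the copy-on-write behaviour described above (the buffer size is an arbitrary assumption):

```c
/* After fork(), parent and child share the same physical pages read-only;
 * only the pages the child actually writes get copied. */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    size_t len = 64 * 1024 * 1024;          /* 64 MB, arbitrary */
    char *buf = malloc(len);
    if (!buf) return 1;
    memset(buf, 1, len);                    /* parent faults in every page */

    pid_t pid = fork();
    if (pid == 0) {
        memset(buf, 2, len / 2);            /* COW faults copy only these pages */
        _exit(0);
    }
    waitpid(pid, NULL, 0);                  /* child's copied pages are freed here;
                                               the still-shared ones live on until
                                               the parent exits (ref-count drops to 0) */
    free(buf);
    return 0;
}
```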

Exactly the same thing happens with shared libraries. If one process uses a shared library, its pages will be freed when that process ends. But if two, three or 100 processes use the same shared library, the code obviously has to stay in memory until no process needs those pages any more.

So, basically, all pages in the whole system are already reference counted by the kernel. There is very little overhead.

Solution 2

If one considers what happens at the microelectronics level when two threads or processes access the same memory, there are some interesting consequences.

The point of interest is how the CPU architecture allows multiple cores (and thus multiple threads and processes) to access the same memory. This happens through the L1 caches, then L2, L3 and finally DRAM, and an awful lot of coordination has to go on between the controllers of all of that.

For a machine with two or more CPUs, that coordination takes place over a serial bus between the packages. If one compares the bus traffic generated when two cores access the same memory with the traffic generated when the data is copied to another piece of memory, it is about the same amount.

So depending on where in a machine the two threads are running, there can be little speed penalty to copying the data vs sharing it.

Copying might be 1) a memcpy, 2) a pipe write, 3) an internal DMA transfer (Intel chips can do this these days).

An internal DMA is interesting because it requires zero CPU time (a naive memcpy is just a loop and does take CPU time). So if one can copy data instead of sharing it, and do so with an internal DMA, it can be just as fast as sharing the data.

The penalty is more RAM, but the payback is that programming styles such as the Actor model become available. This is a way to remove all the complexity of guarding shared memory with semaphores from your program.
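For instance, here is a rough sketch of that copy-and-message-pass style using a plain pipe between a parent and a forked child; the message text is purely illustrative:

```c
/* "Copy instead of share": the parent sends a message to the child over a
 * pipe, so no memory is shared and no locking is needed. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                      /* child: the "actor" receiving messages */
        char msg[64];
        ssize_t n = read(fds[0], msg, sizeof msg - 1);
        if (n > 0) { msg[n] = '\0'; printf("child got: %s\n", msg); }
        _exit(0);
    }

    /* parent: the kernel copies the bytes into the pipe buffer and out again,
     * instead of both processes touching the same physical pages */
    const char *msg = "work item #1";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}
```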

Solution 3

Setting up the shared memory requires some extra work by the kernel, so attaching/detaching a shared memory region to/from your process may be slower than a regular memory allocation (or it may not be... I've never benchmarked that). But once it's attached to your process's virtual memory map, shared memory is no different from any other memory for accesses, except in the case where you have multiple processors contending for the same cache-line-sized chunks. So, in general, shared memory should be just as fast as any other memory for most accesses, but, depending on what you put there and how many different threads/processes access it, you can get some slowdown for specific usage patterns.
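To illustrate the cache-line contention caveat, here is a minimal sketch that pads two counters onto separate cache lines (assuming a 64-byte line, which is typical on x86 but not guaranteed), so that two threads or processes updating them through a shared mapping do not keep invalidating each other's line:

```c
#include <stdalign.h>
#include <stdio.h>

/* Two counters meant to be updated by different threads or processes through
 * a shared mapping.  Aligning each to 64 bytes (an assumed cache-line size)
 * keeps them on separate cache lines, avoiding "false sharing". */
struct counters {
    alignas(64) volatile long a;   /* touched only by worker A */
    alignas(64) volatile long b;   /* touched only by worker B */
};

int main(void)
{
    printf("sizeof(struct counters) = %zu\n", sizeof(struct counters));
    return 0;
}
```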

Solution 4

Aside from the costs of attaching (shmat) and detaching (shmdt) the shared memory, access should be equally fast. In other words, it should be as fast as the hardware allows. There should be no overhead in the form of an extra layer for each access.

Synchronization should be equally fast, too. For example, on Linux a futex can be used for both processes and threads. Atomic variables should also work fine.
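One possible illustration (my own sketch, not something from the answer): a pthread mutex marked PTHREAD_PROCESS_SHARED and placed in a MAP_SHARED mapping can be locked by both a parent and a forked child; on Linux this is implemented on top of futexes. Compile with -pthread.

```c
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared { pthread_mutex_t lock; int counter; };

int main(void)
{
    /* Anonymous MAP_SHARED memory is visible to both parent and child. */
    struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (s == MAP_FAILED) { perror("mmap"); return 1; }

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->lock, &attr);

    if (fork() == 0) {                      /* child increments under the lock */
        pthread_mutex_lock(&s->lock);
        s->counter++;
        pthread_mutex_unlock(&s->lock);
        _exit(0);
    }

    pthread_mutex_lock(&s->lock);           /* parent increments under the lock */
    s->counter++;
    pthread_mutex_unlock(&s->lock);

    wait(NULL);
    printf("counter = %d\n", s->counter);   /* always 2 */
    return 0;
}
```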

As long as the attaching/detaching costs do not dominate, there should be no disadvantage to using processes. Threads are simpler, however, and if your processes are mostly short-lived, the attaching/detaching overhead might be an issue. But since the cost of creating the processes is high anyway, this is not a likely scenario if you are concerned about performance.

Finally, this discussion might be interesting: Are shmat and shmdt expensive? (Caveat: it is quite outdated. I don't know if the situation has changed since.)

This related question could also be helpful: What's the difference between shared memory for IPCs and threads' shared memory? (The short answer: Not much.)

Solution 5

The cost of shared memory is proportional to the number of "meta" changes to it: allocation, deallocation, process exit, ...

The number of memory accesses does not play a role. An access to a shared segment is as fast as an access anywhere else.

The CPU performs the page-table mapping either way. Physically, the CPU does not know that the mapping is shared.

If you follow the best practice, which is to change the mapping rarely, you get basically the same performance as with process-private memory.
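If you want to check that claim on your own machine, a rough (and admittedly naive) microbenchmark along the following lines would compare a pass over a private malloc'd buffer with a pass over a shmat'ed segment of the same size; the buffer size and timing approach are assumptions for illustration only:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Time one read-modify-write pass over a buffer. */
static double touch(volatile char *p, size_t len)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < len; i++) p[i]++;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    size_t len = 64 * 1024 * 1024;          /* 64 MB, arbitrary */

    char *priv = malloc(len);
    memset(priv, 0, len);                   /* fault the private pages in first */

    int shmid = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
    char *shm = shmat(shmid, NULL, 0);
    memset(shm, 0, len);                    /* fault the shared pages in first */

    printf("private: %.3fs  shared: %.3fs\n", touch(priv, len), touch(shm, len));

    shmdt(shm);
    shmctl(shmid, IPC_RMID, NULL);
    free(priv);
    return 0;
}
```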

Comments

  • Robert Kubrick
    Robert Kubrick over 3 years

    I hear frequently that accessing a shared memory segment between processes has no performance penalty compared to accessing process memory between threads. In other words, a multi-threaded application will not be faster than a set of processes using shared memory (excluding locking or other synchronization issues).

    But I have my doubts:

    1) shmat() maps the local process virtual memory to the shared segment. This translation has to be performed for each shared memory address and can represent a significant cost. In a multi-threaded application there is no extra translation required: all VM addresses are converted to physical addresses, just like in a regular process that does not access shared memory.

    2) The shared memory segment must be maintained somehow by the kernel. For example, when all processes attached to the shm are taken down, the shm segment is still up and can be eventually re-accessed by newly started processes. There could be some overhead related to kernel operations on the shm segment.

    Is a multi-process shared memory system as fast as a multi-threaded application?

    • wildplasser
      wildplasser over 11 years
For the kernel, attaching a shared memory segment only involves setting up an (extra) set of page tables for the underlying memory (mapping it into the process's address space). There is no additional cost. 2) There is no additional overhead; the checking is done at attach time.
  • Robert Kubrick
    Robert Kubrick over 11 years
I appreciate the answer, but it's a bit generic. The two points I mentioned in the question still stand...
  • czz
    czz almost 9 years
The TLB is invalidated when switching to another process, so in a multi-process architecture you will be facing more TLB misses. Basically it's thread context switch vs process context switch: stackoverflow.com/questions/5440128/…