Will malloc implementations return freed memory back to the system?


Solution 1

The following analysis applies only to glibc (whose allocator is based on the ptmalloc2 algorithm). Several options can help return freed memory to the system:

  1. mallopt() (declared in malloc.h) provides the M_TRIM_THRESHOLD parameter to control trimming: when the amount of contiguous free memory at the top of the data segment exceeds this threshold (in bytes), glibc releases it back to the kernel via brk().

    The default value of M_TRIM_THRESHOLD on Linux is 128 KB; setting a smaller value may save space.

    The same behavior can be achieved by setting the trim threshold through the environment variable MALLOC_TRIM_THRESHOLD_, with no source changes at all.

    However, preliminary test programs using M_TRIM_THRESHOLD have shown that even though the memory allocated by malloc does return to the system, the remainder of the chunk of memory (the arena) originally requested via brk() tends to be retained.

  2. It is possible to trim the memory arena and return any unused memory to the system by calling malloc_trim(pad) (declared in malloc.h). This function resizes the data segment, leaving at least pad bytes at the end of it, and fails if less than one page's worth of bytes can be freed. The segment size is always a multiple of one page (4,096 bytes on i386).

    This modified behavior of free() (calling malloc_trim) can be implemented using the malloc hook functionality, without any source changes to the core glibc library.

  3. Call the madvise() system call inside glibc's implementation of free().

Solution 2

Most implementations don't bother identifying those (relatively rare) cases where entire "blocks" (of whatever size suits the OS) have been freed and could be returned, but there are of course exceptions. For example, quoting the Wikipedia page on OpenBSD's malloc:

On a call to free, memory is released and unmapped from the process address space using munmap. This system is designed to improve security by taking advantage of the address space layout randomization and gap page features implemented as part of OpenBSD's mmap system call, and to detect use-after-free bugs—as a large memory allocation is completely unmapped after it is freed, further use causes a segmentation fault and termination of the program.

Most systems are not as security-focused as OpenBSD, though.

Knowing this, when I'm coding a long-running system that has a known-to-be-transitory requirement for a large amount of memory, I always try to fork the process: the parent then just waits for results from the child (typically on a pipe), the child does the computation (including memory allocation), returns the results (on said pipe), then terminates. This way, my long-running process won't be uselessly hogging memory during the long stretches between occasional spikes in its demand for memory. Other strategies include switching to a custom memory allocator for such special requirements (C++ makes this reasonably easy; languages with virtual machines underneath, such as Java and Python, typically don't).

Solution 3

I had a similar problem in my app. After some investigation, I noticed that for some reason glibc does not return memory to the system when the allocated objects are small (in my case, less than 120 bytes).
Look at this code:

#include <list>
#include <malloc.h>

template<size_t s> struct x { char buf[s]; };

int main(int argc,char** argv){
    typedef x<100> X;

    std::list<X> lx;
    for(size_t i = 0; i < 500000;++i){
        lx.push_back(X());
    }

    lx.clear();
    malloc_stats();

    return 0;
}

Program output:

Arena 0:
system bytes     =   64069632
in use bytes     =          0
Total (incl. mmap):
system bytes     =   64069632
in use bytes     =          0
max mmap regions =          0
max mmap bytes   =          0

About 64 MB is not returned to the system. When I changed the typedef to typedef x<110> X;, the program output looks like this:

Arena 0:
system bytes     =     135168
in use bytes     =          0
Total (incl. mmap):
system bytes     =     135168
in use bytes     =          0
max mmap regions =          0
max mmap bytes   =          0

Almost all the memory was freed. I also noticed that calling malloc_trim(0) in either case released the memory to the system.
Here is the output after adding malloc_trim to the code above:

Arena 0:
system bytes     =       4096
in use bytes     =          0
Total (incl. mmap):
system bytes     =       4096
in use bytes     =          0
max mmap regions =          0
max mmap bytes   =          0

Solution 4

I am dealing with the same problem as the OP. So far, it seems solvable with tcmalloc. I found two solutions:

  1. Compile your program with tcmalloc linked in, then launch it as:

    env TCMALLOC_RELEASE_RATE=100 ./my_pthread_soft
    

    the documentation mentions that

    Reasonable rates are in the range [0,10].

    but 10 doesn't seem to be enough for me (i.e., I see no change).

  2. Find a place in your code where it would be useful to release all the freed memory, and add this code:

    #include "google/malloc_extension_c.h" // C include
    #include "google/malloc_extension.h"   // C++ include
    
    /* ... */
    
    MallocExtension_ReleaseFreeMemory();
    

The second solution has been very effective in my case; the first would be great, but it isn't very successful: it is complicated to find the right rate, for example.

Solution 5

Of the ones you list, only Hoard will return memory to the system... but whether it can actually do so will depend a lot on your program's allocation behavior.

Author: osgx

Linux programmer, interested in compilers (theory and standard compliance), cryptography, OS and microelectronics design. Working deeply with compilers, standard compliance and OS libraries.

Updated on July 05, 2022

Comments

  • osgx
    osgx almost 2 years

    I have a long-running application with frequent memory allocation and deallocation. Will any malloc implementation return freed memory back to the system?

    What is, in this respect, the behavior of:

    • ptmalloc 1, 2 (glibc default) or 3
    • dlmalloc
    • tcmalloc (google threaded malloc)
    • Solaris 10-11 default malloc and mtmalloc
    • FreeBSD 8 default malloc (jemalloc)
    • Hoard malloc?

    Update

    If I have an application whose memory consumption can be very different between daytime and nighttime (for example), can I force any of these mallocs to return freed memory to the system?

    Without such a return, freed memory will be swapped out and back in many times, even though it contains only garbage.

  • osgx
    osgx over 14 years
    Can I use fork in a multithreaded app? So I really CAN'T use fork.
  • osgx
    osgx over 14 years
    Thanks! Can you name other allocators that can return memory back to the system?
  • Andrew McGregor
    Andrew McGregor over 14 years
    Actually, it seems like glibc will as well, but by default only allocations of 128 kB and larger are made this way. OpenBSD is mmap-backed for all allocations, so free will almost always return memory. However, there is a big performance tradeoff; mmap-backed memory is much, much slower in many cases, and will induce a lot of page faults to zero it, which may be even worse than the small amount of swap pressure it saves.
  • Alex Martelli
    Alex Martelli over 14 years
    yep, but the OpenBSD's motivation is security, not performance (as my answer mentions). Didn't know about glibc, will investigate, tx.
  • dkantowitz
    dkantowitz over 14 years
    As some people below have mentioned, there are special circumstances when malloc implementations will attempt to return memory to the OS. I wouldn't generally rely on this. Instead think about mmap()ing a new segment for your overnight processing and then unmap when you are done. Obviously you will need to do something about heap management, but allocation can be a very simple pool-style allocation (ie. free is a no-op) since you will release the entire memory segment at the end of your overnight processing job.
  • Zan Lynx
    Zan Lynx over 14 years
    @osgx: Yes, you can fork in a multithreaded application as long as you only use it to exec a new process. Well, actually "... the child process may only execute async-signal-safe operations until such time as one of the exec functions is called"
  • osgx
    osgx over 14 years
    Thanks, sounds rather interesting.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 13 years
    @Zan: Where did you get that idea? fork is allowed in multi-threaded processes, and you can do whatever you like as long as you don't corrupt the state of your own synchronization objects by using it. pthread_atfork gives you the tools to avoid doing so.
  • Zan Lynx
    Zan Lynx over 13 years
    @R. POSIX. Where did you get your idea?
  • UncleZeiv
    UncleZeiv over 12 years
    Also checkout M_MMAP_THRESHOLD, which is the threshold above which malloc() uses mmap() to obtain memory. Apparently if you free blocks exceeding this size, you return the memory to the operating system.
  • osgx
    osgx over 12 years
    For tcmalloc 1.7 this page says that it will not return any memory back to the system
  • osgx
    osgx over 12 years
    As I think now, ptmalloc and most ptmalloc-based and dlmalloc will return memory to the system both via munmap and sbrk(-xxxx).
  • maxschlepzig
    maxschlepzig almost 9 years
    malloc_trim() is only available with Linux/glibc.
  • osgx
    osgx over 4 years
    Since 2008 glibc's ptmalloc malloc_trim does iterate over all memory and release fully free aligned 4k pages back to OS with MADV_DONTNEED: stackoverflow.com/a/47061458/196561. This is partly documented in man page man7.org/linux/man-pages/man3/malloc_trim.3.html (since glibc 2.8 or 2.9). Try tcmalloc or jemalloc, they usually return freed memory back to OS better than glibc's ptmalloc
  • ZachB
    ZachB over 3 years
    This is possibly due to this bug with fast bins: sourceware.org/bugzilla/show_bug.cgi?id=14827