How and why can a memory allocation fail?


Solution 1

Although you've gotten a number of answers about why/how memory could fail, most of them are sort of ignoring reality.

On real systems, most of those arguments don't describe how things actually work. They're right in the sense that these are reasons an attempted memory allocation could fail, but they're mostly wrong as descriptions of how things typically behave in practice.

Just for example, on Linux, if you try to allocate more memory than the system has available, your allocation will not fail (i.e., you won't get a null pointer or a std::bad_alloc exception). Instead, the system will "overcommit", so you get what appears to be a valid pointer -- but when/if you attempt to use all that memory, you'll get an exception, and/or the OOM killer will run, trying to free memory by killing processes that use a lot of memory. Unfortunately, that may just as easily kill the program making the request as other programs (in fact, many of the examples given that attempt to cause allocation failure by just repeatedly allocating big chunks of memory should probably be among the first to be killed).
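
To see this in action, here is a minimal sketch, assuming a 64-bit Linux system with the default heuristic overcommit policy (vm.overcommit_memory = 0); the 64 GiB figure is just an arbitrary "bigger than RAM plus swap" value, so treat the outcome as an experiment rather than a guarantee:

#include <cstddef>
#include <cstdio>
#include <cstdlib>

int main()
{
    // 64 GiB -- assumed to be (much) more than this machine's RAM + swap.
    const std::size_t huge = 64ULL * 1024 * 1024 * 1024;

    // Under heuristic overcommit, this often returns a non-null pointer
    // even though the memory cannot actually be backed.
    void *p = std::malloc(huge);
    std::printf("malloc(64 GiB) returned %s\n",
                p ? "a non-null pointer" : "NULL");

    // Writing to all of it (e.g. with memset) is what would actually commit
    // the pages and possibly wake the OOM killer -- deliberately not done here.
    std::free(p);
    return 0;
}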

Windows works a little closer to how the C and C++ standards envision things (but only a little). Windows is typically configured to expand the swap file if necessary to meet a memory allocation request. This means that as you allocate more memory, the system will go semi-crazy with swapping memory around, creating bigger and bigger swap files to meet your request.

That will eventually fail, but on a system with lots of drive space, it might run for hours (most of them spent madly shuffling data around on the disk) before it does. At least on a typical client machine where the user is actually...well, using the computer, they'll notice that everything has slowed to a grinding halt, and do something to stop it well before the allocation fails.

So, to get a memory allocation that truly fails, you're typically looking for something other than a typical desktop machine. A few examples include a server that runs unattended for weeks at a time, and is so lightly loaded that nobody notices that it's thrashing the disk for, say, 12 hours straight, or a machine running MS-DOS or some RTOS that doesn't supply virtual memory.

Bottom line: you're basically right, and they're basically wrong. While it's certainly true that if you allocate more memory than the machine supports, that something's got to give, it's generally not true that the failure will necessarily happen in the way prescribed by the C++ standard -- and, in fact, for typical desktop machines that's more the exception (pardon the pun) than the rule.
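
For reference, this is what the standard-prescribed failure looks like when it does occur -- a minimal sketch that uses a deliberately absurd request size so the failure happens up front rather than being deferred by overcommit; it shows both the throwing form of new and the new(std::nothrow) form:

#include <cstddef>
#include <iostream>
#include <limits>
#include <new>      // std::bad_alloc, std::nothrow

int main()
{
    // A request so large it cannot possibly be satisfied, so the failure
    // is reported immediately instead of being deferred by overcommit.
    const std::size_t huge = std::numeric_limits<std::size_t>::max() / 2;

    try
    {
        // Plain new reports failure by throwing std::bad_alloc
        // (or a type derived from it).
        char *a = new char[huge];
        delete[] a;
    }
    catch (const std::bad_alloc &e)
    {
        std::cout << "new threw: " << e.what() << '\n';
    }

    // new(std::nothrow) reports failure by returning a null pointer instead.
    char *b = new (std::nothrow) char[huge];
    if (!b)
        std::cout << "nothrow new returned nullptr\n";
    delete[] b;

    return 0;
}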

Solution 2

Apart from the obvious "out of memory", memory fragmentation can also cause this. Imagine a program that does the following:

  • until main memory is almost full:
    • allocate 1020 bytes
    • allocate 4 bytes
  • free all the 1020 byte blocks

If the memory manager puts all these sequentially in memory in the order they are allocated, we now have plenty of free memory, but any allocation larger than 1020 bytes will not find a contiguous free block large enough, and will fail.
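
Here is a minimal sketch of that pattern, scaled way down so it is harmless to run; the iteration count and the 64 KB probe size are arbitrary, and whether it actually produces unusable holes depends entirely on the allocator (modern allocators use size-segregated bins and coalescing precisely to blunt this effect), so treat it as an illustration of the idea rather than a reliable reproducer:

#include <cstdlib>
#include <vector>

int main()
{
    std::vector<void*> big, small;

    // Interleave "large" and tiny allocations, as in the list above
    // (scaled down: a fixed number of iterations instead of "until memory is full").
    for (int i = 0; i < 10000; ++i)
    {
        big.push_back(std::malloc(1020));
        small.push_back(std::malloc(4));
    }

    // Free only the 1020-byte blocks; the 4-byte blocks stay put,
    // potentially carving the freed space into holes smaller than 1 KB.
    for (void *p : big)
        std::free(p);

    // On a simple allocator without coalescing or size classes, a request
    // larger than 1020 bytes could now fail even though plenty of memory is "free".
    void *larger = std::malloc(64 * 1024);
    std::free(larger);

    for (void *p : small)
        std::free(p);
    return 0;
}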

Solution 3

Usually on modern machines it will fail due to scarcity of virtual address space; if you have a 32-bit process that tries to allocate more than 2-3 GB of memory¹, then even if there is enough physical RAM (or paging file) to satisfy the allocation, there simply won't be room in the virtual address space to map the newly allocated memory.
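
A sketch of what that looks like in practice, assuming the program is built as a 32-bit process (e.g. with -m32) and using an arbitrary 256 MB chunk size: because the loop keeps every block it gets, malloc typically starts returning NULL after roughly 2-3 GB, regardless of how much RAM and swap the machine has (in a 64-bit process the same loop would instead run until some other, much larger limit is hit):

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main()
{
    const std::size_t chunk = 256u * 1024 * 1024; // 256 MB per request
    std::vector<void*> blocks;

    // Keep allocating (and keeping) chunks until malloc gives up.
    // In a 32-bit process this is usually the address space running out,
    // not physical memory.
    while (void *p = std::malloc(chunk))
        blocks.push_back(p);

    std::printf("allocation failed after about %zu MB of successful requests\n",
                blocks.size() * (chunk / (1024 * 1024)));

    for (void *p : blocks)
        std::free(p);
    return 0;
}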

Another (similar) situation happens when the virtual address space is heavily fragmented, and the allocation fails because there isn't a large enough contiguous range of addresses for it.

Also, truly running out of memory can happen, and in fact I got into such a situation last week; but several operating systems (notably Linux) don't return NULL in this case: Linux will happily give you a pointer to an area of memory that isn't committed yet, and only actually allocate it when the program tries to write to it. If at that moment there's not enough memory, the kernel will try to kill some memory-hogging processes to free memory (an exception to this behavior seems to be when you try to allocate more than the whole capacity of the RAM and of the swap partition - in such a case you get a NULL upfront).

Another cause of getting NULL from malloc is limits enforced by the OS on the process; for example, trying to run this code

#include <cstdlib>
#include <iostream>
#include <limits>

// Binary-search for the largest size that std::malloc will grant:
// [lower, upper] is the current search interval; lower is known to be
// allocatable, upper is known to fail.
void mallocbsearch(std::size_t lower, std::size_t upper)
{
    std::cout << "[" << lower << ", " << upper << "]\n";
    if (upper - lower <= 1)
    {
        std::cout << "Found! " << lower << "\n";
        return;
    }
    std::size_t mid = lower + (upper - lower) / 2;
    void *ptr = std::malloc(mid);
    if (ptr)
    {
        // mid bytes could be allocated: the limit lies in the upper half.
        std::free(ptr);
        mallocbsearch(mid, upper);
    }
    else
    {
        // mid bytes could not be allocated: the limit lies in the lower half.
        mallocbsearch(lower, mid);
    }
}

int main()
{
    mallocbsearch(0, std::numeric_limits<std::size_t>::max());
    return 0;
}

on Ideone you find that the maximum allocation size is about 530 MB, which is probably a limit enforced by setrlimit (similar mechanisms exist on Windows).
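
As an illustration of that kind of limit, here is a sketch that caps the process's own address space from within, using the POSIX setrlimit call; RLIMIT_AS and the 512 MB figure are purely illustrative choices (not Ideone's actual configuration), and this is POSIX-specific code:

#include <cstdio>
#include <cstdlib>
#include <sys/resource.h>   // POSIX: getrlimit / setrlimit

int main()
{
    // Cap this process's virtual address space at ~512 MB
    // (an arbitrary illustrative value, not Ideone's real setting).
    struct rlimit lim;
    lim.rlim_cur = 512u * 1024 * 1024;
    lim.rlim_max = 512u * 1024 * 1024;
    if (setrlimit(RLIMIT_AS, &lim) != 0)
    {
        std::perror("setrlimit");
        return 1;
    }

    // Now a request well under the machine's RAM still fails,
    // because it would push the process past its RLIMIT_AS.
    void *p = std::malloc(600u * 1024 * 1024);
    std::printf("malloc(600 MB) %s\n", p ? "succeeded" : "returned NULL");
    std::free(p);
    return 0;
}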


  1. It varies between OSes and can often be configured; the total virtual address space of a 32-bit process is 4 GB, but on all current mainstream OSes a big chunk of it (the upper 2 GB, for 32-bit Windows with default settings) is reserved for kernel data.

Solution 4

The amount of memory available to a given process is finite. If the process exhausts its memory and then tries to allocate more, the allocation will fail.

There are other reasons why an allocation could fail. For example, the heap could get fragmented and not have a single free block large enough to satisfy the allocation request.

Author: Florian Richoux

Senior researcher in AI at AIST, Tokyo. I got my Ph.D. in Theoretical Computer Science. Programming mostly in C++; GNU/Linux user since 2003. My GitHub: https://github.com/richoux

Updated on July 22, 2022

Comments

  • Florian Richoux (almost 2 years ago)

    This was a question I asked myself when I was a student, but failing to get a satisfying answer, I got it little by little out of my mind... till today.

    I know I can deal with a memory allocation error either by checking if the returned pointer is NULL or by handling the bad_alloc exception.

    OK, but I wonder: how and why can a call to new fail? To my knowledge, a memory allocation can fail if there is not enough space in the free store. But does this situation really occur nowadays, with several GB of RAM (at least on a regular computer; I am not talking about embedded systems)? Are there other situations in which a memory allocation failure may occur?

  • Manu343726 (over 10 years ago)
    Here is the documentation about Windows processes' virtual address space
  • Admin (over 10 years ago)
    Also, no sane OS will give a single process all of the - say - 16 GB of RAM, even if it is physically available and addressable (on 64-bit).
  • Matteo Italia (over 10 years ago)
    @H2CO3: I don't see why not... the OS will keep for itself enough memory to work, but should give all the rest to user processes that need it - this is especially true for applications that need lots of memory running on a PC that is dedicated just for them (e.g. DB servers). Granted, on a Linux box a process that takes almost all the memory will be the first candidate for killing when an OOM condition happens, but if there's enough memory it should run just fine.
  • Admin (over 10 years ago)
    @MatteoItalia well, democracy. No process is being favored over another one (well, not counting priority settings and things like that, anyway). Fair enough, servers with high memory requirements will get a lot of memory on demand, but AFAIK they don't eat up all the RAM either (or if so, there are serious problems).
  • SigTerm (over 10 years ago)
    @H2CO3: The OS can do that (and a process might be able to allocate a larger amount of RAM than is physically available), but in reality a portion of that memory will be offloaded into the swap file. Swapping in computer games happened exactly for that reason - it was fairly common on 32-bit systems before people could install 2 GB of RAM (I mean, when all machines had 128..256..512 MB of RAM total).