When should errno be assigned to ENOMEM?

Solution 1

First, fix your kernel not to overcommit:

echo "2" > /proc/sys/vm/overcommit_memory

Now malloc should behave properly.
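
To apply the same setting without a root shell redirect, `sysctl -w vm.overcommit_memory=2` works, and adding `vm.overcommit_memory = 2` to /etc/sysctl.conf makes it persistent across reboots (assuming a standard sysctl setup):

sysctl -w vm.overcommit_memory=2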

Solution 2

It happens when you try to allocate too much memory at once:

#include <stdlib.h>
#include <stdio.h>
#include <errno.h>

int main(int argc, char *argv[])
{
  void *p;

  /* Ask for 1 TiB in a single call; this should fail outright. */
  p = malloc(1024L * 1024 * 1024 * 1024);
  if (p == NULL)
  {
    printf("%d\n", errno); /* 12 (ENOMEM) on Linux */
    perror("malloc");
  }
  return 0;
}
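
On a typical Linux system this prints 12 followed by malloc: Cannot allocate memory, because even the default heuristic overcommit refuses a single request that far beyond what RAM plus swap could ever back.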

In your case the OOM killer is getting to the process first.

Solution 3

As "R" hinted, the problem is the default behaviour of Linux memory management, which is "overcommiting". This means that the kernel claims to allocate you memory successfuly, but doesn't actually allocate the memory until later when you try to access it. If the kernel finds out that it's allocated too much memory, it kills a process with "the OOM (Out Of Memory) killer" to free up some memory. The way it picks the process to kill is complicated, but if you have just allocated most of the memory in the system, it's probably going to be your process that gets the bullet.

If you think this sounds crazy, some people would agree with you.

To get it to behave as you expect, as R said:

echo "2" > /proc/sys/vm/overcommit_memory

Solution 4

I think errno will be set to ENOMEM, which is a macro defined in errno.h:

#define ENOMEM          12      /* Out of Memory */

After you call malloc in this statement:

myblock = (void *) malloc(MEGABYTE);

and the function returns NULL because the system is out of memory.
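
A minimal sketch of that check, reusing the question's MEGABYTE constant (on POSIX systems malloc sets errno to ENOMEM on failure, and strerror turns the errno value into a readable message):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MEGABYTE (1024 * 1024)

int main(void)
{
  void *myblock = malloc(MEGABYTE);
  if (myblock == NULL)
  {
    /* On failure, malloc sets errno; ENOMEM means out of memory. */
    fprintf(stderr, "malloc: %s (errno = %d)\n", strerror(errno), errno);
    return 1;
  }
  free(myblock);
  return 0;
}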

I found this SO question very interesting.

Hope it helps!


Comments

  • venus.w
    venus.w almost 2 years

    The following program is killed by the kernel when memory runs out. I would like to know when the global variable errno should be set to ENOMEM.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MEGABYTE (1024*1024)
    #define TRUE 1

    int main(int argc, char *argv[]){

        void *myblock = NULL;
        int count = 0;

        while(TRUE)
        {
            myblock = (void *) malloc(MEGABYTE);
            if (!myblock) break;
            memset(myblock, 1, MEGABYTE);
            printf("Currently allocating %d MB\n", ++count);
        }
        exit(0);
    }
    
  • venus.w
    venus.w almost 12 years
    Are there any essential differences between the two examples?
  • Ignacio Vazquez-Abrams
    Ignacio Vazquez-Abrams almost 12 years
    Yours creeps up on the limit, whereas mine violates it completely.
  • Jens Gustedt
    Jens Gustedt almost 12 years
    +1, this answer is the correct one, although it doesn't explain why :) To give you a bit more information about what happens on modern Linux systems if you don't do what R.. suggests: an allocation then just reserves a range of virtual addresses for the process and doesn't allocate the pages themselves. The pages are only really claimed from the kernel when you access them for the first time.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE almost 12 years
    Even with my fix, the kernel doesn't allocate the pages themselves right away. It just accounts for how many will be needed and makes sure never to commit more than can (later) be satisfied.
  • Matt Fletcher
    Matt Fletcher over 9 years
    This managed to just completely break my CentOS box and required a restart :/
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 9 years
    @MattFletcher: You probably had a lot of bloated desktop software running with more memory already allocated than could be committed. :/
  • Matt Fletcher
    Matt Fletcher over 9 years
    Nope, pretty clean Rackspace server. Just happened to only have 512 MB of RAM!
  • Sajuuk
    Sajuuk about 5 years
    This is the most disturbing thing I've come to realize about the Linux kernel. Why is memory allocation designed like this? Why not just check availability before allocating?
  • Stanislav Ivanov
    Stanislav Ivanov almost 3 years
    Also look at /proc/sys/vm/overcommit_ratio to understand how much memory can be overcommitted.
  • Hi-Angel
    Hi-Angel over 2 years
    @Sajuuk Because this is necessary. I'd rather ask why none of the answers mention the perils of setting overcommit_memory to 2. Unless we are talking about servers, many simple desktop apps over-allocate virtual memory. E.g. some Chrome processes have a VSS of 20 GB; Evolution has 99.5 GB on my system. But the record holder is AddressSanitizer: even a simple "hello world" built with it is going to take 20 TB of virtual memory. Have you got 20 TB of free RAM?