Can the C++ `new` operator ever throw an exception in real life?


Solution 1

The new and new[] operators should throw std::bad_alloc on failure, but this is not always the case, because the behavior can be overridden.

One can install a handler with std::set_new_handler, and then something entirely different can happen instead of std::bad_alloc being thrown. The standard requires that the handler either make more memory available, terminate the program, or throw std::bad_alloc (or a type derived from it), but of course a handler may not follow that requirement.

Disclaimer: I am not suggesting that you do this.
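
For reference, here is roughly what installing such a handler looks like. This is only a minimal sketch (the handler name is mine); this particular handler satisfies the standard's requirement by terminating rather than throwing:

```cpp
#include <cstdlib>
#include <iostream>
#include <new>

// Handler invoked by operator new when an allocation attempt fails.
// The standard requires it to make more memory available, throw
// std::bad_alloc (or a derived type), or terminate the program.
void out_of_memory_handler() {
    std::cerr << "allocation failed\n";
    std::abort();  // this handler chooses to terminate instead of throwing
}

int main() {
    std::set_new_handler(out_of_memory_handler);
    // From here on, a failed new calls out_of_memory_handler()
    // instead of immediately throwing std::bad_alloc.
}
```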

Solution 2

Yes, new can and will throw if an allocation fails. This can happen if you run out of memory or if you ask for a single block that is too large.

You can catch the std::bad_alloc exception and handle it appropriately. Sometimes this makes sense, other times (read: most of the time) it doesn't. If, for example, you were trying to allocate a huge buffer but could work with less space, you could try allocating successively smaller blocks.
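
A minimal sketch of that fallback strategy could look like the following (allocate_buffer, preferred, and minimum are names invented for the example):

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Try to get a buffer of the preferred size, falling back to successively
// smaller sizes when new (inside std::vector) throws std::bad_alloc.
std::vector<char> allocate_buffer(std::size_t preferred, std::size_t minimum) {
    for (std::size_t size = preferred; size >= minimum; size /= 2) {
        try {
            return std::vector<char>(size);  // may throw std::bad_alloc
        } catch (const std::bad_alloc&) {
            // Not enough memory for this size; try half as much.
        }
    }
    throw std::bad_alloc();  // even the minimum size could not be satisfied
}
```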

Solution 3

If you are running on a typical embedded processor under Linux without virtual memory, it is quite likely that the operating system will terminate your process before new fails, if you allocate too much memory.

If you are running your program on a machine with less physical memory than the maximum virtual address space per process (2 GB on standard 32-bit Windows), you will find that once you have allocated roughly the amount of available physical memory, further allocations will still succeed but will cause paging to disk. This will bog your program down, and you might never actually reach the point of exhausting virtual memory, so you might not get an exception thrown.

If you have more physical memory than virtual address space and you simply keep allocating, you will get an exception once virtual memory is exhausted to the point where the block size you are requesting cannot be satisfied.

If you have a long-running program that allocates and frees blocks of many different sizes, including small ones, with a wide variety of lifetimes, the virtual address space may become fragmented to the point where new cannot find a large enough contiguous block to satisfy a request. Then new will throw an exception. A memory leak that occasionally leaks a small block at a random location will eventually fragment memory in the same way, until some ordinary-sized allocation fails and an exception is thrown.

If you have a program error that accidentally passes a huge array size to new[], new will fail and throw an exception. This can happen for example if the array size is actually some sort of random byte pattern, perhaps derived from uninitialized memory or a corrupted communication stream.
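
As a sketch of that failure mode (the corrupted length value here is made up purely for illustration):

```cpp
#include <cstdint>
#include <iostream>
#include <new>

// A length field taken from a corrupted stream or uninitialized memory can be
// an absurdly large value; new[] then throws instead of returning a buffer.
int main() {
    std::uint64_t corrupted_length = 0xDEADBEEFCAFEBABEull;  // garbage "size"
    try {
        char* payload = new char[corrupted_length];  // request ~16 exabytes
        delete[] payload;
    } catch (const std::bad_alloc& e) {  // also catches std::bad_array_new_length
        std::cerr << "new[] failed: " << e.what() << '\n';
    }
}
```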

All of the above applies to the default global new. However, you can replace the global new, and you can provide a class-specific new. These too can throw, and what that means depends on how you programmed them. It is usual for new to include a loop that attempts all possible avenues for obtaining the requested memory and throws only when all of them are exhausted. What you do then is up to you.
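
As a sketch of the class-specific case, here is a toy operator new backed by a tiny fixed arena (the arena and its bump-pointer scheme are purely illustrative, not a recommended allocator design):

```cpp
#include <cstddef>
#include <new>

// A class-specific operator new can also throw std::bad_alloc once its own
// resource is exhausted, just like the global one.
struct Message {
    char payload[64];

    static void* operator new(std::size_t size) {
        static char arena[4096];
        static std::size_t used = 0;
        if (used + size > sizeof(arena))
            throw std::bad_alloc();  // arena exhausted: throw, like global new
        void* p = arena + used;
        used += size;
        return p;
    }
    static void operator delete(void*) noexcept {
        // This trivial arena never reclaims individual blocks.
    }
};
```

With this definition, `new Message` draws from the arena, and roughly the 65th allocation throws std::bad_alloc even though the system may have plenty of memory left.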

You can catch an exception from new and use the opportunity it provides to document the program state around the time of the exception. You can "dump core". If you have a circular instrumentation buffer allocated at program startup, you can dump it to disk before you terminate the program. The program termination can be graceful, which is an advantage over simply not handling the exception.

I have not personally seen an example where additional memory could be obtained after the exception. One possibility, however, is the following: suppose you have a memory allocator that is highly efficient but not good at reclaiming free space. For example, it might be prone to free-space fragmentation, in which free blocks are adjacent but not coalesced. You could install a new_handler that runs a compaction procedure over the free space, letting new retry the allocation before it finally gives up and throws.
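
A sketch of that idea with std::set_new_handler (compact_free_space is a hypothetical hook into such an allocator, stubbed out here):

```cpp
#include <new>

// Hypothetical hook into a custom allocator that coalesces adjacent free
// blocks; stubbed out for the sketch. It should report whether anything
// was actually reclaimed.
bool compact_free_space() { return false; }

// Handler called by operator new on failure: try compaction; if nothing was
// gained, remove the handler so the next failed attempt throws std::bad_alloc.
void compaction_handler() {
    if (!compact_free_space())
        std::set_new_handler(nullptr);  // give up: let new throw bad_alloc
    // If compaction reclaimed space, just return and operator new retries.
}

int main() {
    std::set_new_handler(compaction_handler);
    // ... rest of the program ...
}
```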

Serious programs should treat memory as a potentially scarce resource, control its allocation as much as possible, monitor its availability, and react appropriately if something seems to have gone dramatically wrong. For example, you could make a case that in any real program there is quite a small upper bound on the size parameter passed to the memory allocator, and anything larger than this should trigger some kind of error handling, whether or not the request can be satisfied. You could argue that the rate of memory growth of a long-running program should be monitored, and if it can reasonably be predicted that the program will exhaust available memory in the near future, an orderly restart of the process should be initiated.

Solution 4

On Unix systems, it's customary to run long-running processes with a memory limit (set with ulimit) so that they don't eat up all of the system's memory. If your program hits that limit, allocation fails and you will get std::bad_alloc.
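
For example, running a program like the following under `ulimit -v 262144` (a 256 MB virtual-memory limit; the 1 MiB block size is arbitrary) should reliably end in std::bad_alloc long before the machine itself is short of memory:

```cpp
#include <iostream>
#include <new>
#include <vector>

int main() {
    std::vector<char*> blocks;
    try {
        for (;;)
            blocks.push_back(new char[1024 * 1024]);  // 1 MiB per iteration
    } catch (const std::bad_alloc&) {
        std::cerr << "caught std::bad_alloc after allocating "
                  << blocks.size() << " MiB\n";
    }
    for (char* p : blocks)
        delete[] p;
}
```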


Update for the OP's edit: the most typical case of a program recovering from an out-of-memory condition is a garbage-collected system, which performs a GC and continues. That sort of on-demand GC is really a last-ditch effort, though; usually, good programs try to GC periodically to reduce stress on the collector.

It's less usual for non-GC programs to recover from out-of-memory conditions, but for Internet-facing servers, one way to recover is simply to reject, with a "temporary" error, the request that is causing memory to run out. (A "first in, first served" strategy.)

Solution 5

osgx said:

Do any real-world, new-heavy applications check for allocation failure and recover when there is no memory?

I have answered this previously in my answer to this question, which is quoted below:

It is very difficult to handle this sort of situation. You may want to return a meaningful error to the user of your application, but if it's a problem caused by lack of memory, you may not even be able to afford the memory to allocate the error message. It's a bit of a catch-22 situation really.

There is a defensive programming technique (sometimes called a memory parachute or rainy day fund) where you allocate a chunk of memory when your application starts. When you then handle the bad_alloc exception, you free this memory up, and use the available memory to close down the application gracefully, including displaying a meaningful error to the user. This is much better than crashing :)
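
A minimal sketch of such a parachute (the 1 MiB size and the run_application stub are placeholders, not part of any real API):

```cpp
#include <iostream>
#include <memory>
#include <new>

// The "rainy day fund": reserved at startup, released only when memory runs out.
static std::unique_ptr<char[]> g_parachute(new char[1024 * 1024]);

void run_application() {
    // Placeholder for the real work; imagine this eventually exhausts memory.
    throw std::bad_alloc();
}

int main() {
    try {
        run_application();
    } catch (const std::bad_alloc&) {
        g_parachute.reset();  // hand the reserved block back to the allocator
        // With that memory available again, the shutdown path has room to
        // format messages, flush logs, save user data, and so on.
        std::cerr << "Out of memory: shutting down gracefully.\n";
        return 1;
    }
    return 0;
}
```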


Comments

  • osgx
    osgx almost 2 years

    Can the new operator throw an exception in real life?

    And if so, do I have any options for handling such an exception apart from killing my application?

    Update:

    Do any real-world, new-heavy applications check for failure and recover when there is no memory?


    See also:

  • osgx
    osgx over 14 years
    Is this situation REAL? Can I meet it in real life? Even fclose() can fail, but NO ONE checks its return code. (It will fail on a disconnected NFS mount and the information will not be saved.)
  • ojrac
    ojrac over 14 years
    When you write C, do you check to see if malloc returns NULL? If not, I doubt I can convince you to watch for exceptions from new.
  • James McNellis
    James McNellis over 14 years
    You should catch std::bad_alloc anywhere that you can reasonably recover from it. In most cases, there's not a whole lot you can do, and so your best bet might be to catch it in main and at least give the user a nice friendly error message or log the failure (many experts, including Herb Sutter, agree with this: gotw.ca/publications/mill16.htm).
  • Nathan Osman
    Nathan Osman over 14 years
    This is what I call bad practice.
  • Billy ONeal
    Billy ONeal over 14 years
    It can also happen if your dataset just happens to be large. It'd not be unreasonable for a program to not be able to handle a request if you try to pipe in a 20GB file to stdin, for example, in some cases.
  • osgx
    osgx over 14 years
    When it fails, what can I do?
  • Nathan Osman
    Nathan Osman over 14 years
    I see a lot of code that mistakenly assumes new (without arguments) returns NULL on failure.
  • osgx
    osgx over 14 years
    So must I check for NULL every time I use new?
  • Billy ONeal
    Billy ONeal over 14 years
    @osgx: Only if you use the nothrow option. Did you even read squelart's answer?
  • osgx
    osgx over 14 years
    @ojrac, in C I have a wrapper function or even a macro Malloc, which tests EVERY result of malloc for NULL. And if it is NULL, I have a single point of failure where I do {perror("my programme");exit(-42);}
  • osgx
    osgx over 14 years
    Yes. I need either to check for NULL when using nothrow, or to be able to catch bad_alloc in every place where I use new? There are thousands of such places in a big program and it can be very hard.
  • C. K. Young
    C. K. Young over 14 years
    @osgx: In GNU programs, the convention is to call that (malloc successfully or die) function xmalloc.
  • LaSul
    LaSul over 14 years
    I've hit std::bad_alloc before. Yes it's real.
  • osgx
    osgx over 14 years
    Linux without a ulimit on virtual memory, as far as I know, will allow memory to be allocated with overcommit on malloc/new (mmap/sbrk internally). But when I try to actually use it, the process (sometimes a random process) will be killed by the out-of-memory killer without any chance of recovering or dumping/saving state.
  • osgx
    osgx over 14 years
    Piping 20 GB via stdin is not a very hard situation. I have done a lot of greps with such sizes :)
  • osgx
    osgx over 14 years
    @kibibu, Thanks! Was it a huge new() or rather small and typical one? How much memory was allocated before hitting bad_alloc?
  • Jeremy Friesner
    Jeremy Friesner over 14 years
    I think the specifics of the behavior depend entirely on what OS and environment you're executing in.
  • vladr
    vladr over 14 years
    then only keep one catch at the end of main() (and at the end of each thread method if you are multi-threaded) and display a big error message "out of memory" before exiting. :)
  • James McNellis
    James McNellis over 14 years
    There are few legitimate uses of nothrow new. Two that come to mind are when working with legacy code (that assumes new returns null on failure) or when exceptions are prohibited (e.g. in an embedded system).
  • LaSul
    LaSul over 14 years
    @osgx, it was one of several hundred thousand small and typical ones. Part of an acoustic pathtracer that traced first and gathered later. I ran out of memory on my machine (which has a paltry 512 Mb) - but I can't remember whether it exhausted virtual memory or just physical.
  • Potatoswatter
    Potatoswatter over 14 years
    @osgx: Unfortunately, there's no better way to deal with overcommitment. It's more or less defined as suppressing allocation errors, as a feature. Did you try installing a signal handler for that out-of-memory condition?
  • Potatoswatter
    Potatoswatter over 14 years
    I checked Google and it looks like /proc/sys/vm/overcommit_memory might help you turn off overcommitment, if that's what you want.
  • Potatoswatter
    Potatoswatter over 14 years
    std::set_new_handler in <new> is standard C++, §18.4.2.2-3. It's a perfectly reasonable thing to use if you have, for instance, some kind of garbage collection you can do, or you want to log the error. It's not a bad idea to exit the new_handler by throw bad_alloc.
  • Potatoswatter
    Potatoswatter over 14 years
    also - the standard requires that the user's new handler either make memory available, abort, or throw bad_alloc.
  • MSalters
    MSalters over 14 years
    @kibibu: the OS typically won't tell you whether it ran out of physical memory; mostly because you can't ask for it anyway.
  • Brian R. Bondy
    Brian R. Bondy over 14 years
    @Potatoswatter: Cool thanks for the info, updated the answer.
  • paercebal
    paercebal over 14 years
    @osgx: The "bug" was present on Visual C++ 6 (VS98), and on Visual C++ 2003 (but you could set a compiler option to have new behave like the standard wanted it to). It was less a bug than a non-compliant behaviour existing for backward compatibility purposes.
  • Zan Lynx
    Zan Lynx over 14 years
    Yes and you will then get alloc failure exceptions. I ran my old Linux laptop that way for a while and it behaves mostly like having the OOM Killer because not many applications handle it.
  • Mark B
    Mark B over 14 years
    Sometimes in debug builds it's useful to not even catch it in main - then (at least in gcc) you can at least get a core file that may or may not have useful information.
  • Martin York
    Martin York over 14 years
    Also GUI applications. If a user action causes memory exhaustion, then abandon the current action but not the whole application.
  • NTDLS
    NTDLS over 14 years
    OMG! You don't check the return value of fclose()?!
  • osgx
    osgx over 14 years
    How can I specify 50 terabytes? Can an application handle this situation? Which versions of Windows will crash?
  • Logan Capaldo
    Logan Capaldo over 14 years
    @osgx you can achieve the same thing with new with the std::set_new_handler function if you don't like the exception throwing behavior. void new_handler() { perror("my programme"); exit(-42); }; std::set_new_handler(new_handler);.
  • C. K. Young
    C. K. Young over 14 years
    tl;dr (Sorry, had to say it. :-P)
  • Erik Hermansen
    Erik Hermansen over 14 years
    You made me fire up the compiler! My mistake--50 terabytes wouldn't work above. The value is limited to 2^31, about 2 gigs. So try the experiment on a machine with less than 2 gigs of disk space left. I originally ran this on Windows XP. Don't know about other versions of O/S and MSVC runtimes, and it is a really annoying experiment to run.
  • Alex Jasmin
    Alex Jasmin over 14 years
    @Vlad I imagine it still behaves this way if you compile your code without exception support.
  • karlphillip
    karlphillip almost 14 years
    I've done some embedded systems development and I can safely say that if you DON'T check the success of new/malloc operations, some user somewhere is going to find a way to fill the memory of your device and crash your application. Not checking the return of functions is BAD BAD practice.
  • Chromozon
    Chromozon about 10 years
    +1 for explaining that, in real-life, new will only throw due to programmer error (operating systems will always give you memory unless you really mess up)
  • mabraham
    mabraham over 9 years
    so what? grep suits streaming. Good luck finding the largest repeated string in a 20GB stdin, though.
  • bit2shift
    bit2shift over 7 years
    From cppreference: "In case of failure, the standard library implementation calls the function pointer returned by std::get_new_handler and repeats allocation attempts until new handler does not return or becomes a null pointer, at which time it throws std::bad_alloc." Pretty much what I'm doing here.
  • aschepler
    aschepler almost 5 years
    @karlphilip Except there's an important difference between malloc and new: malloc can in theory return a null pointer on failure, resulting in bad news if you don't check. But global new never results in a null pointer. If you don't check (and don't do anything to change default handling), the program is guaranteed to terminate on failure.
  • Michel de Ruiter
    Michel de Ruiter over 2 years
    @ErikHermansen disk space? First you'll need to have less memory available.