My program was "Killed"

In C++, a float is a single-precision (32-bit) floating-point number: http://en.wikipedia.org/wiki/Single-precision_floating-point_format

which means that you are allocating (without overhead) 1,000,000 × 960 × 4 = 3,840,000,000 bytes of data,

or roughly 3.58 GiB.
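
Just to make the arithmetic explicit, here is a minimal sketch that recomputes the figure (the dimensions are the ones from the question; sizeof(float) is 4 on typical platforms):

    #include <cstddef>
    #include <cstdio>

    int main() {
        // Dimensions from the question: 1,000,000 rows of 960 floats each.
        const std::size_t rows = 1000000;
        const std::size_t cols = 960;
        const std::size_t bytes = rows * cols * sizeof(float); // payload only, no vector overhead
        std::printf("%zu bytes = %.2f GiB\n",
                    bytes, bytes / (1024.0 * 1024.0 * 1024.0));
        // Prints: 3840000000 bytes = 3.58 GiB
    }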

Let's safely assume that the vector's own overhead is negligible compared to the data, and continue with this number.

This is a huge amount of data to build up; Linux may assume that this is just a memory leak and protect itself by killing the application:

https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short

I don't think this is an overcommit problem, since you are actually utilizing nearly half the memory in a single application.
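
As a side note on what can and cannot be handled in code: the std::bad_alloc seen on the laptop is the failure mode a program can catch, whereas the OOM killer's SIGKILL terminates the process with no chance to react. A small sketch of the catchable path, assuming the data is held in a single std::vector<float> as described:

    #include <cstdio>
    #include <new>      // std::bad_alloc
    #include <vector>

    int main() {
        try {
            // Request room for the whole data set up front: if the allocation
            // itself fails (as it did on the laptop), this throws a catchable
            // std::bad_alloc instead of the process being killed later.
            std::vector<float> data;
            data.reserve(1000000ull * 960ull);
            std::puts("allocation succeeded");
        } catch (const std::bad_alloc&) {
            std::fprintf(stderr, "allocation failed up front\n");
            return 1;
        }
        // If the kernel overcommits and only runs out of physical memory once
        // the pages are actually touched, the process receives SIGKILL and no
        // C++ code gets a chance to run.
        return 0;
    }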

But perhaps, just for fun, consider this: are you building a 32-bit application? You are getting close to the 2^32 bytes (4 GiB) of memory that can be addressed by your program if it's a 32-bit build.

So in case you have another large vector allocated... bum bum bum
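
If in doubt about the build, the pointer size reveals it at run time; a trivial sketch:

    #include <cstdio>

    int main() {
        // A 32-bit build has 4-byte pointers and at most a 2^32 byte (4 GiB)
        // address space; a 64-bit build has 8-byte pointers.
        std::printf("sizeof(void*) = %zu -> %s build\n",
                    sizeof(void*),
                    sizeof(void*) == 8 ? "64-bit" : "32-bit");
    }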

Comments

  • gsamaras
    gsamaras almost 2 years

    Probably killed by the kernel, as suggested in this question. I would like to see why I was killed, something like the function where the assassination took place. :)

    Moreover, is there anything I can do to allow my program to execute normally?


    Chronicle

    My program executes properly. However, we encountered a big data set, 1,000,000 x 960 floats, and my laptop at home couldn't take it (it threw std::bad_alloc).

    Now I am in the lab, on a desktop with 9.8 GiB of RAM and a 3.00 GHz × 4 processor, which has more than twice the memory of the laptop at home.

    At home, the data set could not be loaded into the std::vector where the data is stored. Here in the lab, this succeeded and the program continued with building a data structure.

    That was the last time I heard from it:

    Start building...
    Killed
    

    The desktop in the lab runs on Debian 8. My program runs as expected for a subset of the data set, in particular 100,000 x 960 floats.


    EDIT

    strace output is finally available:

    ...
    brk..
    brk(0x352435000)                        = 0x352414000
    mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
    mmap(NULL, 134217728, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x7f09c1563000
    munmap(0x7f09c1563000, 44683264)        = 0
    munmap(0x7f09c8000000, 22425600)        = 0
    mprotect(0x7f09c4000000, 135168, PROT_READ|PROT_WRITE) = 0
    ...
    mprotect(0x7f09c6360000, 8003584, PROT_READ|PROT_WRITE) = 0
    +++ killed by SIGKILL +++
    

    So this tells us I am out of memory, I guess.

  • Basile Starynkevitch
    Basile Starynkevitch about 9 years
    No, that is the wrong approach. BTW, signal(7) forbids calling printf from inside a signal handler.
  • Jose Palma
    Jose Palma about 9 years
    I added printf because I can't post the code I use in production xD, use syslog.
  • Basile Starynkevitch
    Basile Starynkevitch about 9 years
    Neither printf nor syslog is an async-signal-safe function, so both are forbidden inside a signal handler (a minimal async-signal-safe sketch follows after the comments).
  • gsamaras
    gsamaras about 9 years
    I am afraid @BasileStarynkevitch is correct, from what we did in uni.
  • Jose Palma
    Jose Palma about 9 years
    Those functions are not forbidden, just not recommended, as they can raise a new signal and your code will hang or behave in an undefined way. Sometimes we need to use the right words :-). In any case, you are probably running out of memory and the kernel is killing your program with a SIGKILL/SIGTERM. You can check dmesg or run the program with strace.
  • gsamaras
    gsamaras about 9 years
    @Raistmaj I will run strace now. :)
  • gsamaras
    gsamaras about 9 years
    bum³ frightens me. The number of bytes is correct. My laptop at home runs on 32 bits. Do you think that there is some connection? +1 for the nice answer.
  • Henrik
    Henrik about 9 years
    I'm not sure, but it's my first suspicion. Consider testing your program on a smaller data set first (like half the size); if it runs, then I'll put my money on the application being unable to allocate the necessary memory, and being stopped by the kernel before it wraps its memory space.
  • gsamaras
    gsamaras about 9 years
    I am not sure if you answered my 32-bit question. See my edit for the smaller data set.
  • LawfulEvil
    LawfulEvil about 9 years
    Unless you reserve the correct amount of space, std::vector will double its memory footprint each time it needs to grow while you are pushing items into it, e.g. 16 items to 32 items to 64 items to ..., so it might be trying to grow well past the size you need. Try to use reserve to get exactly as many items as your vector needs (see the reserve()/resize() sketch after these comments).
  • Henrik
    Henrik about 9 years
    I am now quite sure that you simply cannot allocate the memory you need, and the probable explanation is that you are trying to allocate more than you are allowed to with a 32-bit application. Regarding a solution, well, LawfulEvil knows his stuff: have you reserved space for the data prior to filling it? If not, that is your solution! Otherwise consider whether it is absolutely necessary to hold the entire data set in memory at once, or, if possible, load a subset, do calculations, unload, load the next subset, and so on and so forth.
  • gsamaras
    gsamaras about 9 years
    @LawfulEvil reserve() is not a good choice here; I would rather suggest resize() to get exactly the memory you require. reserve() may waste some space, I think. I compiled the code on the lab PC, which is 64-bit, Henrik. I will check what was suggested.
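
Regarding the reserve()/resize() discussion in the comments above, here is a minimal sketch of the difference, under the assumption that the data ends up in one big std::vector<float> (illustration only; allocating both vectors at once would need roughly 7.7 GiB):

    #include <cstddef>
    #include <vector>

    int main() {
        const std::size_t n = 1000000ull * 960ull; // total number of floats

        std::vector<float> a;
        a.reserve(n);  // single allocation for exactly n floats; size() stays 0,
                       // and later push_back calls never trigger the doubling growth

        std::vector<float> b;
        b.resize(n);   // also a single allocation, but it value-initializes all
                       // n floats, so size() == n and b[i] can be written directly
        return 0;
    }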
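
And regarding the signal-handler exchange above, the sketch below shows the kind of async-signal-safe logging signal(7)/signal-safety(7) allow, using only write(2). Note that it cannot help with the SIGKILL sent by the OOM killer, which can never be caught (an illustration, not code from the question):

    #include <csignal>
    #include <unistd.h>   // write(), pause(), STDERR_FILENO

    // write(2) is on the async-signal-safe list; printf() and syslog() are not,
    // which is the point being made in the comments.
    extern "C" void on_sigterm(int) {
        const char msg[] = "caught SIGTERM\n";
        (void)write(STDERR_FILENO, msg, sizeof msg - 1);
    }

    int main() {
        std::signal(SIGTERM, on_sigterm); // SIGKILL, by contrast, can never be
        pause();                          // caught, blocked, or ignored
        return 0;
    }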