How to avoid running out of memory in high memory usage application? C / C++

11,430

Solution 1

First, on a 32-bit system you will always be limited to 4 GB of address space, regardless of pagefile settings. (And of that, only 2 GB is available to your process on Windows by default; on Linux, you'll typically have around 3 GB available.)

So the first obvious solution is to switch to a 64-bit OS, and compile your application for 64-bit. That gives you a huge virtual memory space to use, and the OS will swap data in and out of the pagefile as necessary to keep things working.

Second, allocating smaller chunks of memory at a time may help. It's often easier to find four 256 MB chunks of free address space than one contiguous 1 GB chunk.
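As a minimal sketch of this idea (the `ChunkedBuffer` class is hypothetical, not from the original answer): hold a large dataset as a list of fixed-size chunks rather than one contiguous allocation, so the allocator never has to find a single huge free block.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: store data as a list of fixed-size chunks instead of
// one contiguous block. Four 256 MB slabs are often easier for the allocator
// to find than a single contiguous 1 GB region.
class ChunkedBuffer {
public:
    explicit ChunkedBuffer(std::size_t chunkSize) : chunkSize_(chunkSize) {}

    // Appends a byte, growing the buffer one chunk at a time.
    void push(char byte) {
        if (chunks_.empty() || chunks_.back().size() == chunkSize_) {
            chunks_.emplace_back();
            chunks_.back().reserve(chunkSize_);
        }
        chunks_.back().push_back(byte);
        ++size_;
    }

    // Random access: translate the flat index into (chunk, offset).
    char at(std::size_t i) const { return chunks_[i / chunkSize_][i % chunkSize_]; }

    std::size_t size() const { return size_; }

private:
    std::size_t chunkSize_;
    std::size_t size_ = 0;
    std::vector<std::vector<char>> chunks_;
};
```

The trade-off is one extra division per access in exchange for never needing a huge contiguous region.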

Third, split up the problem. Don't process the entire dataset at once, but try to load and process only a small section at a time.
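A sketch of what "split up the problem" can look like in practice (the function and batch size here are illustrative, not from the original answer): stream records through a fixed-size batch so peak memory is bounded by one batch, regardless of input size.

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Illustrative sketch: read an input stream line by line, keep at most
// `batchSize` records in memory, and flush each processed batch before
// reading the next one. Peak memory stays bounded by a single batch.
void processInBatches(std::istream& in, std::ostream& out, std::size_t batchSize) {
    std::vector<std::string> batch;
    batch.reserve(batchSize);
    std::string line;
    while (std::getline(in, line)) {
        batch.push_back(line);
        if (batch.size() == batchSize) {
            for (const auto& rec : batch) out << rec << '\n';  // placeholder "processing"
            batch.clear();
        }
    }
    for (const auto& rec : batch) out << rec << '\n';  // flush the final partial batch
}
```

For a problem like the asker's BSP build, the per-batch results would then be merged in a second pass, as suggested in the question.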

Solution 2

Have you checked to ensure you aren't leaking memory anywhere?

Since your program is portable to Linux, I suggest running it under Valgrind to make sure.
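To illustrate what Valgrind catches, here is a deliberately leaky function (the file name and function are made up for the example):

```cpp
#include <cstddef>
#include <cstdlib>

// Deliberately leaky function for demonstration: no caller ever frees the
// allocation. Build with debug symbols and run under Valgrind's Memcheck:
//
//   g++ -g leak.cpp -o leak
//   valgrind --leak-check=full ./leak
//
// Memcheck reports the "definitely lost" bytes together with the stack
// trace of the allocation that leaked.
char* leakBuffer(std::size_t n) {
    return static_cast<char*>(std::malloc(n));  // caller "forgets" to free this
}
```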

Solution 3

It sounds like you're already taking a SAX-based approach to the XML processing (loading the XML as you go instead of all at once).

The solution is almost always to change the algorithm so that it cuts the problem into smaller parts. Don't allocate as much memory at one time; read in only what you need, process it, then write it out.

You can sometimes extend available memory by spilling intermediate data to the hard drive when your algorithm needs it.

If you can't split up your algorithm, you probably want something like memory mapped files.
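A minimal sketch of the memory-mapped-file approach, assuming a POSIX system (on Windows the equivalent calls are `CreateFileMapping`/`MapViewOfFile`); the function here is illustrative:

```cpp
// POSIX-only sketch (Linux/macOS): map a file into the address space so the
// OS pages data in and out on demand, instead of the program reading the
// whole file into heap memory.
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Counts newline characters in the file; pages are faulted in lazily as the
// loop touches them. Returns -1 on error.
long countNewlines(const char* path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return -1; }
    if (st.st_size == 0)     { close(fd); return 0; }   // mmap rejects length 0

    void* data = mmap(nullptr, static_cast<std::size_t>(st.st_size),
                      PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  // the mapping remains valid after the descriptor is closed
    if (data == MAP_FAILED) return -1;

    const char* bytes = static_cast<const char*>(data);
    long count = 0;
    for (off_t i = 0; i < st.st_size; ++i)
        if (bytes[i] == '\n') ++count;

    munmap(data, static_cast<std::size_t>(st.st_size));
    return count;
}
```

The point is that the working set is managed by the OS page cache, so a multi-gigabyte file can be scanned without the process ever holding it all in RAM.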

In the worst case you can try to use something like VirtualAlloc if you are on a Windows system. If you are on a 32-bit system you can try to use something like Physical Address Extension (PAE).

You could also consider placing input-size limits on your program, with different limits for 32-bit and 64-bit systems.

Solution 4

I suspect your memory issues are from keeping the BSP tree in memory. So keep the BSP on disk and only keep some chunks in memory. This should be fairly easy, as a BSP tree lends itself to this better than many other tree structures, and the logic should be simple. To be both efficient and memory friendly you could have a cache with a dirty flag, with the cache size set to available memory less a bit for breathing room.
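One possible shape for that cache, sketched under assumptions not in the original answer (nodes addressed by a 64-bit id, LRU eviction, and `load`/`store` callbacks standing in for real disk I/O):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <list>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: BSP nodes live on disk, identified by an id, and a
// fixed number stay cached in memory. Each cached node carries a dirty flag,
// so eviction only writes back nodes that actually changed.
struct Node { std::vector<std::uint8_t> payload; };

class NodeCache {
public:
    using Load  = std::function<Node(std::uint64_t)>;
    using Store = std::function<void(std::uint64_t, const Node&)>;

    NodeCache(std::size_t capacity, Load load, Store store)
        : capacity_(capacity), load_(std::move(load)), store_(std::move(store)) {}

    // Fetches a node, reading from disk on a miss and evicting the least
    // recently used entry when the cache is full.
    Node& get(std::uint64_t id, bool willModify) {
        auto it = entries_.find(id);
        if (it == entries_.end()) {
            evictIfFull();
            lru_.push_front(id);
            it = entries_.emplace(id, Entry{load_(id), false, lru_.begin()}).first;
        } else {
            lru_.erase(it->second.lruPos);   // move to front (most recently used)
            lru_.push_front(id);
            it->second.lruPos = lru_.begin();
        }
        it->second.dirty = it->second.dirty || willModify;
        return it->second.node;
    }

    // Writes every dirty node back to disk (e.g. before shutdown).
    void flush() {
        for (auto& kv : entries_)
            if (kv.second.dirty) { store_(kv.first, kv.second.node); kv.second.dirty = false; }
    }

private:
    struct Entry {
        Node node;
        bool dirty;
        std::list<std::uint64_t>::iterator lruPos;
    };

    void evictIfFull() {
        if (entries_.size() < capacity_) return;
        std::uint64_t victim = lru_.back();      // least recently used
        Entry& e = entries_.at(victim);
        if (e.dirty) store_(victim, e.node);     // write back only if changed
        entries_.erase(victim);
        lru_.pop_back();
    }

    std::size_t capacity_;
    Load load_;
    Store store_;
    std::list<std::uint64_t> lru_;
    std::unordered_map<std::uint64_t, Entry> entries_;
};
```

Clean (unmodified) nodes are simply dropped on eviction, so read-heavy traversals cost no disk writes at all.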

Solution 5

Assuming you are using Windows XP, if you are only just over your memory limit and do not desire or have the time to rework the code as suggested above, you can add the /3GB switch to your boot.ini file, and then it is just a matter of setting a linker switch (/LARGEADDRESSAWARE) to get an extra 1 GB of address space.
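Concretely, the two settings look like this (the boot.ini OS entry shown is a typical example; yours will differ):

```
; boot.ini — append /3GB to the existing OS line:
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect /3GB

; Visual C++ linker — mark the executable as able to use the larger space:
link.exe /LARGEADDRESSAWARE ...
```

Both are needed: /3GB changes the kernel/user split, and /LARGEADDRESSAWARE tells Windows the program's pointer arithmetic is safe above 2 GB.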




Updated on November 24, 2020

Comments

  • KPexEA
    KPexEA over 3 years

    I have written a converter that takes OpenStreetMap XML files and converts them to a binary runtime rendering format that is typically about 10% of the original size. Input file sizes are typically 3 GB and larger. The input files are not loaded into memory all at once, but streamed as points and polys are collected; then a BSP is run on them and the file is output. Recently, on larger files, it runs out of memory and dies (the one in question has 14 million points and 1 million polygons). Typically my program is using about 1 GB to 1.2 GB of RAM when this happens. I've tried increasing virtual memory from 2 to 8 GB (on XP) but this change had no effect. Also, since this code is open source I would like to have it work regardless of the available RAM (albeit slower); it runs on Windows, Linux and Mac.

    What techniques can I use to avoid having it run out of memory? Processing the data in smaller sub-sets and then merging the final results? Using my own virtual memory type of handler? Any other ideas?

  • Shay Erlichmen
    Shay Erlichmen about 15 years
    Windows can have 3GB of virtual space with /LARGEADDRESSAWARE
  • josesuero
    josesuero about 15 years
    The flag allows the process to use up to 4GB if the OS can provide it. Normally, Windows is still set up to only give 2GB to each process. That can be changed too, at the risk of driver instability, to give you 3GB, yes. With PAE, you can get even more. But 64-bit is probably a better bet.
  • KPexEA
    KPexEA about 15 years
    Yes I have checked for leaks, there are none.
  • xtofl
    xtofl about 15 years
    Imho, the third option is the most important one. Apart from giving control over memory, it also allows for parallel processing.
  • Štěpán Němejc
    Štěpán Němejc about 15 years
    It's not that simple to use 3GB. You should make sure all your pointer arithmetic operations are safe, or else you'll crash when memory usage gets high. See blogs.msdn.com/oldnewthing/archive/2004/08/12/213468.aspx for more.
  • Trevor Boyd Smith
    Trevor Boyd Smith about 15 years
    RE - Jalf "With PAE": As far as I know, Windows XP doesn't really support the 4 extra address bits that most hardware comes with (see PAE). I get this from Wikipedia's PAE page, which has a link to a Microsoft webpage. Can anyone confirm from experience that WinXP ignores the extra address bits?
  • josesuero
    josesuero about 15 years
    @Trevor: I haven't tried, but I'm pretty sure XP supports it (although of course you have to manually enable it). But does it matter? These days, switching to 64-bit is likely a far better solution.
  • KPexEA
    KPexEA about 15 years
    I am allocating points and polys using a heap manager so they only take as much space as necessary and have almost no overhead, since my heap allocator grabs (in this case) 1 MB chunks and doles out requests from each chunk.
  • Nitin Bhide
    Nitin Bhide about 15 years
    Another reason could be 'memory fragmentation' (i.e. memory is available in small chunks, but when you ask for 1 MB, a contiguous 1 MB chunk is not available). Does the XML parser use the 'heap manager'? Maybe the XML parser is using standard memory allocation and causing fragmentation?
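The chunked heap manager described in the comments above can be sketched as a simple arena allocator (the class and sizes here are illustrative, not the asker's actual code):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Illustrative arena allocator: memory is grabbed from the system in large
// slabs (1 MB by default) and small records are doled out from the current
// slab, so per-allocation overhead and fragmentation stay low. All memory is
// released at once when the arena is destroyed. Assumes each request fits in
// a single slab.
class Arena {
public:
    explicit Arena(std::size_t slabSize = 1 << 20) : slabSize_(slabSize) {}

    void* allocate(std::size_t n) {
        // Round up so every returned pointer is suitably aligned.
        n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
        if (slabs_.empty() || used_ + n > slabSize_) {
            slabs_.push_back(std::make_unique<char[]>(slabSize_));
            used_ = 0;
        }
        void* p = slabs_.back().get() + used_;
        used_ += n;
        return p;
    }

    std::size_t slabCount() const { return slabs_.size(); }

private:
    std::size_t slabSize_;
    std::size_t used_ = 0;
    std::vector<std::unique_ptr<char[]>> slabs_;
};
```

Routing the point/poly records through an arena like this sidesteps the fragmentation the last comment describes, since the general-purpose heap only ever sees large, uniform slab requests.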
