Java very large heap sizes

Solution 1

If your application is not interactive and GC pauses are not an issue for you, there shouldn't be any problem with 64-bit Java handling very large heaps, even hundreds of GB. We also haven't noticed any stability issues on either Windows or Linux.

However, when you need to keep GC pauses low, things get really nasty:

  1. Forget the default throughput, stop-the-world GC. It will pause your application for several tens of seconds for moderate heaps (< ~30 GB) and several minutes for large ones (> ~30 GB). And buying faster DIMMs won't help.

  2. The best bet is probably the CMS collector, enabled by -XX:+UseConcMarkSweepGC (see the sample command line after this list). The CMS garbage collector stops the application only for the initial marking and remarking phases. For very small heaps (< 4 GB) this is usually not a problem, but for an application that creates a lot of garbage on a large heap, the remarking phase can take quite a long time - usually much less than a full stop-the-world collection, but it can still be a problem for very large heaps.

  3. When the CMS garbage collector is not fast enough to finish its work before the tenured generation fills up, it falls back to the standard stop-the-world GC. Expect pauses of ~30 seconds or more for heaps around 16 GB. You can try to avoid this by keeping your application's long-lived garbage production rate as low as possible. Note that the more cores your application runs on, the bigger this problem gets, because CMS utilizes only one core. Obviously, beware: there is no guarantee that CMS will not fall back to the STW collector. And when it does, it usually happens at peak load, and your application is dead for several seconds. You would probably not want to sign an SLA for such a configuration.

  4. Well, there is that new G1 thing. It is theoretically designed to avoid the problems with CMS, but we have tried it and observed that:

    • Its throughput is worse than that of CMS.
    • In theory it should avoid collecting the popular blocks of memory first; however, it soon reaches a state where almost all blocks are "popular", and the assumptions it is based on simply stop working.
    • Finally, the stop-the-world fallback still exists for G1; ask Oracle when that code is supposed to run. If they say "never", ask them why the code is there. So IMHO G1 really doesn't make the huge-heap problem of Java go away, it only makes it (arguably) a little smaller.
  5. If you have the bucks for a big server with lots of memory, you probably also have the bucks for a good, commercial, hardware-accelerated, pauseless GC technology, like the one offered by Azul. We have one of their servers with 384 GB RAM and it really works fine - no pauses, zero lines of stop-the-world code in the GC.

  6. Write the damn part of your application that requires lots of memory in C++, like LinkedIn did with social graph processing. You still won't avoid all the problems this way (e.g. heap fragmentation), but it will definitely be easier to keep the pauses low.
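
As a concrete illustration of points 2 and 3, a CMS command line for a large heap might look something like the sketch below. The heap size and occupancy threshold are illustrative assumptions, not recommendations, and myapp.jar is a placeholder. As a commenter notes below, -XX:CMSInitiatingOccupancyFraction can mitigate the "CMS can't finish before the tenured generation fills up" problem by starting the concurrent cycle earlier:

    java -Xms32g -Xmx32g \
         -XX:+UseConcMarkSweepGC \
         -XX:+CMSParallelRemarkEnabled \
         -XX:CMSInitiatingOccupancyFraction=70 \
         -XX:+UseCMSInitiatingOccupancyOnly \
         -jar myapp.jar

Setting -Xms equal to -Xmx avoids heap resizing, and -XX:+UseCMSInitiatingOccupancyOnly makes the JVM honor the configured threshold instead of its own heuristic.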

Solution 2

I am CEO of Azul Systems so I am obviously biased in my opinion on this topic! :) That being said...

Azul's CTO, Gil Tene, has a nice overview of the problems associated with Garbage Collection and a review of various solutions in his Understanding Java Garbage Collection and What You Can Do about It presentation, and there's additional detail in this article: http://www.infoq.com/articles/azul_gc_in_detail.

Azul's C4 Garbage Collector in our Zing JVM is both parallel and concurrent, and uses the same GC mechanism for both the new and old generations, working concurrently and compacting in both cases. Most importantly, C4 has no stop-the-world fallback. All compaction is performed concurrently with the running application. We have customers running very large heaps (hundreds of GB) with worst-case GC pause times of under 10 msec, and depending on the application, often under 1-2 msec.

The problem with CMS and G1 is that at some point Java heap memory must be compacted, and both of those garbage collectors stop the world (i.e. pause the application) to perform compaction. So while CMS and G1 can push out STW pauses, they don't eliminate them. Azul's C4, however, completely eliminates STW pauses, and that's why Zing has such low GC pauses even for gigantic heap sizes.

Solution 3

We have an application that we allocate 12-16 GB for, but it really only reaches 8-10 GB during normal operation. We use the Sun JVM (we tried IBM's and it was a bit of a disaster, but that might just have been ignorance on our part... I have friends at IBM who swear by it). As long as you give your app breathing room, the JVM can handle large heap sizes without too much GC. Plenty of 'extra' memory is key.
Linux is almost always more stable than Windows, and when it is not stable it is a hell of a lot easier to figure out why. Solaris is rock solid as well, and you get DTrace too :) With these kinds of loads, why on earth would you be using Vista or XP? You are just asking for trouble. We don't do anything fancy with the GC params. We do set the minimum allocation equal to the maximum so the JVM is not constantly trying to resize, but that is it.
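
For what it's worth, a minimal sketch of the invocation described above, with the minimum allocation pinned to the maximum so the heap never resizes (the 16 GB figure and server.jar are illustrative placeholders):

    java -Xms16g -Xmx16g -jar server.jar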

Solution 4

I have used over 60 GB heap sizes on two different applications under Linux and Solaris respectively using 64-bit versions (obviously) of the Sun 1.6 JVM.

I never encountered garbage collection problems with the Linux-based application except when pushing up near the heap size limit. To avoid the thrashing problems inherent to that scenario (too much time spent doing garbage collection), I simply optimized memory usage throughout the program so that peak usage was about 5-10% below a 64 GB heap size limit.

With a different application running under Solaris, however, I encountered significant garbage-collection problems which made it necessary to do a lot of tweaking. This consisted primarily of three steps:

  1. Enabling/forcing use of the parallel garbage collector via the -XX:+UseParallelGC -XX:+UseParallelOldGC JVM options, as well as controlling the number of GC threads used via the -XX:ParallelGCThreads option. See "Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning" for more details.

  2. Extensive and seemingly ridiculous setting of local variables to null after they were no longer needed. Most of these were variables that should have been eligible for garbage collection after going out of scope, and they were not memory-leak situations since the references were not copied. However, this "hand-holding" strategy to aid garbage collection was inexplicably necessary for this application on the Solaris platform in question.

  3. Selective use of the System.gc() method call in key code sections after extensive periods of temporary object allocation (see the sketch below). I'm aware of the standard caveats against using these calls, and the argument that they should normally be unnecessary, but I found them to be critical in taming garbage collection in this memory-intensive application.

The three steps above made it feasible to keep this application contained and running productively at around 60 GB of heap usage, instead of growing out of control up to the 128 GB heap size limit that was in place. The parallel garbage collector was particularly helpful, since major garbage-collection cycles are expensive when there are a lot of objects; i.e., the time required for a major garbage collection is a function of the number of objects in the heap.
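
For illustration, here is a rough sketch of steps 2 and 3 in Java. The class and method names are hypothetical, and the launch flags in the comment are the ones from step 1 (the thread count is an illustrative assumption):

    // Launched with something like:
    //   java -Xmx60g -XX:+UseParallelGC -XX:+UseParallelOldGC \
    //        -XX:ParallelGCThreads=16 BatchJob
    public class BatchJob {

        public static void main(String[] args) {
            for (int batch = 0; batch < 100; batch++) {
                byte[][] scratch = buildTemporaryData(); // burst of short-lived allocation
                process(scratch);
                scratch = null; // step 2: explicitly drop the reference
                System.gc();    // step 3: hint a collection after the allocation burst
            }
        }

        // Allocates ~1 GB of temporary objects per batch (needs a large -Xmx to run).
        private static byte[][] buildTemporaryData() {
            byte[][] data = new byte[1024][];
            for (int i = 0; i < data.length; i++) {
                data[i] = new byte[1024 * 1024];
            }
            return data;
        }

        private static void process(byte[][] data) {
            // stand-in for the real memory-intensive work
        }
    }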

I cannot comment on other platform-specific issues at this scale, nor have I used non-Sun (Oracle) JVMs.

Solution 5

12 GB should be no problem with a decent JVM implementation such as Sun's HotSpot. I would advise you to use the Concurrent Mark and Sweep collector (-XX:+UseConcMarkSweepGC) when using a Sun VM. Otherwise you may face long "stop the world" phases, where all threads are stopped during a GC.

The OS should not make a big difference for the GC performance.

You will, of course, need a 64-bit OS and a machine with enough physical RAM.
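
A minimal example of such a setup on a Sun VM, assuming the 12 GB heap from the question (-d64 selects the 64-bit VM on Solaris/Linux; app.jar is a placeholder):

    java -d64 -Xms12g -Xmx12g -XX:+UseConcMarkSweepGC -jar app.jar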

Comments

  • skinnypinny
    skinnypinny almost 2 years

    Does anyone have experience with using very large heaps, 12 GB or higher in Java?

    • Does the GC make the program unusable?
    • What GC params do you use?
    • Which JVM, Sun or BEA, would be better suited for this?
    • Which platform, Linux or Windows, performs better under such conditions?
    • In the case of Windows, is there any performance difference between 64-bit Vista and XP under such high memory loads?
  • jlintz
    jlintz over 15 years
    because with a heap size that large, you should be looking to reduce the memory footprint as well as optimizing the JVM
  • user2519001
    user2519001 over 15 years
    Or use the 64 bit version of XP. ;)
  • TM.
    TM. over 15 years
    This isn't a limitation of XP, it's a limitation of any 32-bit OS that doesn't use PAE.
  • matbrgz
    matbrgz over 14 years
    What Java version was that, and would you have time to do it again today? The numbers would be very interesting.
  • ShabbyDoo
    ShabbyDoo over 14 years
    I'm not consulting for the same company anymore, so I don't even have the environment to try this out. It was a JDK1.5 JRockit, IIRC.
  • James
    James over 14 years
    It's a limitation of all 32-bit OSs, even those that use PAE.
  • Paolo Dragone
    Paolo Dragone over 14 years
    @James, if you are using PAE you will see the entire 4 GB; if you don't have PAE, you lose the RAM that overlaps the address ranges mapped to devices (graphics cards, etc.).
  • Ian Ringrose
    Ian Ringrose about 14 years
    I would not say that Linux is more stable than Windows; however, it is very possible that Sun tests its JVM more on Unix and Linux than it does on Windows.
  • Stephan Eggermont
    Stephan Eggermont over 12 years
    5. Unlikely. 192MB machine is about EUR15K. Azul pricing is enterprise, isn't it?
  • jbellis
    jbellis over 12 years
    This is easily the best summary here. I'd add two things: (1) the CMSInitiatingOccupancyFraction can mitigate the "CMS can't finish before old gen fills up" problem, but (2) unlike the throughput collector, CMS does not compact the heap so fragmentation will usually force STW GC eventually.
  • om-nom-nom
    om-nom-nom over 10 years
    @StephanEggermont you meant 192 GB machine, right?
  • Stephan Eggermont
    Stephan Eggermont over 10 years
    @om-nom-nom yes, that's right. Can't edit comments a day later, unfortunately
  • Chad Wilson
    Chad Wilson over 10 years
    After about 6 emails back and forth with one of your sales folks, I gave up on getting pricing information. A solution you can't even evaluate isn't a solution.
  • Frans
    Frans over 4 years
    Agree. Unless you have a very special kind of application, you should not need 12GB of heap. That normally points to bad coding practices, e.g. loading big things into RAM at once that you should stream instead. Do that right and your application scales well too. Do it wrong and you'll have to keep increasing your heap size as your app gets busier / processes larger volumes of data.