Tomcat garbage collecting frequency

Solution 1

What version of Tomcat are you using?

There's a bug in the memory leak prevention gubbins that causes full GCs on the hour:

https://issues.apache.org/bugzilla/show_bug.cgi?id=53267

(note that the bug report mentions Tomcat 7, but Tomcat 6 is affected too; fixed in 6.0.36)
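If upgrading isn't an option, the workaround discussed on that bug is to switch off the GC-daemon protection feature of the `JreMemoryLeakPreventionListener` in `server.xml`. A sketch, assuming the attribute name from the Tomcat 6/7 docs; check it against your version before relying on it:

```xml
<!-- conf/server.xml: disabling gcDaemonProtection stops the listener's
     hourly sun.misc.GC.requestLatency() call, which is what triggers
     the on-the-hour full GCs described in bug 53267 -->
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"
          gcDaemonProtection="false" />
```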

Solution 2

Garbage collection is a trade-off between memory allocation and performance. Imagine your attic. If you keep throwing things in it, eventually it will get full and you'll have to clean it out. If you have a small attic, you might fill it once a month, but it will only take you 20 minutes to tidy. If you've got a really big attic, it might take you a year to fill, but a weekend to clean up.

The same is true of the JVM. If you allocate 6 GB, but you could really get by with 1.5 GB, then you will have much less frequent garbage collections, but when they do happen, it could stop the world for over a minute.

How much memory is being freed by the scavenge garbage collections?

If the scavenges (copying from Eden to Survivor) are recovering most of the memory, then you have lots of very short-lived objects. If you increase the size of the New Generation, then these garbage collections will become less frequent, but they will take longer. Ideally you want these to be as quick as possible, which means keeping this space small enough that it can sweep quickly, but big enough that short-lived objects don't get promoted to the tenured generation. I wouldn't tinker with the size of the New Generation unless I were sure that short-life (i.e. request) objects are being tenured when they shouldn't be.
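You can answer that question empirically by turning on GC logging and watching the before/after figures on each scavenge line. A sketch using the classic HotSpot (Java 6/7 era) flags; `setenv.sh` and the log path are just the usual Tomcat conventions, adjust to your setup:

```shell
# bin/setenv.sh (or wherever you set CATALINA_OPTS)
CATALINA_OPTS="$CATALINA_OPTS \
  -verbose:gc \
  -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps \
  -XX:+PrintTenuringDistribution \
  -Xloggc:/var/log/tomcat/gc.log"
```

`-XX:+PrintTenuringDistribution` is particularly useful here, since it shows object ages in the survivor spaces and so whether short-lived objects are surviving long enough to be promoted.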

How much memory is freed by your full garbage collection?

If it's taking you a couple of days right now to fill 6 GB, given that the heap accumulates objects from every user session and request you've ever had since the last full GC, I'd suspect that you're freeing a good chunk of your heap with the full GC. If you're not, then you should probably investigate whether you have a memory leak (perhaps a rogue cache). If you are freeing most of the heap with the full GC, you should investigate whether a smaller heap size would make sense.
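One way to quantify "how much is freed" is to pull the numbers straight out of the GC log. A minimal sketch, assuming the classic `-verbose:gc` line format (e.g. `[GC 524288K->43008K(6291456K), 0.0431580 secs]`); real logs have more variants, so treat this as a starting point:

```python
import re

# Matches classic -verbose:gc lines such as:
#   [GC 524288K->43008K(6291456K), 0.0431580 secs]
#   [Full GC 310272K->120832K(6291456K), 1.2345670 secs]
GC_LINE = re.compile(
    r"\[(?P<kind>Full GC|GC)\s+"
    r"(?P<before>\d+)K->(?P<after>\d+)K\((?P<total>\d+)K\),\s+"
    r"(?P<secs>[\d.]+) secs\]"
)

def summarize(log_text):
    """Return a (kind, freed_mb, pause_secs) tuple per GC event."""
    events = []
    for m in GC_LINE.finditer(log_text):
        freed_mb = (int(m.group("before")) - int(m.group("after"))) / 1024.0
        events.append((m.group("kind"), freed_mb, float(m.group("secs"))))
    return events
```

If the `Full GC` events free most of the heap, a smaller heap is worth trying; if they free very little, start hunting for a leak.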

A smaller heap will increase the frequency of full garbage collections, but would make the pauses significantly shorter. In a web application, long stop-the-world pauses for full garbage collections are unacceptable if they happen during a peak period.

If, as you're indicating, the Java heap is potentially being swapped by the OS, you should definitely look at shrinking it. But I wouldn't tamper with the New Generation sizing.

Solution 3

From my experience, I think that the statements

"servers are doing full GC's less than once a day"

and

"there are indications elsewhere in the OS that we might have memory pressures"

contradict each other. Generally the frequency of GC increases with memory churn, and the less often full GC happens, the better.

Remember, full GC means that all the application activity is stopped - sometimes this can take minutes, especially for large heaps!

For example, here I have a number of live running systems with ~1.4K HTTP sessions each, and each box (-Xmx13g) does around 1-2 full GCs per hour. Ideally we would like it to be even less frequent, but due to the nature of the app we can't.

You need to clarify what genuine problem you are trying to solve, because from the information you've given I don't see anything that requires tuning.

Author: Mark

Updated on June 04, 2022

Comments

  • Mark
    Mark over 1 year

    I'm new to Java, and have just inherited a Tomcat setup so I'd like some guidance :) I've read more in the last week about JVM tuning and garbage collecting algorithms than I would like to!

    Using VisualVM/GC our Tomcat servers are doing full GC's less than once a day. Given that most users' web sessions last less than an hour, this to me seems very infrequent, and presumably there are a lot of "dead" objects in perm gen for a long time? So does this just mean that we have plenty of RAM/heap space, and it simply doesn't need to collect so it doesn't?

    Given this, would it be better to make the old gen smaller and the new gen bigger, as the rate of promotion is very small?

    I'm asking, because there are indications elsewhere in the OS that we might have memory pressures, but the JVM/GC logs seems to contradict the OS.

    Related to this -

    We currently have min-heap=max-heap=6Gb. If top is showing a Java process size of 7-8Gb, but an RSS of 5-6Gb, presumably this means that 2Gb is swapped out? In which case it's going to die when it does a full GC. So would it be better to have a smaller min-heap size that GCs more often, before the OS swaps it out?

    Is it generally best to leave the JVM to tune itself rather than getting obsessed with setting all the parameters manually, or do most people set the params manually?

  • Mark
    Mark almost 12 years
    It was kind of a general question to get an idea of what would be a "normal" frequency of GCs and whether my understanding of the perm/eden space was correct. We're using the concurrent GC to try and avoid stop-the-world events. Over time free memory on the servers (as reported by top) goes low (<500Mb) and then the server slows to a crawl. There is currently a proposal that we add more memory to our servers to fix this. To me, it's more indicative of a memory leak somewhere, so we don't need more RAM. I was curious whether having such low rates of full GCs meant that Java had plenty of RAM already.
  • mindas
    mindas almost 12 years
    Ah, I was assuming you're using the old GC, not the new G1 GC. Could've mentioned that.
  • Matt
    Matt almost 12 years
    The cost of collecting Eden generally scales with the size of the live set, not the size of Eden, so the advice to keep Eden small is misleading. One key thing, as you've alluded to, is to ensure the survivor spaces are appropriately sized and MaxTenuringThreshold is set correctly so that inappropriate tenuring doesn't happen.
  • Jonathan
    Jonathan almost 12 years
    Ah cool... yeah, I guess that makes sense re: the live set. Thanks for clarifying.
  • Mark
    Mark almost 12 years
    Cool, thanks! I've realised (using pmap on the Java process) that the reason for our low-memory situation is that occasionally the Java process (but not the Java heap) spikes from about 200Mb to 3Gb! Need to find out what is causing this now. I think I will shrink the heap max size as well, so it GCs more often but with less impact each time.