Java using up far more memory than allocated with -Xmx

Solution 1

The top command reflects the total amount of memory used by the Java process. This includes, among other things:

  • a basic memory overhead of the JVM itself
  • the heap space (bounded by -Xmx)
  • the permanent generation space (-XX:MaxPermSize; not standard in all JVMs)
  • thread stack space (-Xss per stack), which may grow significantly depending on the number of threads
  • space used by native allocations (e.g. via the ByteBuffer class, or JNI)
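To see the gap for yourself, you can compare what the JVM reports as its own heap ceiling against what top shows for the same process; a minimal sketch (the exact figures depend on your JVM and flags):

```java
public class HeapVsRss {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() roughly corresponds to -Xmx; top's RSS for the same
        // process will be larger because of the non-heap areas listed above
        System.out.println("Heap max (-Xmx): " + rt.maxMemory() / mb + " MB");
        System.out.println("Heap committed:  " + rt.totalMemory() / mb + " MB");
        System.out.println("Heap free:       " + rt.freeMemory() / mb + " MB");
    }
}
```

Run it with, say, `java -Xmx50m HeapVsRss` and compare the printed heap max against the RES column in top for the same pid.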

Solution 2

Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss]

Here -Xmx is the max heap memory, -Xms the min heap memory, -Xss the per-thread stack size, and -XX:MaxPermSize the permanent generation limit.

The following example illustrates this situation. I launched my Tomcat with the following startup parameters:

-Xmx168m -Xms168m -XX:PermSize=32m -XX:MaxPermSize=32m -Xss1m
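Plugging those flags into the formula above, with an assumed thread count (25 is purely for illustration, not a measured value):

```java
public class MaxMemoryEstimate {
    public static void main(String[] args) {
        int xmxMb = 168;        // -Xmx168m
        int maxPermSizeMb = 32; // -XX:MaxPermSize=32m
        int xssMb = 1;          // -Xss1m, per thread
        int threads = 25;       // assumed thread count, for illustration

        // Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss]
        int estimateMb = xmxMb + maxPermSizeMb + threads * xssMb;
        System.out.println("Estimated max memory: " + estimateMb + " MB"); // 225 MB
    }
}
```

Note this is only a lower-bound estimate; the JVM's own overhead and native allocations come on top of it.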

Solution 3

With -Xmx you are configuring the heap size. To configure the stack size, use the -Xss parameter. The sum of those two parameters should be approximately what you want:

-Xmx150m -Xss50m

for example.

Additionally, there is the -XX:MaxPermSize parameter, which controls the size of the permanent generation. Its default value is 32 MB for -client and 64 MB for -server. Factor it into your calculation as well. PermGen space is:

The permanent generation is used to hold reflective data of the VM itself, such as class objects and method objects.

So basically it stores internal data of the JVM, like class definitions and interned strings.
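In the JVM versions this answer targets (pre-Java 7), the interned-string pool lives in PermGen; a small demo of interning itself:

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = new String("hello"); // a fresh object on the heap
        String b = a.intern();          // the canonical copy from the string pool

        System.out.println(a == "hello"); // false: distinct heap object
        System.out.println(b == "hello"); // true: same pooled instance
    }
}
```

Every string literal and every explicit intern() call adds to that pool, which is one way PermGen can fill up independently of -Xmx.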

Finally, there is one part you can't control: the memory used by the native Java process itself. Java is a program like any other, so it uses memory too. If you watch memory usage in Task Manager you will see this memory as well, together with your program's consumption.

Solution 4

It's important to note that "total memory used" (RSS in Linux land) includes JDK heap (+ other JDK areas) as well as any "native memory" allocated.

For instance, these people found that allocating too many JAXBContexts (which have associated native memory) between GCs could cause it to use a lot of extra RAM. Another common one is apparently java.util.zip.Inflater if you don't call end() on it (or a GZIPInputStream that isn't closed, etc.)

http://sleeplessinslc.blogspot.com/2014/08/jvm-native-memory-leak.html

His final workaround/fix was either to GC "more often" (by using the G1 garbage collector, or specifying a smaller [ironically] -Xmx setting) or to cache the JAXBContext objects (since they have no close method, you can't control the leak otherwise).

Also note that sometimes you can find memory culprits by just examining a jstack dump: http://javaeesupportpatterns.blogspot.com/2011/09/jaxbcontext-performance-problem-case.html

It's also sometimes possible to accidentally "miss" closing, for instance, GZIP streams: http://kohsuke.org/2011/11/03/quiz-time-memory-leak-in-java
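The usual guard against that kind of leak is try-with-resources (Java 7+), which guarantees close() runs even on exceptions; a sketch:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipClose {
    // Compresses data; the try-with-resources block closes the stream,
    // which releases the native zlib buffers backing it
    public static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] out = gzip("hello".getBytes());
        // gzip output always starts with the magic bytes 0x1f 0x8b
        System.out.println(out.length + " bytes, magic ok: "
                + ((out[0] & 0xff) == 0x1f && (out[1] & 0xff) == 0x8b));
    }
}
```

Without the close, the Java object is small, so heap pressure stays low and GC has no reason to run, while the native buffers pile up unseen by -Xmx.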

Solution 5

Have you tried using JVisualVM?

http://docs.oracle.com/javase/6/docs/technotes/tools/share/jvisualvm.html

I've often found it helps me track this stuff down. It will show you how much of each kind of memory is being used, and even lets you drill in and find out what is using it.

Author: dspyz

Updated on October 11, 2020

Comments

  • dspyz
    dspyz over 3 years

    I have a project I'm writing (in Java) for a class where the prof says we're not allowed to use more than 200m. I limit the heap memory to 50m (just to be absolutely sure) with -Xmx50m, but according to top, it's still using 300m

    I tried running Eclipse Memory Analyzer and it reports only 26m

    Could this all be memory on the stack? I'm pretty sure I never go more than about 300 method calls deep (yes, it is a recursive DFS search), so that would mean every stack frame is using up almost a megabyte, which seems hard to believe.

    The program is single-threaded. Does anyone know any other places in which I might reduce memory usage? Also, how can I check/limit how much memory the stack is using?

    UPDATE: I'm using the following JVM options now with no effect (still about 300m according to top): -Xss104k -Xms40m -Xmx40m -XX:MaxPermSize=1k

    Another UPDATE: Actually, if I let it run a little longer (with all these options), about half the time it suddenly drops to 150m after 4 or 5 seconds (the other half it doesn't drop). What makes this really strange is that my program has nothing stochastic in it (and as I said, it's single-threaded), so there's no reason it should behave differently on different runs

    Could it have something to do with the JVM I'm using?

    java version "1.6.0_27"
    OpenJDK Runtime Environment (IcedTea6 1.12.3) (6b27-1.12.3-0ubuntu1~10.04)
    OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
    

    According to java -h, the default JVM is -server. I tried adding -cacao and now (with all the other options) it's only 59m. So I suppose this solves my problem. Can anyone explain why this was necessary? Also, are there any drawbacks I should know about?

    One more update: cacao is really really slow compared to server. This is an awful option

  • danpaq
    danpaq about 11 years
    Don't forget PermGen space
  • dspyz
    dspyz about 11 years
    Thank you, I just found and tried that. It made no difference. I'm not using up stack memory. Where else could it be coming from if it's not heap or stack?
  • dspyz
    dspyz about 11 years
    What's PermGen space and how do I limit it?
  • dspyz
    dspyz about 11 years
    No byte buffers (unless something in the collections framework uses them), limiting PermGen doesn't seem to change anything. I do call System.arraycopy which I believe uses JNI, right? Could that be it?
  • danpaq
    danpaq about 11 years
    Loaded classes go there and I think maybe some statics. You can set it with -XX:MaxPermSize=128M
  • Eyal Schneider
    Eyal Schneider about 11 years
    @dspyz: I don't think arrayCopy uses extra space outside of the heap space. It copies data from one heap location to another. I would re-check the number of threads spawned by your application.
  • dspyz
    dspyz about 11 years
    No, single-threaded. All I see (in eclipse debugger) is Main thread, Signal Dispatcher, Finalizer, and Reference Handler
  • dspyz
    dspyz about 11 years
    Yes, but I've run java programs which easily use less than 300m before. The memory used by the native java process is just constant overhead, isn't it?
  • Eyal Schneider
    Eyal Schneider about 11 years
    @dspyz: What about shared memory of the process? I think you can check it with jmap (jmap <process id>). When you start multiple java processes, their total memory is not the sum as displayed by top, since they use shared libraries.
  • partlov
    partlov about 11 years
    Not constant. That is a live process; it changes over time. There is also a lot of difference between versions. I can see that the memory used by Java 1.6 is more than by 1.5.
  • dspyz
    dspyz about 11 years
    When I say "before", I mean a couple weeks ago. Our prof puts the same 200m restriction on all our projects. I've never seen this happen before.
  • Ingo
    Ingo about 11 years
    It also includes the size of shared libraries/dlls, and maybe the space the OS uses to cache loaded class files, etc.
  • dspyz
    dspyz about 11 years
    There doesn't seem to be a version available for linux
  • danpaq
    danpaq about 11 years
    It should be in the bin directory of the JDK (not the JRE); which may not be on your PATH.
  • Ivan Balashov
    Ivan Balashov about 9 years
    Also, GC needs some memory stackoverflow.com/questions/4854599/…
  • deFreitas
    deFreitas almost 8 years
    PermGen will die, forget it
  • mrts
    mrts about 6 years
    Note that -Xss sets single thread stack size. The total stack memory usage will be -Xss * number of threads.