Tomcat java.lang.OutOfMemoryError: GC overhead limit exceeded
Solution 1
GC overhead limit exceeded
usually implies the application is leaking memory somewhere. The heap is nearly full, say 98-99% used, and a full GC reclaims maybe a percent or two, so the JVM ends up spending most of its time garbage collecting. The JVM tracks how much time it spends in GC, and when that fraction exceeds a limit, this is the error that is thrown.
To resolve it, you will need to find where the leak is occurring. Do this by taking a heap dump; you can use jmap for that. Once you have it, you will likely see that most of the heap belongs to one set of objects.
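As a concrete sketch of the step above (the PID and file name are placeholders, not from the thread), a heap dump can be captured with the standard JDK tools:

```shell
# Find the PID of the Tomcat JVM
jps -l

# Dump the live objects in the heap to a binary file (<pid> is the Tomcat PID)
jmap -dump:live,format=b,file=heap.hprof <pid>

# Quick per-class histogram, often enough to spot the biggest offender
jmap -histo:live <pid> | head -n 20
```

The resulting heap.hprof file can then be opened in a tool such as Eclipse Memory Analyzer, which is what ends up being used later in this thread.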
We tried JAVA_OPTS="-Xms4096m -Xmx8192m". Looks like we still get the error. Please suggest possible options that we could try.
That's a lot of heap, and it only delays the inevitable.
Edit, from your update:
As I expected, your OldGen space is at 99% with little reclaimed. OldGen is where all long-lived objects are placed. Since some memory is never being reclaimed, those objects all eventually end up in OldGen and you run out of memory.
What's worth reading are the two lines here:
ParOldGen total 2796224K, used 2796223K [0x0000000700000000, 0x00000007aaab0000, 0x00000007aaab0000) object space 2796224K, 99%
Full GC [PSYoungGen: 374720K->339611K(701120K)] [ParOldGen: 2796223K->2796222K(2796224K)] 3170943K->3135834K(3497344K)
Like I mentioned, OldGen is at 99%, and a Full GC only reclaims about 1 KB of OldGen (and roughly 35 MB of YoungGen). It will have to GC again almost immediately; it should be reclaiming GBs at this point.
So:
Get a heap dump and find out what the greatest offender is. Investigate where those objects are being created and why they never become unreachable.
If you have any other questions about how/where or why let me know, but there is nothing else I can tell you at this point.
Solution 2
The detail message "GC overhead limit exceeded" indicates that the garbage collector is running all the time and the Java program is making very slow progress.
This can be fixed in two ways:
1) By suppressing the GC overhead limit check with a JVM parameter, e.g. -Xms1024M -Xmx2048M -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit
The -XX:-UseGCOverheadLimit flag disables the GC overhead check; note that this only masks the symptom, and a leaking application will still eventually run out of memory.
2) By identifying and fixing the memory leak.
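As a sketch of where such flags would go for Tomcat (the values simply mirror the example above, and the file location is the conventional one picked up by catalina.sh):

```shell
# $CATALINA_HOME/bin/setenv.sh -- sourced by Tomcat's startup scripts
# -XX:-UseGCOverheadLimit disables the overhead check; the leak remains.
CATALINA_OPTS="-Xms1024M -Xmx2048M -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit"
export CATALINA_OPTS
```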
Uppi
Updated on October 15, 2020

Comments
-
Uppi over 3 years
We are trying to migrate our application from OC4J to Tomcat 7.0. The application works fine with OC4J but, in Tomcat, performance degrades when running a load test with 10 users. We get these errors and the application doesn't respond anymore.
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ajp-bio-8009-exec-231" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ContainerBackgroundProcessor[StandardEngine[Catalina]]" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ajp-bio-8009-exec-236" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ajp-bio-8009-exec-208" java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Thread-33" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ajp-bio-8009-exec-258" java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
We tried
JAVA_OPTS="-Xms4096m -Xmx8192m"
Looks like we still get the error. Please suggest possible options that we could try.

Garbage collection logs:
GC invocations=593 (full 539):
PSYoungGen total 701120K, used 374720K [0x00000007aaab0000, 0x00000007eaf60000, 0x0000000800000000)
  eden space 374720K, 100% used [0x00000007aaab0000,0x00000007c18a0000,0x00000007c18a0000)
  from space 326400K, 0% used [0x00000007d70a0000,0x00000007d70a0000,0x00000007eaf60000)
  to   space 339328K, 0% used [0x00000007c18a0000,0x00000007c18a0000,0x00000007d6400000)
ParOldGen total 2796224K, used 2796223K [0x0000000700000000, 0x00000007aaab0000, 0x00000007aaab0000)
  object space 2796224K, 99% used [0x0000000700000000,0x00000007aaaaffe8,0x00000007aaab0000)
PSPermGen total 50688K, used 50628K [0x00000006fae00000, 0x00000006fdf80000, 0x0000000700000000)
  object space 50688K, 99% used [0x00000006fae00000,0x00000006fdf713a8,0x00000006fdf80000)
4482.450: [Full GC [PSYoungGen: 374720K->339611K(701120K)] [ParOldGen: 2796223K->2796222K(2796224K)] 3170943K->3135834K(3497344K) [PSPermGen: 50628K->50628K(50688K)], 1.4852620 secs]
-
Uppi over 9 years
I can set the permsize, Kevin. I will add the parameter and perform the load tests. Also, do we have any other garbage collection behavior options?
-
jezg1993 over 9 years
The GC overhead limit isn't the perm generation running out.
-
Uppi over 9 years
John, I ran the trace and this is what I see for garbage collection ("Heap before"). I have modified the question now.
-
Uppi over 9 years
Thanks John... I will do it and get back to you.
-
Uppi about 9 years
Hi John... I have run the Java heap dump, and the top consumers are: org.apache.catalina.loader.StandardClassLoader @ 0x782b5f140 722 35,128 23,037,032 59.15%; org.apache.catalina.loader.WebappClassLoader @ 0x78274a710 259 4,384 11,372,480 29.20%; <system class loader>
-
jezg1993 about 9 years
This is while it's lagging excessively?
-
Uppi about 9 years
No... when it is normal. I ran the dump on my local machine and got these results. Do you want me to run the heap dump when it is lagging?
-
jezg1993 about 9 years
@user3900548 Yes, absolutely. What you have there is the heap when it's in a fine state. This doesn't tell us what is using up all the memory.
-
jezg1993 about 9 years
So run the application under load and capture the heap while it is lagging and throwing the GC overhead limit error. Instead of numbers like these, you should see one set of objects owning something like 85% of the heap (or some similarly large share), and it won't be a class loader.
-
Uppi about 9 years
John... this is what I found from the load test... One instance of "org.apache.catalina.session.StandardManager" loaded by "org.apache.catalina.loader.StandardClassLoader @ 0x70a82aeb0" occupies 3,167,347,800 (97.19%) bytes. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "<system class loader>". Keywords: org.apache.catalina.loader.StandardClassLoader @ 0x70a82aeb0, org.apache.catalina.session.StandardManager, java.util.concurrent.ConcurrentHashMap$Segment[]
-
jezg1993 about 9 years
So that's it. Somewhere in your app you have a ConcurrentHashMap that never removes elements. 97% of the heap is HUGE. Do you have some caching library in your app? Maybe Ehcache? What you can do here is look at the ConcurrentHashMap and check its incoming references, i.e. where it's being referenced from.
-
jezg1993 about 9 years
A very naive guess, and this is without any other data so take it with a grain of salt, is that you have an unbounded (or nearly unbounded) cache that never frees up or removes data. The result is an ever-growing heap and, inevitably, the GC overhead limit being reached.
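The unbounded-cache diagnosis above later turns out to be the session map rather than an application cache, but the general fix is the same: bound the collection. A minimal sketch (class name and sizes are illustrative, not from the thread) of a size-bounded LRU map built on java.util.LinkedHashMap:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache {
    // A size-bounded LRU cache: once maxEntries is reached, the
    // least-recently-accessed entry is evicted instead of the map
    // growing without limit and eventually filling OldGen.
    static <K, V> Map<K, V> newLruCache(final int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) { // true = access order
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = newLruCache(3);
        for (int i = 0; i < 10; i++) {
            cache.put(i, "value-" + i);
        }
        // The cache never holds more than 3 entries, so the heap
        // stays bounded no matter how many keys are inserted.
        System.out.println(cache.size()); // prints 3
    }
}
```

The design point is simply that every long-lived map needs some eviction policy (size cap, TTL, weak references); which one fits depends on the application.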
-
jezg1993 about 9 years
But this is what I was expecting, so you'll have to go into the heap now and look at who owns this ConcurrentHashMap and why it continues to grow. Without a good deal more data it's hard for me to tell exactly what the problem is, but you have all the data you need right now in that heap.
-
Uppi about 9 years
I was looking at the forums and came across this question, John... stackoverflow.com/questions/3959122/… It looks like the same issue... My Eclipse Memory Analyzer is still running and I am waiting for more results.
-
Uppi about 9 years
We are not using any caching library. For the database, I suppose if Hibernate is there, Ehcache is used by default!?
-
jezg1993 about 9 years
Hibernate can use Ehcache, but from what I remember you have to set that up explicitly as an L2 cache. You'll have to do some investigating as to who holds a reference to the CHM. I can help if there is more information about the referent of the CHM.
-
Uppi about 9 years
One more question, John... OC4J was handling the application like a champ with less memory!! Why not Tomcat??
-
jezg1993 about 9 years
The only difference I can think of in that case is something with sessions being prolonged in Tomcat that weren't in OC4J. If you observe that the ConcurrentHashMap is in fact an HTTP session store, you may be onto something.
-
jezg1993 about 9 years
As a proof of concept, try invalidating the HTTP session after each individual test case. So: log in -> do test -> log out (session.invalidate()).
-
jezg1993 about 9 years
@upagna You know what, it may be that. Notice the root of the CHM is
org.apache.catalina.session.StandardManager
-
jezg1993 about 9 years
tomcat.apache.org/tomcat-6.0-doc/api/org/apache/catalina/… You may want to see what
maxActiveSessions
is set to. If it's unbounded (i.e. -1), limit it to a small number.
-
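For reference, and as a sketch only (the limit of 500 is an arbitrary illustration, not a recommendation), maxActiveSessions is set on the Manager element in the webapp's context.xml:

```xml
<Context>
  <!-- Cap concurrent sessions; new sessions are rejected once the cap is reached -->
  <Manager className="org.apache.catalina.session.StandardManager"
           maxActiveSessions="500"/>
</Context>
```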
Uppi about 9 years
Hi John... solved it by creating a file store. It's a nice workaround, as we are planning to revamp the application.
-
Uppi about 9 years
@John... It was sessions that consumed all the memory... We ended up creating a temp file store that holds all the sessions, using "org.apache.catalina.session.PersistentManager". Thanks for your help; I wouldn't have solved it without you!!
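A sketch of the kind of configuration this comment describes (the maxIdleSwap value and directory name are illustrative): PersistentManager with a FileStore swaps idle sessions out of the heap to disk, which is why it relieved the memory pressure here:

```xml
<Context>
  <!-- Swap sessions idle for 60+ seconds out of the heap into files on disk -->
  <Manager className="org.apache.catalina.session.PersistentManager"
           maxIdleSwap="60">
    <Store className="org.apache.catalina.session.FileStore"
           directory="sessions"/>
  </Manager>
</Context>
```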
-
jezg1993 about 9 years
Great to hear! This type of debugging is a different and fun part of programming, congrats!
-
Balaji Boggaram Ramanarayan almost 8 years
There is no permanent generation in Java 8; class metadata now lives in Metaspace (native memory) instead.
-
Alexis Dufrenoy over 7 years
I get exactly the same problem (java.lang.OutOfMemoryError: GC overhead limit exceeded) while migrating from WebLogic 10 / JDK 1.6 to Tomcat 7 / JDK 1.7. Could it be the same cause?
-
jezg1993 over 7 years
If the code didn't change, I'd bet it's more likely you simply didn't give it enough Old Gen space. Check the difference in
Xmx
between your two deployments.