java.lang.OutOfMemoryError: GC overhead limit exceeded

Solution 1

You're essentially running out of memory to run the process smoothly. Options that come to mind:

  1. Specify more memory, as you mentioned; try something in between, like -Xmx512m, first
  2. Work with smaller batches of HashMap objects to process at once, if possible
  3. If you have a lot of duplicate strings, use String.intern() on them before putting them into the HashMap
  4. Use the HashMap(int initialCapacity, float loadFactor) constructor to tune for your case (a sketch of points 3 and 4 follows this list)
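
As a rough illustration of points 3 and 4, here is a minimal sketch. The class and method names are made up, and the expected entry count of 20 (taken from the question's 15-20 entries per map) and the default 0.75 load factor are assumptions to adjust for your own data:

import java.util.HashMap;
import java.util.Map;

public class SmallMapBuilder {
    // Assumed per-map entry count; the question mentions 15-20 text entries.
    private static final int EXPECTED_ENTRIES = 20;

    static Map<String, String> buildMap(String[][] rows) {
        // Pre-size the map so it never has to resize; 0.75f is the default load factor.
        int initialCapacity = (int) (EXPECTED_ENTRIES / 0.75f) + 1;
        Map<String, String> map = new HashMap<>(initialCapacity, 0.75f);
        for (String[] row : rows) {
            // intern() returns the canonical instance of each string, so duplicates
            // shared across hundreds of thousands of maps are stored only once.
            map.put(row[0].intern(), row[1].intern());
        }
        return map;
    }
}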

Solution 2

The following worked for me. Just add this snippet:

dexOptions {
        javaMaxHeapSize "4g"
}

To your build.gradle:

android {
    compileSdkVersion 23
    buildToolsVersion '23.0.1'

    defaultConfig {
        applicationId "yourpackage"
        minSdkVersion 14
        targetSdkVersion 23
        versionCode 1
        versionName "1.0"

        multiDexEnabled true
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }

    packagingOptions {

    }

    dexOptions {
        javaMaxHeapSize "4g"
    }
}

Solution 3

@takrl: The default setting for this option is:

java -XX:-UseConcMarkSweepGC

which means this option is not active by default. So when you say you used the option "+XX:UseConcMarkSweepGC", I assume you were using this syntax:

java -XX:+UseConcMarkSweepGC

which means you were explicitly activating this option. For the correct syntax and the default settings, refer to the Java HotSpot VM Options document.
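
If you want to confirm which collector a particular combination of flags actually activated, here is a minimal Java sketch using the standard java.lang.management API; the class name is an assumption, and the names it prints (for example "ConcurrentMarkSweep") depend on your JVM version and flags:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ShowActiveCollectors {
    public static void main(String[] args) {
        // One bean per collector the running JVM is actually using,
        // e.g. "ParNew" and "ConcurrentMarkSweep" when CMS is enabled.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}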

Solution 4

For the record, we had the same problem today. We fixed it by using this option:

-XX:-UseConcMarkSweepGC

Apparently, this modified the strategy used for garbage collection, which made the issue disappear.

Solution 5

Ummm... you'll either need to:

  1. Completely rethink your algorithm and data structures, so that they don't need all these little HashMaps.

  2. Create a facade which allows you to page those HashMaps in and out of memory as required. A simple LRU cache might be just the ticket (see the sketch after this list).

  3. Up the memory available to the JVM. If necessary, even purchasing more RAM might be the quickest, CHEAPEST solution, if you administer the machine that hosts this beast. Having said that, I'm generally not a fan of "throw more hardware at it" solutions, especially if an alternative algorithmic solution can be devised within a reasonable timeframe. If you keep throwing more hardware at every one of these problems, you soon run into the law of diminishing returns.
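
As a rough illustration of the LRU idea in point 2, here is a minimal sketch built on LinkedHashMap's access-order mode; the cache size of 1000 and the value type in the usage comment are assumptions, and the eviction hook is where a real facade would persist the evicted map (e.g. write it to the database) before letting it go:

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: entries are ordered by access, and the
// least-recently-used one is dropped once maxEntries is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order instead of insertion order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true evicts the eldest entry; a real facade would
        // first flush it to the database or to disk.
        return size() > maxEntries;
    }
}

// Hypothetical usage: keep at most 1000 of the small maps in memory.
// LruCache<Integer, Map<String, String>> cache = new LruCache<>(1000);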

What are you actually trying to do anyway? I suspect there's a better approach to your actual problem.

Author by PNS

Updated on July 22, 2022

Comments

  • PNS almost 2 years

    I am getting this error in a program that creates several (hundreds of thousands of) HashMap objects with a few (15-20) text entries each. All of these Strings have to be collected (without being broken up into smaller amounts) before being submitted to a database.

    According to Sun, the error happens "if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown."

    Apparently, one could use the command line to pass arguments to the JVM for

    • Increasing the heap size, via "-Xmx1024m" (or more), or
    • Disabling the error check altogether, via "-XX:-UseGCOverheadLimit".

    The first approach works fine; the second ends up in another java.lang.OutOfMemoryError, this time about the heap.

    So, question: is there any programmatic alternative to this, for the particular use case (i.e., several small HashMap objects)? If I use the HashMap clear() method, for instance, the problem goes away, but so does the data stored in the HashMap! :-)

    The issue is also discussed in a related topic on Stack Overflow.