Java `OutOfMemoryError` when creating < 100 threads

Solution 1

You may be limited by the maximum number of user processes. To check your limit, use:

ulimit -u

To change the limit:

In /etc/security/limits.conf, set:

user soft nproc [your_val] 
user hard nproc [your_val]

If that is not enough, you may have to add some other configuration; see this link.

Note: the OP found this bug report in Fedora and CentOS, which explains the limitations of editing /etc/security/limits.conf.

Solution 2

Your problem is probably related to the JVM being unable to allocate stack memory for new threads. Ironically, this can be solved by decreasing heap space (-Xmx) and stack space (-Xss): thread stacks are allocated from native memory outside the Java heap, so a smaller heap and smaller per-thread stacks leave more room for additional threads. Check here, for instance, for a good explanation: http://www.blogsoncloud.com/jsp/techSols/java-lang-OutOfMemoryError-unable-to-create-new-native-thread.jsp
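As a rough illustration (not part of the original answer), the sketch below keeps starting idle threads until the JVM throws OutOfMemoryError and reports how many it managed to create; running it with different -Xss and -Xmx values shows how both settings change the headroom left for native thread stacks. The class name ThreadLimitProbe is made up for the example.

    import java.util.concurrent.CountDownLatch;

    // Hypothetical probe: keep starting idle daemon threads until the JVM
    // refuses to create more, then report the count. Try running it with
    // different -Xss / -Xmx values to see how they change the result.
    public class ThreadLimitProbe {
        public static void main(String[] args) {
            final CountDownLatch hold = new CountDownLatch(1); // keeps every thread parked
            int started = 0;
            try {
                while (true) {
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try {
                                hold.await(); // park until the probe is done counting
                            } catch (InterruptedException ignored) {
                                // nothing to clean up
                            }
                        }
                    });
                    t.setDaemon(true); // let the JVM exit even though the threads are parked
                    t.start();
                    started++;
                }
            } catch (OutOfMemoryError e) {
                System.out.println("Created " + started + " threads before: " + e.getMessage());
            } finally {
                hold.countDown(); // release the parked threads
            }
        }
    }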

Solution 3

It's not memory for your new threads that is missing; it's the threads themselves. The system is probably stopping you: there is a limit to the number of threads that can exist. You can query the system-wide limit this way:

cat /proc/sys/kernel/threads-max

Note that you might be impacted by other processes on the same machine if they create many threads too. You might find this question useful: Maximum number of threads per process in Linux?
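If it helps to see where a given JVM stands against that limit, here is a minimal sketch (not from the original answer) that prints the Linux system-wide threads-max next to the live thread count of the current JVM via the standard ThreadMXBean; the /proc path is Linux-specific and the class name ThreadHeadroom is made up.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // Hypothetical helper: show the Linux system-wide thread limit next to the
    // number of live threads inside this JVM. Other processes on the same box
    // count against the same limit.
    public class ThreadHeadroom {
        public static void main(String[] args) throws Exception {
            BufferedReader reader = new BufferedReader(
                    new FileReader("/proc/sys/kernel/threads-max")); // Linux-specific path
            String max = reader.readLine();
            reader.close();

            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            System.out.println("System-wide threads-max : " + max.trim());
            System.out.println("Live threads in this JVM: " + threads.getThreadCount());
        }
    }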

Solution 4

Just for clarification:

You pass the accepted socket to the thread. Do you send data to that socket? Maybe you store too much data within the thread context. Look for a pattern where you store stream data in a byte[].
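As a hedged sketch of that idea (not from the original answer), the worker below reads the accepted socket in fixed-size chunks and hands each chunk off instead of accumulating the whole stream in a growing byte[]; the class name ChunkedWorker, the 8 KB buffer size, and the handleChunk placeholder are assumptions for illustration.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    // Hypothetical worker: read the socket in fixed-size chunks instead of
    // accumulating the whole stream in a growing byte[] held by the thread.
    public class ChunkedWorker implements Runnable {
        private static final int BUFFER_SIZE = 8 * 1024; // one small buffer per thread
        private final Socket socket;

        public ChunkedWorker(Socket socket) {
            this.socket = socket;
        }

        public void run() {
            byte[] buffer = new byte[BUFFER_SIZE];
            try {
                InputStream in = socket.getInputStream();
                int read;
                while ((read = in.read(buffer)) != -1) {
                    handleChunk(buffer, read); // process and forget, rather than buffering everything
                }
            } catch (IOException e) {
                // log and fall through to close the socket
            } finally {
                try {
                    socket.close();
                } catch (IOException ignored) {
                    // already shutting down
                }
            }
        }

        private void handleChunk(byte[] data, int length) {
            // placeholder: write the chunk to a file or database instead of keeping it in memory
        }
    }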

Comments

  • kaz (almost 2 years ago)

    I've been reading and testing and banging my head on the wall for over a day because of this error.

    I have some Java code in a class called Listener that looks like this:

    ExecutorService executor = Executors.newFixedThreadPool(NTHREADS);
    boolean listening = true;
    int count = 0;
    while (listening) {
        Runnable worker;
        try {
            worker = new ServerThread(serverSocket.accept()); // this is line 254
            executor.execute(worker);
            count++;
            logger.info("{} threads started", count);
        } catch (Exception e1){
            //...
        }
    }
    

    I have been tweaking the JVM settings -Xmx (anywhere from 1 to 15G) and -Xss (anywhere from 104k to 512M). The server has 24 GB of RAM, but must also run the database that supports the program.

    After 2-20 threads are created (a few dozen exist elsewhere in the program as well), I get the error

    Exception in thread "Thread-0" java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:657)
    at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1325)
    at xxx.Listener.run(Listener.java:254)
    

    $ java -version yields:

    java version "1.6.0_24"
    OpenJDK Runtime Environment (IcedTea6 1.11.1) (fedora-65.1.11.1.fc16-x86_64)
    OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
    

    There is always a large amount of free memory on the system when this happens, and other programs continue to execute fine. What is causing Java to think it has no more memory for new threads?

    UPDATE: Perhaps this is bigger than I thought; I managed to get this error (only one time) when I used ^C:

    OpenJDK 64-Bit Server VM warning: Exception java.lang.OutOfMemoryError occurred dispatching signal SIGINT to handler- the VM may need to be forcibly terminated
    

    The same happened when I tried to kill the client (also written in Java and running on the same server; it is a single thread that reads a file and sends it to the server over the socket), so there is definitely a limit beyond the JVM causing one to interfere with the other. But I can't imagine what it is if I still have free memory and am not using swap at all. Server: -Xmx1G -Xss104k; client: -Xmx10M.

    UPDATE2: Abandoning the Perl Forks::Super library and running the clients from bash let me get up to 34 threads before the server crashed with an OOME, so running multiple clients definitely had an impact on the server. At the same time, I should still be able to run more than 34 (68 if one counts the clients) Java threads at once. Which system resources are blocking the creation of more threads (i.e. where should I look to find the hog)? When everything (clients, server, GC...) runs out of memory at the same time, top says this about my CPU and memory usage:

    Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
    Mem:  24681040k total,  1029420k used, 23651620k free,    30648k buffers
    Swap: 26836988k total,        0k used, 26836988k free,   453620k cached
    

    UPDATE3: Does the hs_err log below indicate that my Java is not 64-bit?

    # There is insufficient memory for the Java Runtime Environment to continue.
    # Cannot create GC thread. Out of system resources.
    # Possible reasons:
    #   The system is out of physical RAM or swap space
    #   In 32 bit mode, the process size limit was hit
    # Possible solutions:
    #   Reduce memory load on the system
    #   Increase physical memory or swap space
    #   Check if swap backing store is full
    #   Use 64 bit Java on a 64 bit OS
    #   Decrease Java heap size (-Xmx/-Xms)
    #   Decrease number of Java threads
    #   Decrease Java thread stack sizes (-Xss)
    #   Set larger code cache with -XX:ReservedCodeCacheSize=
    # This output file may be truncated or incomplete.
    #
    # JRE version: 6.0_24-b24
    # Java VM: OpenJDK 64-Bit Server VM (20.0-b12 mixed mode linux-amd64 compressed oops)
    # Derivative: IcedTea6 1.11.1
    # Distribution: Fedora release 16 (Verne), package fedora-65.1.11.1.fc16-x86_64