Jetty IOException: Too many open files


While there may be a bug in Jetty, I think a far more likely explanation is that your open file ulimits are too low. Typically the 1024 default is simply not enough for web servers with moderate use.
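
If you want to check where the limit currently sits and raise it, something along these lines works on a typical Linux box; the jetty account name and the 16384 value are only examples, and the exact file and syntax vary by distribution:

ulimit -Sn    # current soft limit on open files for this shell/user
ulimit -Hn    # hard ceiling the soft limit can be raised to

# To raise it persistently for the account Jetty runs under, add lines
# like these to /etc/security/limits.conf, then log in again or restart the service:
jetty  soft  nofile  16384
jetty  hard  nofile  16384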

A good way to test this is to use ApacheBench (ab) to simulate the inbound traffic you're seeing. Run it from a remote host; the command below sends 1000 requests over 10 concurrent connections.

ab -c 10 -n 1000 [http://]hostname[:port]/path

Now count the sockets on your web server using netstat (substituting your server's address for the example IP):

netstat -a | grep -c 192.168.1.100

Hopefully what you'll find is that the socket count plateaus at some value not dramatically larger than 1024 (mine is at 16384).
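
If you want to watch the count climb and level off in real time while ab is running, something like this works on most Linux systems (again, substitute your server's address):

watch -n 1 'netstat -an | grep -c 192.168.1.100'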

Another good thing to ensure is that connections are being closed properly in your business logic.

netstat -a | grep -c CLOSE_WAIT

If you see this number continue to grow over the lifecycle of your application, you may be missing a few calls to Connection.close().
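
To narrow a growing CLOSE_WAIT count down to the Jetty process itself rather than the whole machine, a quick check, assuming Jetty runs under a dedicated jetty account:

lsof -u jetty | grep -c CLOSE_WAIT

If that number keeps climbing while traffic stays flat, some code path is opening connections it never closes.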

Comments

  • John Smith about 2 years

    I'm running Jetty on a website doing around 100 requests/sec, with nginx in front. I just noticed in the logs, only a few minutes after doing a deploy and starting Jetty, that for a little while it was spamming:

    java.io.IOException: Too many open files
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:163)
        at org.mortbay.jetty.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:75)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:673)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
    

    This went on for a minute or two. I ran "lsof -u jetty" and saw hundreds of lines like:

    java    15892 jetty 1020u  IPv6          298105434        0t0       TCP 192.168.1.100:http-alt->192.168.1.100:60839 (ESTABLISHED)
    java    15892 jetty 1021u  IPv6          298105438        0t0       TCP 192.168.1.100:http-alt->192.168.1.100:60841 (ESTABLISHED)
    java    15892 jetty 1022u  IPv6          298105441        0t0       TCP 192.168.1.100:http-alt->192.168.1.100:60842 (ESTABLISHED)
    java    15892 jetty 1023u  IPv6          298105443        0t0       TCP 192.168.1.100:http-alt->192.168.1.100:60843 (ESTABLISHED)
    

    Where 192.168.1.100 is the server's internal IP.

    As you can see, this brought the number of open files to the default max of 1024. I could just increase the limit, but I'm wondering why this happens in the first place. It's in Jetty's NIO socket acceptor, so is this caused by a storm of connection requests?

    • extraneon about 13 years
      Every socket is a file descriptor, so every connection holds one even while it's just waiting. What does a request typically do, and how long does it take? At 100 requests/second on Jetty, with each request querying a local DB and taking 2 s, you have roughly 200 requests in flight at once, which is already 400 "files" once you count the inbound socket and the DB socket for each.
    • Jan Vladimir Mostert about 13 years
      I'm getting something similar in Tomcat 6 from time to time; initially I thought it was the operating system throwing its toys. I've also just increased the limit as a temporary solution.
  • jpredham about 13 years
    Additionally, if it becomes apparent that full GCs are really taking up too much time, take a look at Java's ConcurrentMarkSweep (CMS) collector.
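
    For reference, on JVMs of that era CMS could be switched on with a single flag; a minimal sketch, assuming the stock Jetty start.jar launcher:

    java -XX:+UseConcMarkSweepGC -jar start.jar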