Handling more than 1024 file descriptors, in C on Linux

Solution 1

Thanks for all your answers, but I think I've found the culprit. After redefining __FD_SETSIZE in my program, everything started to move a lot faster. Of course, ulimit also needs to be raised, but without __FD_SETSIZE my program never takes advantage of it.
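
For reference, here is a minimal sketch of the approach described above. It assumes glibc, where <sys/select.h> defines FD_SETSIZE in terms of __FD_SETSIZE; whether a given C library actually honors an override defined before its headers varies, so the program prints the value the build ended up with:

    /* Sketch: try to override the fd_set size before any system header is included.
       Whether the C library honors this is version-dependent; the printout below
       shows the value this build actually uses. */
    #define __FD_SETSIZE 20000

    #include <stdio.h>
    #include <sys/select.h>

    int main(void)
    {
        fd_set readfds;
        FD_ZERO(&readfds);
        printf("FD_SETSIZE = %d, sizeof(fd_set) = %zu bytes\n",
               FD_SETSIZE, sizeof(fd_set));
        return 0;
    }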

Solution 2

Please see the C10K problem page. It contains an in-depth discussion of how to achieve the '10000 simultaneous connections' goal while maintaining high performance and managing to serve each client.

It also contains information on how to increase the performance of your kernel when handling a large number of connections at once.
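
Closely related to both answers: besides raising the shell's ulimit, a server can raise its own per-process descriptor limit at startup with getrlimit/setrlimit. A minimal sketch (it raises the soft limit to the hard limit, which is as far as an unprivileged process can go):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
            perror("getrlimit");
            return 1;
        }

        rl.rlim_cur = rl.rlim_max;   /* raise the soft limit to the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
            perror("setrlimit");
            return 1;
        }

        printf("RLIMIT_NOFILE is now %llu\n", (unsigned long long)rl.rlim_cur);
        return 0;
    }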


Comments

  • Andrioid
    Andrioid almost 2 years

    I am working on a threaded network server using edge-triggered epoll, and I'm using httperf to benchmark it (a minimal sketch of such an event loop appears after these comments).

    So far it's performing really well, handling requests at almost exactly the rate they are being sent, until it hits the 1024 barrier, where everything slows down to around 30 requests/second.

    Running on Ubuntu 9.04 64-bit.

    I've already tried:

    • Increasing the number of file descriptors with ulimit, successfully. It just doesn't improve performance above 1024 concurrent connections.

      andri@filefridge:~/Dropbox/School/Group 452/Code/server$ ulimit -n
      20000

    I am pretty sure this slow-down is happening in the operating system, since it occurs before the event is even delivered to epoll (and yes, I've also increased the limit in epoll).

    I need to benchmark how many concurrent connections my program can handle until it starts to slow down (without the operating system interfering).

    How do I get my program to run with more than 1024 file descriptors?

    This limit is probably there for a reason, but for benchmarking purposes, I need it gone.


  • Lance Richardson
    Lance Richardson almost 15 years
    Using an fd_set with fds beyond __FD_SETSIZE causes data that happens to be located after the fd_set to be overwritten, which can cause plenty of hard-to-debug grief (a short illustration appears after these comments). I am a little curious why you are using an fd_set with epoll; it would make sense for select() or poll()...
  • Andrioid
    Andrioid almost 15 years
    It's not necessarily epoll that isn't receiving the file descriptors fast enough. It is quite possible that httperf (which uses select) is causing this limitation; I'll update my answer when I know more.
  • Prof. Falken
    Prof. Falken over 13 years
    I tried FD_SETSIZE instead of __FD_SETSIZE first, thanks for your answer!
  • Andrioid
    Andrioid almost 13 years
    I appreciate your answer, although this was a university project and I needed to benchmark my server.
  • SteveRawlinson
    SteveRawlinson over 11 years
    There are many, many applications that need more than 1024 file descriptors.
  • Steeve
    Steeve about 7 years
    @Andrioid I'm facing a slow response time issue. I have raised ulimit, but I can't find where to increase __FD_SETSIZE. Please tell me which file needs to be edited.
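
For readers who want a concrete starting point for the kind of server described in the question, here is a minimal single-threaded, edge-triggered epoll skeleton. It is an illustrative sketch only: the port, buffer size, and absence of a real HTTP parser or thread pool are assumptions, not details taken from the author's server.

    #include <errno.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);

        setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        if (bind(listener, (struct sockaddr *)&addr, sizeof(addr)) == -1 ||
            listen(listener, SOMAXCONN) == -1) {
            perror("bind/listen");
            return 1;
        }
        set_nonblocking(listener);

        int epfd = epoll_create(1024);      /* the size argument is only a hint */
        struct epoll_event ev, events[64];
        ev.events = EPOLLIN | EPOLLET;
        ev.data.fd = listener;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev);

        for (;;) {
            int n = epoll_wait(epfd, events, 64, -1);
            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == listener) {
                    /* Edge-triggered: accept until the backlog is drained. */
                    for (;;) {
                        int client = accept(listener, NULL, NULL);
                        if (client == -1)
                            break;          /* EAGAIN means nothing left to accept */
                        set_nonblocking(client);
                        ev.events = EPOLLIN | EPOLLET;
                        ev.data.fd = client;
                        epoll_ctl(epfd, EPOLL_CTL_ADD, client, &ev);
                    }
                } else {
                    /* Edge-triggered: read until EAGAIN, then wait for the next edge. */
                    char buf[4096];
                    for (;;) {
                        ssize_t r = read(events[i].data.fd, buf, sizeof(buf));
                        if (r > 0)
                            continue;       /* a real server would parse and reply here */
                        if (r == 0 || errno != EAGAIN)
                            close(events[i].data.fd);
                        break;
                    }
                }
            }
        }
    }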
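
And to illustrate the hazard Lance Richardson describes, the following hypothetical snippet (not taken from the question's code) shows why FD_SET with a descriptor number at or above FD_SETSIZE corrupts neighboring memory: FD_SET only sets a bit inside a fixed-size bitmap and performs no bounds check.

    /* Hypothetical demonstration: FD_SET on a descriptor >= FD_SETSIZE writes
       outside the fd_set object. Depending on build flags this silently corrupts
       'canary' or aborts under _FORTIFY_SOURCE; either way it is undefined behavior. */
    #include <stdio.h>
    #include <sys/select.h>

    struct frame {
        fd_set set;      /* holds FD_SETSIZE (normally 1024) bits */
        long   canary;   /* data that happens to sit right after the fd_set */
    };

    int main(void)
    {
        struct frame f;
        f.canary = 0;
        FD_ZERO(&f.set);

        FD_SET(FD_SETSIZE, &f.set);   /* first descriptor number that no longer fits */

        printf("canary = %ld\n", f.canary);   /* typically no longer 0 */
        return 0;
    }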