Nginx + uWSGI + Django performance stuck at 100 rq/s


Clearly, whatever work your application is doing is CPU bound. You may want to profile your Django app to find out where it is spending its time. There are several profiling solutions for Python WSGI applications (although Django is not strictly WSGI compliant, especially with middleware, so YMMV):

  1. linesman (shameless plug, this is my project!)
  2. keas.profile
  3. repoze.profile
  4. dozer (but you'll need to use the 0.2 alpha)

This will allow you to identify bottlenecks in your application--i.e., which functions your application spends most of its time in.
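If you'd rather not pull in an extra package while you investigate, here is a minimal sketch of a hand-rolled cProfile-based WSGI middleware wrapped around Django's handler. The module name `profiled_wsgi` and the stderr destination are just placeholders for this sketch; adapt them to your project.

    # profiled_wsgi.py (hypothetical module name; put it on your pythonpath)
    import cProfile
    import pstats
    import sys

    from django.core.handlers.wsgi import WSGIHandler


    class ProfilingMiddleware(object):
        """Profile each request with cProfile and dump the hottest calls
        to stderr (uWSGI forwards stderr to its log)."""

        def __init__(self, app, limit=20):
            self.app = app
            self.limit = limit  # number of stat lines printed per request

        def __call__(self, environ, start_response):
            profiler = cProfile.Profile()
            try:
                # This times the call into Django; iteration of the response
                # body is not profiled, which is usually fine for finding hot views.
                return profiler.runcall(self.app, environ, start_response)
            finally:
                stats = pstats.Stats(profiler, stream=sys.stderr)
                stats.sort_stats('cumulative').print_stats(self.limit)


    # While profiling, point uWSGI's "module" option here instead of
    # django.core.handlers.wsgi:WSGIHandler(), e.g. module = profiled_wsgi:application
    application = ProfilingMiddleware(WSGIHandler())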

Another thing to check: how long does it take for uwsgi/nginx to pick up a request? Are requests being queued up? How long does the average request take from start to finish? And, more importantly, what's your baseline? Try running the same test with 1 concurrent user to find out, then gradually increase the number of concurrent users until you can see where throughput peaks.
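As a rough sketch of that ramp-up (assuming `ab` is on your PATH; the URL and request count below are placeholders), you can run the same test at increasing concurrency levels and record the throughput ab reports:

    # ramp.py: run ab at increasing concurrency and print the reported
    # throughput, so you can see where it plateaus.
    import re
    import subprocess

    URL = "http://127.0.0.1/"   # replace with the page you're benchmarking
    REQUESTS = 1000

    for concurrency in (1, 5, 10, 25, 50, 100, 200):
        output = subprocess.check_output(
            ["ab", "-n", str(REQUESTS), "-c", str(concurrency), URL],
            stderr=subprocess.STDOUT,
        ).decode()
        # ab prints a line like "Requests per second:    97.31 [#/sec] (mean)"
        match = re.search(r"Requests per second:\s+([\d.]+)", output)
        rps = match.group(1) if match else "n/a"
        print("concurrency %3d -> %s req/s" % (concurrency, rps))

If throughput climbs and then flattens far below what the hardware should deliver while the CPU is pegged, that points back at the application code rather than at nginx or uWSGI.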

With this information, you can start to establish a pattern--and that is the key to load testing!

Good luck!


Comments

  • dancio over 1 year

    I have configured Nginx with uWSGI and Django on CentOS 6 x64 (3.06GHz i3 540, 4GB), which should easily handle 2500 rq/s, but when I run an ab test (ab -n 1000 -c 100) performance tops out at 92-100 rq/s.

    Nginx:
        user nginx;
        worker_processes 2;
        events {
            worker_connections 2048;
            use epoll;
        }
    
    uWSGI:
    
        Emperor
        /usr/sbin/uwsgi --master --no-orphans --pythonpath /var/python --emperor /var/python/*/uwsgi.ini
    
    
        [uwsgi]
        socket = 127.0.0.2:3031
        master = true
        processes = 5

        env = DJANGO_SETTINGS_MODULE=x.settings
        env = HTTPS=on
        module = django.core.handlers.wsgi:WSGIHandler()

        disable-logging = true
        catch-exceptions = false

        post-buffering = 8192
        harakiri = 30
        harakiri-verbose = true

        vacuum = true
        listen = 500
        optimize = 2
    
    sysctl changes:

        # Increase TCP max buffer size settable using setsockopt()
        net.ipv4.tcp_rmem = 4096 87380 8388608
        net.ipv4.tcp_wmem = 4096 87380 8388608

        net.core.rmem_max = 8388608
        net.core.wmem_max = 8388608
        net.core.netdev_max_backlog = 5000
        net.ipv4.tcp_max_syn_backlog = 5000
        net.ipv4.tcp_window_scaling = 1
        net.core.somaxconn = 2048

        # Avoid a smurf attack
        net.ipv4.icmp_echo_ignore_broadcasts = 1

        # Optimization for port use for LBs
        # Increase system file descriptor limit
        fs.file-max = 65535
    

    I ran sysctl -p to apply the changes.

    Idle server info:

        top - 13:34:58 up 102 days, 18:35,  1 user,  load average: 0.00, 0.00, 0.00
        Tasks: 118 total,   1 running, 117 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   3983068k total,  2125088k used,  1857980k free,   262528k buffers
        Swap:  2104504k total,        0k used,  2104504k free,   606996k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2075       1814          0        256        592
        -/+ buffers/cache:       1226       2663
        Swap:         2055          0       2055

    **During the test:**

        top - 13:45:21 up 102 days, 18:46,  1 user,  load average: 3.73, 1.51, 0.58
        Tasks: 122 total,   8 running, 114 sleeping,   0 stopped,   0 zombie
        Cpu(s): 93.5%us,  5.2%sy,  0.0%ni,  0.2%id,  0.0%wa,  0.1%hi,  1.1%si,  0.0%st
        Mem:   3983068k total,  2127564k used,  1855504k free,   262580k buffers
        Swap:  2104504k total,        0k used,  2104504k free,   608760k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2125       1763          0        256        595
        -/+ buffers/cache:       1274       2615
        Swap:         2055          0       2055

        iotop
        30141 be/4 nginx       0.00 B/s    7.78 K/s  0.00 %  0.00 % nginx: wo~er process
    

    Where is the bottleneck? Or what am I doing wrong?

    • Shish over 12 years
      "Where is the bottleneck?" is a good question, so I'll ask it back to you in different words: is the CPU maxed out (see top)? Are the disks struggling (iotop)? Are you hitting swap (free)? Are you filling the network pipe (iftop)?
    • dancio over 12 years
      I have posted some server stats; it seems that it's the CPU.
    • dancio over 12 years
      Now it's serving cached static content. @sam, your comment is really helpful, thanks.
    • rszalski over 6 years
      Doesn't ab -n 1000 -c 100 mean: "create 1000 connections in batches of 100 (concurrently)"? This means that ab will effectively create only 100 req/sec.