Slow downloads of big static files from nginx


Solution 1

An answer for anyone who lands here through Google:

Sendfile is blocking and doesn't let nginx set read-ahead, so it's very inefficient if a file is only read once.

Sendfile relies on filesystem caching etc., and was never made for such large files.

What you want is to disable sendfile for large files, and use directio (preferably with threads so it's non-blocking) instead. Any files under 16MB will still be read using sendfile.

aio threads;
directio 16M;
output_buffers 2 1M;

sendfile on;
sendfile_max_chunk 512k;

By using directio you read directly from the disk, skipping many steps on the way.

P.S. Note that to use aio threads you need to compile nginx with threads support: https://www.nginx.com/blog/thread-pools-boost-performance-9x/
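
For context, here is a minimal sketch of how these directives might sit together; the listen port, server name, location and root path are placeholders, not part of the original answer:

    server {
        listen 80;
        server_name files.example.com;    # placeholder host name

        location /downloads/ {            # assumed location for the large files
            root /srv/static;             # assumed path on disk

            # Files under 16M keep going through sendfile.
            sendfile on;
            sendfile_max_chunk 512k;

            # Bigger files bypass the page cache and are read from disk
            # in a thread pool, so the worker process is not blocked.
            aio threads;
            directio 16M;
            output_buffers 2 1M;
        }
    }

By default aio threads uses the built-in "default" thread pool; a custom pool can be declared with the thread_pool directive if needed.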

Solution 2

You probably need to change the sendfile_max_chunk value, as the documentation states:

Syntax:   sendfile_max_chunk size;
Default:  sendfile_max_chunk 0;
Context:  http, server, location

When set to a non-zero value, limits the amount of data that can be transferred in a single sendfile() call. Without the limit, one fast connection may seize the worker process entirely.

You may also want to adjust buffer sizes in case most of your traffic is "big" static files.
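
As a rough illustration only (the sizes here are assumptions to be tuned against your own traffic, not values taken from the answer), the relevant directives could look like this at the http level:

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        sendfile on;

        # Cap each sendfile() call so a single fast connection
        # cannot monopolise a worker process.
        sendfile_max_chunk 512k;

        # Larger output buffers suit traffic dominated by big static files.
        output_buffers 2 512k;

        include /etc/nginx/sites-enabled/*;
    }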

Solution 3

Have you tried tuning the MTU (Maximum Transmission Unit), the size of the largest network-layer protocol data unit that can be communicated in a single network transaction? In our case, switching it from 1500 to 4000 bytes drastically improved download performance. The supported MTU differs based on the IP transport. Try different values and assess what size makes sense for your use case.

You can use ifconfig to check the existing MTU size and the following command to update it at runtime:

ifconfig eth0 mtu 5000
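
If ifconfig is not available (it is deprecated on many modern distributions), the same check and runtime change can be done with iproute2; the interface name and MTU value below are only examples:

    # show the current MTU of eth0
    ip link show dev eth0

    # change the MTU for the running system (not persistent across reboots)
    ip link set dev eth0 mtu 4000

To persist the change across reboots, set the MTU in your distribution's network configuration (on Debian 7 that would be /etc/network/interfaces).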

Also visit this very useful article on the topic: How to transfer large amounts of data via network?


Comments

  • Marcin Martynowski almost 2 years

    I'm using Debian 7 x64 under VMware ESXi virtualization.

    Max download per client is 1mb/s and nginx doesn't use more than 50mbps in total. My question is: what may cause such slow transfers?

    server

    Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
    
    root@www:~# iostat
    Linux 3.2.0-4-amd64 (www)       09.02.2015      _x86_64_        (4 CPU)
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1,75    0,00    0,76    0,64    0,00   96,84
    
    Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
    sda             173,93      1736,11       219,06     354600      44744
    
    
    root@www:~# free -m
                 total       used       free     shared    buffers     cached
    Mem:         12048       1047      11000          0        106        442
    -/+ buffers/cache:        498      11549
    Swap:          713          0        713
    

    nginx.conf

    user www-data;
    worker_processes 4;
    pid /var/run/nginx.pid;
    
    events {
            worker_connections 3072;
            # multi_accept on;
    }
    
    http {
    
            ##
            # Basic Settings
            ##
    
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            types_hash_max_size 2048;
            server_tokens off;
    
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
    
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
    
            ##
            # Logging Settings
            ##
    
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
    
            ##
            # Gzip Settings
            ##
    
            gzip on;
            gzip_disable "msie6";
    
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    
            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
    
            #include /etc/nginx/naxsi_core.rules;
    
            ## Start: Size Limits & Buffer Overflows ##
    
            client_body_buffer_size 1k;
            client_header_buffer_size 1k;
            client_max_body_size 4M;
            large_client_header_buffers 2 1k;
    
            ## END: Size Limits & Buffer Overflows ##
    
            ## Start: Timeouts ##
    
            client_body_timeout   10;
            client_header_timeout 10;
            send_timeout          10;
    
            ## End: Timeouts ##
    
            ## END: Size Limits & Buffer Overflows
            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
    
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;
    
            ##
            # Virtual Host Configs
            ##
    
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
    }
    

    /etc/sysctl.conf

    # Increase system IP port limits to allow for more connections
    
    net.ipv4.ip_local_port_range = 2000 65000
    
    
    net.ipv4.tcp_window_scaling = 1
    
    
    # number of packets to keep in backlog before the kernel starts dropping them
    net.ipv4.tcp_max_syn_backlog = 3240000
    
    
    # increase socket listen backlog
    net.core.somaxconn = 3240000
    net.ipv4.tcp_max_tw_buckets = 1440000
    
    
    # Increase TCP buffer sizes
    net.core.rmem_default = 8388608
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    

    UPDATE :

    Debug log is completely empty, only when I manually cancel the download I get the following error

    2015/02/09 20:05:32 [info] 4452#0: *2786 client prematurely closed connection while sending response to client, client: 83.11.xxx.xxx, server: xxx.com, request: "GET filename HTTP/1.1", host: "xxx.com"
    

    curl output:

      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 1309M  100 1309M    0     0   374M      0  0:00:03  0:00:03 --:--:--  382M
    
    • BE77Y over 9 years
      Can you make it clearer what you're asking, please?
    • Michael Hampton over 9 years
      Please post the relevant server block.
    • Marcin Martynowski over 9 years
      Of course, sorry. My question is what may cause such slow transfers on my dedicated server.
  • Marcin Martynowski over 9 years
    sendfile_max_chunk 512k;
  • Xavier Lucas over 9 years
    @MarcinMartynowski Okay, where's the server hosted? Can you set the debug log and post the content for such a download? What if you curl directly locally?
  • Marcin Martynowski over 9 years
    OVH, of course, but what debug log do you mean?
  • Xavier Lucas over 9 years
    @MarcinMartynowski This one
  • Marcin Martynowski over 9 years
    Done, I've attached the requested info above.
  • Xavier Lucas over 9 years
    @MarcinMartynowski It's not normal to have an empty debug log; it means you are not using a binary compiled with the debug option. However, this may not be the way to track down the issue, as your curl output clearly shows that it's not related to nginx but to the upstream network (bandwidth/burst quotas, firewalls, etc.).
  • ThorSummoner over 7 years
    Without aio, turning on directio seemed to only make the packaged nginx-extras (debian:jessie) slower at serving a 500 MB all-zeros file. Without gzip and stock sendfile I got ~0.17 s for a 500 MB local transfer to /dev/null; with directio that was ~1.6 s without gzip and ~1.5 s with gzip. Debian's bug tracker seems to report that aio is at fault for instabilities and is therefore not supported in debian:jessie.
  • DannyZB over 7 years
    That is for low-load use cases. The issues with sendfile arise when many files are read in parallel - only then does blocking take effect. To truly test this, try downloading 20 files in parallel and see which performs better. The use cases I've run are on live RH systems, and hands-down threads + directio gave superior performance. This really needs some extensive testing; there must be many factors at play - what HDD did you use? Was it with RAID? What was the cache size?
  • ThorSummoner over 7 years
    Thanks for mentioning the high-load performance case. My scenario is far more amateur - really just playing around. Zero-load use case, hosted on a single 7200 rpm spinning-rust disk, no RAID, 16384 KB cache. An informal benchmark gives me: nginx sendfile over curl ~0.2 s, cp to /dev/null ~0.09 s, for that same 500 MB file. Under zero load nginx is not realistically the bottleneck; probably using HTTP is.
  • DannyZB over 7 years
    In a server you are likely to have a larger disk cache, more towards the 80 MB size. Add to that multiple running disks or even software RAID, which are common setups. The single desktop-grade HDD scenario is not real-world. When you throw multiple HDDs plus disk cache into the mix it changes considerably. RAID 10, or worse, RAID 5 are horrific when it comes to creating fragmentation and random reads - which are SATA's greatest weakness.
  • DannyZB over 7 years
    P.S. An important thing to note is to test sendfile while the file in question is not in the system cache. In your test case it probably was. Try creating a 20 GB file, then resetting the system and testing again (a rough sketch of such a cold-cache test follows after these comments).
  • Pavel K over 2 years
    Use it carefully; it could cause permanent gateway timeouts depending on your network configuration.
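
The cold-cache test DannyZB describes could look roughly like the sketch below. This is only an illustration: the file path is an assumption, and dropping the page cache is used here as a stand-in for rebooting the machine.

    # create a 20 GB test file with real data on disk (path is an example)
    dd if=/dev/zero of=/srv/static/test20g.bin bs=1M count=20480

    # flush dirty pages, then drop the page cache so the next read is cold
    # (run as root; this approximates the "reset the system" step)
    sync
    echo 3 > /proc/sys/vm/drop_caches

After that, download the file through nginx once with sendfile and once with directio enabled and compare the timings.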