Nginx slow static file serving (slower than node?)
As @keenanLawrence mentioned in the comments above, the answer was the sendfile_max_chunk directive. After setting sendfile_max_chunk to 512k, I saw a significant speed improvement in my static file (from disk) delivery from Nginx.
I experimented with values of 8k, 32k, 128k, and finally 512k. The optimal chunk size seems to be specific to each server's configuration, depending on the content being delivered, the threads available, and the server request load.
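For reference, sendfile_max_chunk is valid at the http, server, or location level. A minimal sketch of the change, using the same neighbouring directives as the backend config shown in the question below (everything else stays as it was):

    http {
        sendfile            on;
        sendfile_max_chunk  512k;   # was 32 (no size suffix, i.e. 32 bytes) in the original backend config
        tcp_nopush          on;
        tcp_nodelay         on;

        # ... rest of the http block unchanged
    }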
I also noticed another significant bump in performance when I changed worker_processes auto; to worker_processes 2;, which went from running a worker process on every CPU to using only two. In my case this was more efficient, since the same machine also runs Node.js app servers that are performing their own work on the CPUs.
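worker_processes, on the other hand, belongs in the main (top-level) context of nginx.conf, outside the http block. A minimal sketch of that change (the events block value is illustrative, not taken from the original config):

    worker_processes  2;    # instead of 'auto', which starts one worker per CPU core

    events {
        worker_connections  1024;   # illustrative value; not part of the original config
    }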
Cory Robinson
Updated on July 16, 2022

Comments
- Cory Robinson, almost 2 years ago
I have a Node.js app server sitting behind an Nginx configuration that has been working well. Anticipating some load increase, I figured I'd get ahead of it by setting up another Nginx to serve the static files for the Node.js app server. So, essentially, I have set up an Nginx reverse proxy in front of Nginx & Node.js.
When I reload Nginx and let it start serving requests (Nginx <-> Nginx) on the route /publicfile/, I notice a SIGNIFICANT decrease in speed. Something that took Nginx <-> Node.js around 3 seconds now takes Nginx <-> Nginx ~15 seconds!

I'm new to Nginx, have spent the better part of the day on this, and finally decided to post for some community help. Thanks!
The web facing Nginx nginx.conf:

http {
    # Main settings
    sendfile                        on;
    tcp_nopush                      on;
    tcp_nodelay                     on;
    client_header_timeout           1m;
    client_body_timeout             1m;
    client_header_buffer_size      2k;
    client_body_buffer_size         256k;
    client_max_body_size            256m;
    large_client_header_buffers     4 8k;
    send_timeout                    30;
    keepalive_timeout               60 60;
    reset_timedout_connection       on;
    server_tokens                   off;
    server_name_in_redirect         off;
    server_names_hash_max_size      512;
    server_names_hash_bucket_size   512;

    # Log format
    log_format main  '$remote_addr - $remote_user [$time_local] $request '
                     '"$status" $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';
    log_format bytes '$body_bytes_sent';
    access_log /var/log/nginx/access.log main;

    # Mime settings
    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Compression
    gzip            on;
    gzip_comp_level 9;
    gzip_min_length 512;
    gzip_buffers    8 64k;
    gzip_types      text/plain text/css text/javascript application/x-javascript application/javascript;
    gzip_proxied    any;

    # Proxy settings
    #proxy_redirect       of;
    proxy_set_header      Host            $host;
    proxy_set_header      X-Real-IP       $remote_addr;
    proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass_header     Set-Cookie;
    proxy_connect_timeout 90;
    proxy_send_timeout    90;
    proxy_read_timeout    90;
    proxy_buffers         32 4k;
    real_ip_header        CF-Connecting-IP;

    # SSL PCI Compliance
    # - removed for brevity

    # Error pages
    # - removed for brevity

    # Cache
    proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=512m;
    proxy_cache_key "$host$request_uri $cookie_user";
    proxy_temp_path /var/cache/nginx/temp;
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_use_stale error timeout invalid_header http_502;
    proxy_cache_valid any 3d;
    proxy_http_version 1.1; # recommended with keepalive connections

    # WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    map $http_cookie $no_cache {
        default 0;
        ~SESS 1;
        ~wordpress_logged_in 1;
    }

    upstream backend {
        # my 'backend' server IP address (local network)
        server xx.xxx.xxx.xx:80;
    }

    # Wildcard include
    include /etc/nginx/conf.d/*.conf;
}
The web facing Nginx Server block that forwards the static files to the Nginx behind it (on another box):

server {
    listen 80 default;

    access_log /var/log/nginx/nginx.log main;

    # pass static assets on to the app server nginx on port 80
    location ~* (/min/|/audio/|/fonts/|/images/|/js/|/styles/|/templates/|/test/|/publicfile/) {
        proxy_pass http://backend;
    }
}
And finally the "backend" server:
http {
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    sendfile_max_chunk  32;
    # server_tokens off;
    # server_names_hash_bucket_size 64;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log  /var/log/nginx/error.log;

    server {
        root /home/admin/app/.tmp/public;
        listen 80 default;

        access_log /var/log/nginx/app-static-assets.log;

        location /publicfile {
            alias /home/admin/APP-UPLOADS;
        }
    }
}
- Keenan Lawrence, over 7 years ago
Excellent! I'm glad you've managed to improve your speeds. Yes, experimentation is the way to go because server configs are different. Thank you for mentioning the worker_processes directive; I hadn't thought about that.