"768 worker_connections are not enough" error after a fresh installation of nginx and ROR
Solution 1
The proxy_set_header and proxy_redirect directives should be inside the location @ruby block.
In the upstream block, use localhost (or 127.0.0.1) together with the actual port of your Ruby server. Without a port, the upstream defaults to port 80 and connects back to this same nginx instance, creating a loop that exhausts the worker connections.
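A minimal sketch of the corrected layout, based on the vhost in the question (the upstream name myapp and port 3000 are taken from there; adjust to your setup):

```nginx
upstream myapp {
    # Use localhost plus the actual port the Ruby server listens on.
    # A bare IP with no port defaults to port 80, i.e. nginx itself -> loop.
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
    }

    location @ruby {
        # Header and redirect handling belongs here, next to proxy_pass.
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://myapp;
    }
}
```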
Solution 2
Old question, but I had the same issue and the accepted answer didn't work for me.
I had to increase the number of worker_connections, as stated here.
/etc/nginx/nginx.conf
events {
worker_connections 20000;
}
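For a rough sense of why a limit gets hit, a back-of-the-envelope sketch (assuming nginx's usual model of worker_processes × worker_connections, and that each proxied request holds two connections, one to the client and one to the upstream):

```python
# Rough capacity estimate for an nginx reverse proxy (a sketch, not exact):
# each proxied request ties up two connections (client side + upstream side).
def max_proxied_clients(worker_processes, worker_connections):
    return worker_processes * worker_connections // 2

print(max_proxied_clients(4, 768))    # defaults from the question -> 1536
print(max_proxied_clients(4, 20000))  # after raising worker_connections -> 40000
```

Note that in the original question the limit was hit with zero real traffic, which points to the proxy loop described in Solution 1 rather than genuine load.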
Purres
Updated on September 18, 2022

Comments
-
Purres (over 1 year):
I have a fresh installation of nginx and ruby on rails. But it gives me a '500 Internal Server Error' while testing.
The error.log for my app has the following:
2014/05/01 17:27:15 [alert] 1423#0: *6892 768 worker_connections are not enough while connecting to upstream, client: 24.15.27.113, server: example.com, request: "GET / HTTP/1.0", upstream: "http://24.15.27.113:80/", host: "myapp"
2014/05/01 17:27:16 [alert] 1423#0: *7656 768 worker_connections are not enough while connecting to upstream, client: 24.15.27.113, server: example.com, request: "GET /favicon.ico HTTP/1.0", upstream: "http://24.15.27.113:80/favicon.ico", host: "myapp"
2014/05/01 17:45:50 [alert] 1453#0: *766 768 worker_connections are not enough while connecting to upstream, client: 24.15.27.113, server: example.com, request: "GET / HTTP/1.0", upstream: "http://24.15.27.113:80/", host: "myapp"
2014/05/01 17:45:50 [alert] 1453#0: *1530 768 worker_connections are not enough while connecting to upstream, client: 24.15.27.113, server: example.com, request: "GET /favicon.ico HTTP/1.0", upstream: "http://24.15.27.113:80/favicon.ico", host: "myapp"
The nginx.conf has the following:
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
}
myapp.example.com config file:
upstream myapp {
    #server 24.15.27.113;
    #server 24.15.27.113:3001;
    #server 24.15.27.113:3002;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    #server 127.0.0.1:3002;
}

server {
    listen 80;
    server_name .example.com;
    access_log /var/www/myapp.example.com/log/access.log;
    error_log /var/www/myapp.example.com/log/error.log;
    root /var/www/myapp.example.com;
    index index.html;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
    }

    location @ruby {
        proxy_pass http://myapp;
    }
}
After switching back to 127.0.0.1:3000 and 127.0.0.1:3001 inside the upstream block, the server generated these errors:
2014/05/05 10:34:39 [error] 6158#0: *2 no live upstreams while connecting to upstream, client: 52.74.130.210, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:3001", host: "24.15.27.113"
2014/05/05 10:34:39 [error] 6158#0: *2 no live upstreams while connecting to upstream, client: 52.74.130.210, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "24.15.27.113"
2014/05/05 10:34:39 [error] 6158#0: *2 no live upstreams while connecting to upstream, client: 52.74.130.210, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://myapp/favicon.ico", host: "24.15.27.113"
Update 05/05/2014: I ran the following command to check the connection:
telnet 127.0.0.1 3000
and the result was:
Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection refused
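The same reachability check can be scripted. This is a sketch using Python's standard socket module; the host and port mirror the telnet command above:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Mirrors the telnet check from the question; a refused connection
# means nothing is listening on that port.
print(port_open("127.0.0.1", 3000))
```

"Connection refused" on a loopback address means no process is bound to that port, so the Thin servers nginx expects at 3000/3001 are not actually running.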
I tried to restart the thin server, but got error messages.
thin restart -C /etc/thin/myapp.example.com -o 3000
Error:
/usr/local/rvm/gems/ruby-2.1.1/gems/thin-1.6.2/lib/thin/daemonizing.rb:129:in `send_signal': Can't stop process, no PID found in tmp/pids/thin.3000.pid (Thin::PidFileNotFound)
	from /usr/local/rvm/gems/ruby-2.1.1/gems/thin-1.6.2/lib/thin/daemonizing.rb:111:in `kill'
	from /usr/local/rvm/gems/ruby-2.1.1/gems/thin-1.6.2/lib/thin/controllers/controller.rb:94:in `block in stop'
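A Thin::PidFileNotFound on restart usually means the daemon already died without cleaning up, so the stop half of restart has nothing to kill. A common recovery (a sketch; the PID path comes from the error message above, run from the app directory) is to confirm no process survives and clear any stale PID files:

```shell
# Check whether any thin process is still running (prints a note if none).
pgrep -af thin || echo "no thin process found"

# Remove stale PID files so a fresh start is not blocked
# (path taken from the Thin::PidFileNotFound error above).
rm -f tmp/pids/thin.*.pid
```

After that, using thin start (rather than restart) with the same -C and -o options should bring the server up; if it still fails, Thin's own log will show why.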
-
Tero Kilkanen (about 10 years): What is your full nginx configuration? How much traffic do you have to the server?
-
Purres (about 10 years): I updated the post with the full nginx configuration. There's zero traffic, as it is a new server with a new IP. The error page appeared right after I restarted nginx.
-
Tero Kilkanen (about 10 years): And the vhost configuration?
-
Purres (about 10 years): I updated the original post. I tried multiple IPs and ports, but the results were the same.
-
Purres (about 10 years): I updated the @ruby block and the upstream block, but now I get a "111: Connection refused" error. I added the details to the original post. Any idea?
-
Tero Kilkanen (about 10 years): Are you sure your upstream ROR server is running and reachable on ports 3000 / 3001 on the server? You can use
telnet 127.0.0.1 3000
on the server to check this.
-
Purres (about 10 years): I used telnet and the connections were refused. I tried restarting the thin server but got new error messages. I updated the original post.
-
Tero Kilkanen (about 10 years): You should look closely at your ROR configuration. I am not familiar with that, though.