uWSGI and NGINX 502: upstream prematurely closed connection
From https://monicalent.com/blog/2013/12/06/set-up-nginx-and-uwsgi/ I found out about the "limit-as" option, which restricts the process's virtual memory size and may be responsible for a 502 error with the message "upstream prematurely closed connection".
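A minimal sketch of what that looks like, assuming the option is set in a uwsgi.ini like the one in the question below; the 512 MB value is only illustrative:

[uwsgi]
# limit-as caps each worker's virtual address space, in megabytes;
# allocations beyond the cap fail and can kill the worker mid-request,
# which nginx then reports as a premature close of the upstream connection
limit-as = 512

Raising the value, or commenting the option out entirely, is a quick way to rule this cause in or out.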
Updated on September 18, 2022

Comments
-
Djent over 1 year
I have a Kubernetes cluster which is running a Django application within a docker container served by uWSGI. The ingress controller is ingress-nginx (this one: https://github.com/kubernetes/ingress-nginx).
Recently I upgraded the whole cluster from 1.9 to 1.11, and due to some issues I had to run
kubeadm reset
and
kubeadm init
again. Since then (I guess), I'm sometimes getting weird 502 errors reported by users:
upstream prematurely closed connection while reading response header from upstream
The biggest problem for me is that those requests are not visible in the uWSGI logs inside the container, so I have no idea what is happening.
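As a sketch of how one might surface those requests: uWSGI has standard logging options that could be added to the ini below; the option names are real uWSGI settings, the values are only illustrative assumptions:

[uwsgi]
# log requests slower than this threshold (in milliseconds, per the uWSGI
# docs), so requests that hang toward the ~60s mark leave a trace even
# if they never finish
log-slow = 1000
# emit extra detail when the master kills a worker (harakiri), which
# would otherwise look like a silent premature close to nginx
harakiri-verbose = true
# report per-request address-space usage, useful for spotting workers
# that die against a memory limit such as limit-as
memory-report = true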
Here is my uwsgi.ini file:
[uwsgi]
http = 0.0.0.0:8000

# Django-related settings
# the base directory (full path)
chdir = /app
# Django's wsgi file
module = in_web_server.wsgi:application
pythonpath = /app
static-map = /static=/app/static

# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
# clear environment on exit
vacuum = true

# spooler setup
spooler = /spooler
spooler-processes = 2
spooler-frequency = 10
Dockerfile CMD:
CMD ["/usr/local/bin/uwsgi", "--ini", "/app/in_web_server/docker/in/in_web_server_uwsgi.ini"]
Kubernetes Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: in-debug
  namespace: in-debug
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($request_uri ~ "^[^?]*?//") {
        rewrite "^" $scheme://$host$uri permanent;
      }
spec:
  rules:
    - host: test-in
      http:
        paths:
          - path: "/"
            backend:
              serviceName: in-debug
              servicePort: 8000
Those errors occur only for larger (but not very large) PUT requests. By larger I mean ~300 KB, so it is not a big deal.
Also, the 502 error is returned after around 1 minute, so there is possibly some timeout issue. However, I'm not able to locate it, since there is no trace in the uWSGI log. Any hints as to what I'm doing wrong?
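For reference, a hedged sketch of the uWSGI-side knobs that govern the roughly one-minute cutoff described above; these are standard uWSGI options, but the values are illustrative assumptions, not known fixes for this case:

[uwsgi]
# socket timeout for uWSGI's built-in HTTP router
# (the http = 0.0.0.0:8000 listener from the ini above)
http-timeout = 300
# kill and log a worker stuck on a single request for longer than this
# many seconds; such a kill also appears to nginx as a premature close
harakiri = 300
# read and buffer request bodies before handing them to the worker;
# bodies above this size (bytes) spill to disk, which is sometimes
# suggested for 502s on larger PUT/POST uploads
post-buffering = 65536

The ingress annotations above already raise nginx's proxy timeouts to 3600 s, which is one reason to suspect the ~60 s cutoff comes from the uWSGI side rather than from nginx defaults.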
-
Diego Gallegos over 5 years
Were you able to find the solution, @Djent? Is there a load balancer in front of Kubernetes?
-
dina over 5 years
@DiegoGallegos, @Djent, have you found a solution?
-
Djent over 5 years
No, I haven't found a solution, but it stopped at some point. I have no idea what helped.
-
Djent over 5 years
Unfortunately, it doesn't help. None of it helps.