Gunicorn worker terminated with signal 9
Solution 1
I encountered the same warning message.
[WARNING] Worker with pid 71 was terminated due to signal 9
I came across this FAQ, which says that "A common cause of SIGKILL is when OOM killer terminates a process due to low memory condition."
I used dmesg and realized that the process was indeed killed because the system was running out of memory:
Out of memory: Killed process 776660 (gunicorn)
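If you want to confirm this yourself, here is a minimal sketch of the check (the exact log wording varies by kernel, so adjust the pattern as needed):
# Search the kernel log for OOM killer activity
dmesg -T | grep -iE "out of memory|killed process"
# On systemd-based systems the kernel log is also available via journalctl
journalctl -k | grep -iE "out of memory|killed process"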
Solution 2
In our case, the application was taking around 5-7 minutes to load ML models and dictionaries into memory, so adding a timeout of 600 seconds solved the problem for us.
gunicorn main:app --workers 1 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8443 --timeout 600
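A related option, not part of the original answer: if the slow part is loading models at startup, Gunicorn's --preload flag loads the application in the master process before forking workers, so the models are read once rather than once per worker (assuming the app and models can safely be loaded before the fork):
# Same command as above, with the application preloaded in the master process
gunicorn main:app --workers 1 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8443 --timeout 600 --preload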
Solution 3
I encountered the same warning message when I limited the Docker container's memory, e.g. with -m 3000m.
See docker-memory and gunicorn - Why are Workers Silently Killed?
The simple way to avoid this is to give Docker a higher memory limit, or not to set a limit at all.
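For illustration only (the container name, image, and limit below are made-up examples, not from the original answer), raising the limit and checking real usage might look like this:
# Start the container with a larger memory limit (or omit -m entirely)
docker run -d -m 6g --name myapp myapp-image
# Watch the container's actual memory usage to pick a sensible limit
docker stats myapp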
Solution 4
I was using AWS Elastic Beanstalk to deploy my Flask application and I had a similar error.
In the log I saw:
- web: MemoryError
- [CRITICAL] WORKER TIMEOUT
- [WARNING] Worker with pid XXXXX was terminated due to signal 9
I was using a t2.micro instance, and when I changed it to t2.medium my app worked fine. In addition to this, I changed the timeout in my nginx config file.
Jodiug
Updated on April 21, 2022
Comments
- Jodiug about 2 years:
I am running a Flask application and hosting it on Kubernetes from a Docker container. Gunicorn is managing workers that reply to API requests.
The following warning message is a regular occurrence, and it seems like requests are being canceled for some reason. On Kubernetes, the pod is showing no odd behavior or restarts and stays within 80% of its memory and CPU limits.
[2021-03-31 16:30:31 +0200] [1] [WARNING] Worker with pid 26 was terminated due to signal 9
How can we find out why these workers are killed?
- lionbigcat almost 3 years: Did you manage to find out why? Having the same issue, and tried specifying the --shm-size, but to no avail.
- Jodiug almost 3 years: Our problems seem to have gone away since we started using --worker-class gevent. I suspect Simon is right and this was either an out-of-memory error, or a background process running for too long and the main process (1) decided to kill it.
- Jodiug almost 3 years: Meta: I'm not sure why this question is being downvoted. Please drop a comment if you feel it needs further clarification.
- Blop almost 3 years: I have the same problem, and gevent did not solve it. Does anyone know why this started all of a sudden? Was there a change in gunicorn or in kube?
- Blop almost 3 years: Also related to an unanswered question: stackoverflow.com/questions/57745100/…
- lionbigcat almost 3 years: @Blop - my issue was OOM-related. I had to use a larger instance with more RAM, and gave the Docker container access to that RAM.
- Blop almost 3 years: @lionbigcat Yes, eventually that's exactly what I did as well. Just adding another 1GB fixed the problem. No need to change to gevent.
- Vincent Agnes over 2 years: I faced the same issue and solved it by switching from Python 3.8 to Python 3.7.
- Jodiug almost 3 years: Our problems seem to have gone away since we started using --worker-class gevent. I can't verify this answer, but it seems that dmesg is a good way to get more information and diagnose the problem. Thanks for your answer!
- Areza over 2 years: How did you fix it?
- EgurnovD over 2 years: Got rid of the warm-up. Looking for ways to do it right after app start now.
- Martin Bucher about 2 years: That was it in my case as well. Many thanks for the pointer.
- Snehangsu almost 2 years: Mind sharing the timeout variable name?
- Vkey almost 2 years: Below are the contents of my timeout.conf file under the nginx > conf.d folder:
  keepalive_timeout 600s;
  proxy_connect_timeout 600s;
  proxy_send_timeout 600s;
  proxy_read_timeout 600s;
  fastcgi_send_timeout 600s;
  fastcgi_read_timeout 600s;
  client_max_body_size 20M;