CRITICAL WORKER TIMEOUT error with Gunicorn and Django
I had a similar problem. Updating gunicorn to version 19.9.0 solved it for me:
gunicorn 19.9.0
And for others who might run into the same problem: make sure to set the timeout. I personally use
gunicorn app.wsgi:application -w 2 -b :8000 --timeout 120
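The same settings can also live in a Gunicorn configuration file instead of on the command line. A minimal sketch, assuming the same example module app.wsgi as in the command above:

```python
# gunicorn.conf.py -- a minimal sketch mirroring the command-line flags
# in this answer; "app.wsgi" is the example module used here, adjust it
# to your own project.
bind = ":8000"
workers = 2
timeout = 120  # seconds a worker may stay silent before being killed and restarted
```

You would then start the server with something like: gunicorn -c gunicorn.conf.py app.wsgi:application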
Nazir Ahmed
Updated on September 15, 2022
Comments
-
Nazir Ahmed over 1 year I am trying to train a word2vec model, save it, and then create some clusters based on that model. It runs fine locally, but when I build the Docker image and run it with gunicorn, it always gives me a timeout error. I tried the solutions described here, but they didn't work for me.
I am using
python 3.5, gunicorn 19.7.1, gevent 1.2.2, eventlet 0.21.0
Here is my gunicorn.conf file:
#!/bin/bash
# Start Gunicorn processes
echo Starting Gunicorn.
exec gunicorn ReviewsAI.wsgi:application \
    --bind 0.0.0.0:8000 \
    --worker-class eventlet --workers 1 --timeout 300000 --graceful-timeout 300000 --keep-alive 300000
I also tried the
gevent, sync
worker classes as well, but that didn't work either. Can anybody tell me why this timeout error keeps occurring? Thanks. Here is my log:
Starting Gunicorn.
[2017-11-10 06:03:45 +0000] [1] [INFO] Starting gunicorn 19.7.1
[2017-11-10 06:03:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
[2017-11-10 06:03:45 +0000] [1] [INFO] Using worker: eventlet
[2017-11-10 06:03:45 +0000] [8] [INFO] Booting worker with pid: 8
2017-11-10 06:05:00,307 : INFO : collecting all words and their counts
2017-11-10 06:05:00,309 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-11-10 06:05:00,737 : INFO : collected 11927 word types from a corpus of 1254665 raw words and 126 sentences
2017-11-10 06:05:00,738 : INFO : Loading a fresh vocabulary
2017-11-10 06:05:00,916 : INFO : min_count=1 retains 11927 unique words (100% of original 11927, drops 0)
2017-11-10 06:05:00,917 : INFO : min_count=1 leaves 1254665 word corpus (100% of original 1254665, drops 0)
2017-11-10 06:05:00,955 : INFO : deleting the raw counts dictionary of 11927 items
2017-11-10 06:05:00,957 : INFO : sample=0.001 downsamples 59 most-common words
2017-11-10 06:05:00,957 : INFO : downsampling leaves estimated 849684 word corpus (67.7% of prior 1254665)
2017-11-10 06:05:00,957 : INFO : estimated required memory for 11927 words and 200 dimensions: 25046700 bytes
2017-11-10 06:05:01,002 : INFO : resetting layer weights
2017-11-10 06:05:01,242 : INFO : training model with 4 workers on 11927 vocabulary and 200 features, using sg=0 hs=0 sample=0.001 negative=5 window=4
2017-11-10 06:05:02,294 : INFO : PROGRESS: at 6.03% examples, 247941 words/s, in_qsize 0, out_qsize 7
2017-11-10 06:05:03,423 : INFO : PROGRESS: at 13.65% examples, 269423 words/s, in_qsize 0, out_qsize 7
2017-11-10 06:05:04,670 : INFO : PROGRESS: at 23.02% examples, 286330 words/s, in_qsize 8, out_qsize 11
2017-11-10 06:05:05,745 : INFO : PROGRESS: at 32.70% examples, 310218 words/s, in_qsize 0, out_qsize 7
2017-11-10 06:05:07,054 : INFO : PROGRESS: at 42.06% examples, 308128 words/s, in_qsize 8, out_qsize 11
2017-11-10 06:05:08,123 : INFO : PROGRESS: at 51.75% examples, 320675 words/s, in_qsize 0, out_qsize 7
2017-11-10 06:05:09,355 : INFO : PROGRESS: at 61.11% examples, 320556 words/s, in_qsize 8, out_qsize 11
2017-11-10 06:05:10,436 : INFO : PROGRESS: at 70.79% examples, 328012 words/s, in_qsize 0, out_qsize 7
2017-11-10 06:05:11,663 : INFO : PROGRESS: at 80.16% examples, 327237 words/s, in_qsize 8, out_qsize 11
2017-11-10 06:05:12,752 : INFO : PROGRESS: at 89.84% examples, 332298 words/s, in_qsize 0, out_qsize 7
2017-11-10 06:05:13,784 : INFO : PROGRESS: at 99.21% examples, 336724 words/s, in_qsize 0, out_qsize 9
2017-11-10 06:05:13,784 : INFO : worker thread finished; awaiting finish of 3 more threads
2017-11-10 06:05:13,784 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-11-10 06:05:13,784 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-11-10 06:05:13,784 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-11-10 06:05:13,784 : INFO : training on 6273325 raw words (4248672 effective words) took 12.5s, 339100 effective words/s
2017-11-10 06:05:13,785 : INFO : saving Word2Vec object under trained_models/mobile, separately None
2017-11-10 06:05:13,785 : INFO : not storing attribute syn0norm
2017-11-10 06:05:13,785 : INFO : not storing attribute cum_table
2017-11-10 06:05:14,026 : INFO : saved trained_models/mobile
[2017-11-10 06:05:43 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
2017-11-10 06:05:43,712 : INFO : precomputing L2-norms of word weight vectors
[2017-11-10 06:05:44 +0000] [14] [INFO] Booting worker with pid: 14
-
C14L over 5 years In your "gunicorn.conf" file, did you add the backslashes for line continuation of the
exec gunicorn ...
parameters? They are only present in the first two lines of the snippet you posted. -
praneeth over 5 years It depends on how long it takes to train the model. You could try increasing the timeout.
-
hky404 about 5 years That
--timeout 120
flag is what fixed the issue for me. Thanks a lot!! -
mLstudent33 almost 3 years What does timeout do? I'm having somewhat similar issues on Heroku with a Python Plotly Dash app.
-
palamunder almost 3 years docs.gunicorn.org/en/stable/settings.html#timeout Default: 30. Workers silent for more than this many seconds are killed and restarted. Value is a positive number or 0. Setting it to 0 has the effect of infinite timeouts by disabling timeouts for all workers entirely. Generally, the default of thirty seconds should suffice. Only set this noticeably higher if you're sure of the repercussions for sync workers. For the non-sync workers it just means that the worker process is still communicating and is not tied to the length of time required to handle a single request.
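To illustrate the distinction those docs draw between sync and async workers, here is a hedged config sketch (not the poster's actual file, just an example of the two settings involved):

```python
# gunicorn.conf.py -- sketch of the timeout semantics quoted above.
# With sync workers, any single request running longer than `timeout`
# seconds gets the worker killed and restarted; with async workers
# (gevent/eventlet), the timeout only checks that the worker process
# is still alive, not how long one request takes.
worker_class = "sync"  # or "gevent" / "eventlet" for async workers
timeout = 120          # raise above the default 30 only if you know why
# timeout = 0          # would disable the timeout entirely (infinite)
```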
-
miller almost 3 years Man, you saved me! Thanks!