Running Celery as a daemon using supervisor is not working

Solution 1

I had the same problem, so I added

environment=C_FORCE_ROOT="yes"

to my program config, but it didn't work, so I used

environment=C_FORCE_ROOT="true"

instead, and it's working.
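
One detail worth checking if the variable still seems ignored: as far as I can tell, supervisor keeps only one environment= value per [program:x] section, so if the section also sets HOME and USER (as in the question), fold everything into a single line rather than repeating the key. A rough sketch reusing the paths from the question:

[program:celery]
command=/root/Envs/proj/bin/celery -A app.tasks worker --loglevel=info
directory=/root/apps/proj/structure
user=root
; keep every variable in one environment= entry; a second environment=
; line replaces this one rather than adding to it
environment=C_FORCE_ROOT="true",HOME="/root",USER="root"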

Solution 2

You'll need to run celery with a non-superuser account. Please remove the following lines from your config:

user=root
environment=C_FORCE_ROOT="yes"
environment=HOME="/root",USER="root"

Then add these lines to your config (I assume here that django is your non-superuser account and developers is its user group):

user=django
group=developers
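
Put together, the program section might look roughly like the sketch below (command, directory, and log paths are copied from the question; django and developers are only example names, and whichever account you pick must be able to read the project directory and write to the log files):

[program:celery]
command=/root/Envs/proj/bin/celery -A app.tasks worker --loglevel=info
directory=/root/apps/proj/structure
user=django
group=developers
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
killasgroup=true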

Note that subprocesses will inherit the environment variables of the shell used to start supervisord, except for the ones overridden here and within the program's environment option. See the supervisord documentation.

Also note that when you change environment variables via supervisor config files, the changes won't be applied by running supervisorctl reread and supervisorctl reload. You should start supervisor from scratch with the following command:

supervisord -c /path/to/config/file.conf
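
If a supervisord instance is already running, shut it down first so the new environment is actually picked up, then start it again and check the worker (the config path is a placeholder, as above):

supervisorctl shutdown
supervisord -c /path/to/config/file.conf
supervisorctl status celery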

Solution 3

Following another thread on Stack Overflow, I managed to add the following settings and it worked for me.

app.conf.update(
    CELERY_ACCEPT_CONTENT = ['json'],
    CELERY_TASK_SERIALIZER = 'json',
    CELERY_RESULT_SERIALIZER = 'json',
)
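
These settings force JSON serialization, so the worker no longer accepts pickled messages, which is the combination the warning in the question complains about. A minimal sketch of where this could live, assuming the Celery app is created in the app.tasks module that the supervisor command points at; the broker URL is only an illustration:

from celery import Celery

# hypothetical app module; the broker URL below is an assumption
app = Celery('tasks', broker='amqp://guest@localhost//')

app.conf.update(
    CELERY_ACCEPT_CONTENT = ['json'],    # accept only JSON-encoded messages
    CELERY_TASK_SERIALIZER = 'json',     # send task arguments as JSON
    CELERY_RESULT_SERIALIZER = 'json',   # store results as JSON
)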

Comments

  • Shiva Krishna Bavandla (almost 2 years ago)

    I have a Django app that uses Celery, and I can run the worker successfully like below:

    celery -A tasks worker --loglevel=info
    

    But since it needs to run as a daemon, I have written the celery.conf file below inside the /etc/supervisor/conf.d/ folder:

    ; ==================================
    ;  celery worker supervisor example
    ; ==================================
    
    [program:celery]
    ; Set full path to celery program if using virtualenv
    command=/root/Envs/proj/bin/celery -A app.tasks worker --loglevel=info
    
    user=root
    environment=C_FORCE_ROOT="yes"
    environment=HOME="/root",USER="root"
    directory=/root/apps/proj/structure
    numprocs=1
    stdout_logfile=/var/log/celery/worker.log
    stderr_logfile=/var/log/celery/worker.log
    autostart=true
    autorestart=true
    startsecs=10
    
    ; Need to wait for currently executing tasks to finish at shutdown.
    ; Increase this if you have very long running tasks.
    stopwaitsecs = 600
    
    ; When resorting to send SIGKILL to the program to terminate it
    ; send SIGKILL to its whole process group instead,
    ; taking care of its children as well.
    killasgroup=true
    
    ; if rabbitmq is supervised, set its priority higher
    ; so it starts first
    priority=998
    

    But when I tried to update supervisor with supervisorctl reread and supervisorctl update, I got the following from supervisorctl status:

    celery                           FATAL      Exited too quickly (process log may have details)
    

    So I went to the worker.log file and saw the error message below:

    Running a worker with superuser privileges when the
    worker accepts messages serialized with pickle is a very bad idea!
    
    If you really want to continue then you have to set the C_FORCE_ROOT
    environment variable (but please think about this before you do).
    
    User information: uid=0 euid=0 gid=0 egid=0
    

    So why is it complaining about C_FORCE_ROOT even though we set it as an environment variable inside the supervisor conf file? What am I doing wrong in the above conf file?