Docker&Celery - ERROR: Pidfile (celerybeat.pid) already exists

Solution 1

Another solution (taken from https://stackoverflow.com/a/17674248/39296) is to pass --pidfile= (with no path) so that no pidfile is created at all. The end result is the same as pointing the pidfile at a non-mounted path (Solution 2 below): a stale celerybeat.pid can no longer block startup.
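
In a docker-compose service definition this could look like the following minimal sketch (the service and project names here are illustrative):

  celery-beat:
    build: .
    # an empty --pidfile= tells celery beat not to write a pidfile at all,
    # so a stale celerybeat.pid can never block the next start
    command: celery -A project beat -l info --pidfile=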

Solution 2

I believe there is a pidfile in your project directory ./ and, when you run the container, it gets mounted in on top of the image's files (which is why RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \; in the Dockerfile had no effect).

You can pass celery --pidfile=/opt/celeryd.pid to use a path that is not part of the mount, so the file is not mirrored onto the host.
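
For instance, with the bind mount used in the question (.:/opt/services/djangoapp/src), any path outside that directory stays inside the container. A minimal sketch reusing the question's layout:

  celery-beat:
    build: .
    volumes:
      - .:/opt/services/djangoapp/src
    # /opt/celeryd.pid lies outside the bind-mounted source tree, so the
    # pidfile never shows up on the host and is discarded with the container
    command: >
      bash -c "cd app && celery -A example beat --pidfile=/opt/celeryd.pid"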

Solution 3

Although not professional in the slightest, I found adding:

celerybeat.pid

to my .dockerignore file was what fixed the issue.

Solution 4

I had this error with Airflow when I ran it with docker-compose.

If you don't care about the current state of your Airflow instance, you can simply delete the Airflow containers:

docker rm containerId

Then start Airflow again:

docker-compose up

Solution 5

Another approach is to create a Django management command, celery_kill.py:

import shlex
import subprocess

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    """Force-kill all running celery processes (workers and beat)."""

    def handle(self, *args, **options):
        # send SIGKILL to every process whose name matches "celery"
        kill_worker_cmd = 'pkill -9 celery'
        subprocess.call(shlex.split(kill_worker_cmd))

docker-compose.yml:

  celery:
    build: ./src
    restart: always
    command: celery -A project worker -l info
    volumes:
      - ./src:/var/lib/celery/data/
    depends_on:
      - db
      - redis
      - app

  celery-beat:
    build: ./src
    restart: always
    command: celery -A project beat -l info --pidfile=/tmp/celeryd.pid
    volumes:
      - ./src:/var/lib/beat/data/
    depends_on:
      - db
      - redis
      - app

and Makefile:

run:
    docker-compose up -d --force-recreate
    docker-compose exec app python manage.py celery_kill
    docker-compose restart
    docker-compose exec app python manage.py migrate

Comments

  • Artur Drożdżyk about 2 years

    The application consists of: Django, Redis, Celery, Docker, and Postgres.

    Before moving the project into Docker everything was working smoothly, but once it was moved into containers something started to go wrong. At first it starts perfectly fine, but after a while I receive the following error:

    celery-beat_1  | ERROR: Pidfile (celerybeat.pid) already exists.
    

    I've been struggling with it for a while, but at this point I give up. I have no idea what is wrong with it.

    Dockerfile:

    FROM python:3.7
    
    ENV PYTHONUNBUFFERED 1
    RUN mkdir -p /opt/services/djangoapp/src
    
    
    COPY /scripts/startup/entrypoint.sh entrypoint.sh
    RUN chmod +x /entrypoint.sh
    ENTRYPOINT ["/entrypoint.sh"]
    
    COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
    WORKDIR /opt/services/djangoapp/src
    RUN pip install pipenv && pipenv install --system
    
    COPY . /opt/services/djangoapp/src
    
    RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \;
    
    RUN sed -i "s|django.core.urlresolvers|django.urls |g" /usr/local/lib/python3.7/site-packages/vanilla/views.py
    RUN cp /usr/local/lib/python3.7/site-packages/celery/backends/async.py /usr/local/lib/python3.7/site-packages/celery/backends/asynchronous.py
    RUN rm /usr/local/lib/python3.7/site-packages/celery/backends/async.py
    RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/redis.py
    RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/rpc.py
    
    RUN cd app && python manage.py collectstatic --no-input
    
    
    
    EXPOSE 8000
    CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "app", "example.wsgi:application", "--reload"]
    

    docker-compose.yml:

    version: '3'
    
    services:
    
      djangoapp:
        build: .
        volumes:
          - .:/opt/services/djangoapp/src
          - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
          - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
          - static_local_volume:/opt/services/djangoapp/src/app/static
          - media_local_volume:/opt/services/djangoapp/src/app/media
          - .:/code
        restart: always
        networks:
          - nginx_network
          - database1_network # comment when testing
          # - test_database1_network # uncomment when testing
          - redis_network
        depends_on:
          - database1 # comment when testing
          # - test_database1 # uncomment when testing
          - migration
          - redis
    
      # base redis server
      redis:
        image: "redis:alpine"
        restart: always
        ports: 
          - "6379:6379"
        networks:
          - redis_network
        volumes:
          - redis_data:/data
    
      # celery worker
      celery:
        build: .
        command: >
          bash -c "cd app && celery -A example worker --without-gossip --without-mingle --without-heartbeat -Ofair"
        volumes:
          - .:/opt/services/djangoapp/src
          - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
          - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume    
          - static_local_volume:/opt/services/djangoapp/src/app/static
          - media_local_volume:/opt/services/djangoapp/src/app/media
        networks:
          - redis_network
          - database1_network # comment when testing
          # - test_database1_network # uncomment when testing
        restart: always
        depends_on:
          - database1 # comment when testing
          # - test_database1 # uncomment when testing
          - redis
        links:
          - redis
    
      celery-beat:
        build: .
        command: >
          bash -c "cd app && celery -A example beat"
        volumes:
          - .:/opt/services/djangoapp/src
          - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
          - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
          - static_local_volume:/opt/services/djangoapp/src/app/static
          - media_local_volume:/opt/services/djangoapp/src/app/media
        networks:
          - redis_network
          - database1_network # comment when testing
          # - test_database1_network # uncomment when testing
        restart: always
        depends_on:
          - database1 # comment when testing
          # - test_database1 # uncomment when testing
          - redis
        links:
          - redis
    
      # migrations needed for proper db functioning
      migration:
        build: .
        command: >
          bash -c "cd app && python3 manage.py makemigrations && python3 manage.py migrate"
        depends_on:
          - database1 # comment when testing
          # - test_database1 # uncomment when testing
        networks:
         - database1_network # comment when testing
         # - test_database1_network # uncomment when testing
    
      # reverse proxy container (nginx)
      nginx:
        image: nginx:1.13
        ports:
          - 80:80
        volumes:
          - ./config/nginx/conf.d:/etc/nginx/conf.d
          - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
          - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
          - static_local_volume:/opt/services/djangoapp/src/app/static
          - media_local_volume:/opt/services/djangoapp/src/app/media 
        restart: always
        depends_on:
          - djangoapp
        networks:
          - nginx_network
    
      database1: # comment when testing
        image: postgres:10 # comment when testing
        env_file: # comment when testing
          - config/db/database1_env # comment when testing
        networks: # comment when testing
          - database1_network # comment when testing
        volumes: # comment when testing
          - database1_volume:/var/lib/postgresql/data # comment when testing
    
      # test_database1: # uncomment when testing
        # image: postgres:10 # uncomment when testing
        # env_file: # uncomment when testing
          # - config/db/test_database1_env # uncomment when testing
        # networks: # uncomment when testing
          # - test_database1_network # uncomment when testing
        # volumes: # uncomment when testing
          # - test_database1_volume:/var/lib/postgresql/data # uncomment when testing
    
    
    networks:
      nginx_network:
        driver: bridge
      database1_network: # comment when testing
        driver: bridge # comment when testing
      # test_database1_network: # uncomment when testing
        # driver: bridge # uncomment when testing
      redis_network:
        driver: bridge
    volumes:
      database1_volume: # comment when testing
      # test_database1_volume: # uncomment when testing
      static_volume:  # <-- declare the static volume
      media_volume:  # <-- declare the media volume
      static_local_volume:
      media_local_volume:
      redis_data:
    

    Please, ignore "test_database1_volume" as it exists only for test purposes.

  • Matthew Hegarty over 4 years
    This didn't work for me because the file persisted between container restarts. Instead I mounted a tmpfs directory (which is removed on container stop), and used --pidfile to point to a file in that location.
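    A sketch of that approach in docker-compose (the mount point /run/celery is an illustrative choice, not taken from the comment):

      celery-beat:
        build: .
        command: >
          bash -c "cd app && celery -A example beat --pidfile=/run/celery/celerybeat.pid"
        tmpfs:
          # a tmpfs mount lives only in container memory and is discarded when
          # the container stops, so a stale pidfile cannot survive a restart
          - /run/celery
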
  • Alexandros about 3 years
    I fixed the problem by adding "--pidfile=/tmp/celeryd.pid" to the end of "celery -A proj beat -l info" in my docker-compose, just as in your example. Thank you.
  • Paweł Polewicz about 3 years
    What if your machine crashes and docker-compose down is no longer an option?
  • kochul about 3 years
    @PawełPolewicz In that case, fix the problem at docker-compose up time (i.e. Solution 1).
  • Ershan over 2 years
    @Alexandros Putting it under the /tmp directory is not a good idea: /tmp is cleaned regularly by default, which can force your Celery process to restart or exit abnormally unless you change that cleanup policy.