Celery - errno 111 connection refused


Not sure if you have fixed this already, but from the look of it you have several problems.

First and foremost, check whether your RabbitMQ server has trouble staying up for some reason.

Also, be sure that your RabbitMQ server has been configured with the correct credentials and allows access from your worker's location (e.g. enable connections from users other than loopback ones). Here is what you need to do: https://www.rabbitmq.com/access-control.html

Then check that you have configured your worker with the correct authentication credentials. A full URL should look similar to the following, where the user must be granted access to the specific virtual host (it is quite easy to configure via the RabbitMQ management interface, https://www.rabbitmq.com/management.html):

BROKER_URL = 'amqp://user:password@hostname:port/virtualhost'
CELERY_RESULT_BACKEND = 'amqp://user:password@hostname:port/virtualhost'
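One easy thing to get wrong here: if the user, password or virtual host contain characters like `@`, `:` or `/`, they must be percent-encoded in the URL. A minimal stdlib-only sketch (the credentials and `localhost:5672` below are made-up placeholders, not from your setup):

```python
from urllib.parse import quote

# Hypothetical credentials; '@' and '/' must not appear raw in the URL.
user, password, vhost = "myuser", "p@ss/word", "myvhost"

BROKER_URL = "amqp://%s:%s@localhost:5672/%s" % (
    quote(user, safe=""),
    quote(password, safe=""),
    quote(vhost, safe=""),
)
# BROKER_URL == 'amqp://myuser:p%40ss%2Fword@localhost:5672/myvhost'
```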

And finally, try to capture the full traceback of the exception in Python; that should hopefully give you some additional information about the error.
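For the errno 111 itself, a quick stdlib-only check (independent of Celery) tells you whether anything is listening on the broker port at all. A sketch, assuming the default RabbitMQ host and port; adjust both to your setup:

```python
import socket

def broker_reachable(host="localhost", port=5672, timeout=2.0):
    """Return True if a plain TCP connection to the broker port succeeds.

    A ConnectionRefusedError here is exactly errno 111: nothing is
    listening on that port, i.e. RabbitMQ is down or bound elsewhere.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False while RabbitMQ claims to be running, check which interface and port the broker is actually bound to.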

hth

p.s. re. daemonizing your celery worker, @budulianin's answer is spot on!

Author: PythonEnthusiast

Updated on June 24, 2022

Comments

  • PythonEnthusiast
    PythonEnthusiast 6 months

    My celery tasks stop getting executed in between. My rabbitmq stops in between and then I need to restart it manually. The last time (15-16 hours ago) a similar problem occurred; I did the following manually, and it started working again.

    I reinstalled rabbitmq and then it started working again:

    sudo apt-get --purge remove rabbitmq-server

    sudo apt-get install rabbitmq-server

    Now it is again showing:

    Celery - errno 111 connection refused

    Following is my config.

    BROKER_URL = 'amqp://'
    CELERY_RESULT_BACKEND = 'amqp://'
    CELERY_TASK_SERIALIZER = 'json'
    CELERY_RESULT_SERIALIZER = 'json'
    CELERY_ACCEPT_CONTENT=['json']
    CELERY_TIMEZONE = 'Europe/Oslo'
    CELERY_ENABLE_UTC = True
    CELERY_CREATE_MISSING_QUEUES = True
    

    Please let me know where I am going wrong.

    How should I rectify it?

    Part2

    Also, I have multiple queues. I can run it within the project directory, but when daemonizing, the workers don't take tasks. I still need to start the celery workers manually. How can I daemonize it?

    Here is my celeryd conf.

    # Names of the nodes to start, here we have four nodes
    CELERYD_NODES="w1 w2 w3 w4"
    CELERY_BIN="/usr/local/bin/celery"
    # Where to chdir at start.
    CELERYD_CHDIR="/var/www/fractal/parser-quicklook/"
    # Python interpreter from environment, if using virtualenv
    #ENV_PYTHON="/somewhere/.virtualenvs/MyProject/bin/python"
    # How to call "manage.py celeryd_multi"
    #CELERYD_MULTI="/usr/local/bin/celeryd-multi"
    # How to call "manage.py celeryctl"
    #CELERYCTL="/usr/local/bin/celeryctl"
    #CELERYBEAT="/usr/local/bin/celerybeat"
    # Extra arguments to celeryd
    CELERYD_OPTS="--time-limit=300 --concurrency=8  -Q BBC,BGR,FASTCOMPANY,Firstpost,Guardian,IBNLIVE,LIVEMINT,Mashable,NDTV,Pandodaily,Reuters,TNW,TheHindu,ZEENEWS "
    # Name of the celery config module, don't change this.
    CELERY_CONFIG_MODULE="celeryconfig"
    # %n will be replaced with the nodename.
    CELERYD_LOG_FILE="/var/log/celery/%n.log"
    CELERYD_PID_FILE="/var/run/celery/%n.pid"
    # Workers should run as an unprivileged user.
    #CELERYD_USER="nobody"
    #CELERYD_GROUP="nobody"
    # Set any other env vars here too!
    PROJECT_ENV="PRODUCTION"
    # Name of the projects settings module.
    # in this case is just settings and not the full path because it will change the dir to
    # the project folder first.
    CELERY_CREATE_DIRS=1
    

    The celeryconfig is already provided in Part 1.

    Here is my proj directory structure.

    project
    |-- main.py
    |-- project
    |   |-- celeryconfig.py
    |   |-- __init__.py
    |-- tasks.py
    

    How can I daemonize with the queues? I have provided the queues in CELERYD_OPTS as well.

    Is there a way in which we can dynamically daemonize the number of queues in celery? For example, we have CELERY_CREATE_MISSING_QUEUES = True for creating the missing queues. Is there something similar to daemonize the celery queues?

    • cmidi
      cmidi almost 8 years
      Did you update the rabbitmq server database after you brought it up again? You need to add users and a vhost and set permissions before you can connect celery using the same user: sudo rabbitmqctl add_user USERNAME PASSWORD; sudo rabbitmqctl add_vhost VHOST_NAME; sudo rabbitmqctl set_permissions -p VHOST_NAME USERNAME ".*" ".*" ".*"
    • cmidi
      cmidi almost 8 years
      Also, how does your celery app configuration element BROKER_URL look?
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      I'm not able to create a user. Whenever I try to create one, it throws an error saying Error: unable to connect to node '[email protected]': nodedown. Looking at sudo service rabbitmq-server status shows the same error.
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      I restarted the celery server, created the user and added the permissions. Then I restarted the rabbitmq server. After doing all this, I checked the celery status and it still shows the same error: connection refused.
    • cmidi
      cmidi almost 8 years
      I would suggest stopping the rabbitmq service before you change the database (sudo rabbitmqctl stop), change the database with the above commands, and then run the service in the background: sudo rabbitmq-server -detached
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      My BROKER_URL is BROKER_URL = 'amqp://'
    • cmidi
      cmidi almost 8 years
      How does your celery app configuration look? What is your BROKER_URL? Your BROKER_URL seems to be wrong. Also check the status of the rabbitmq service.
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      rabbitmq status is fine, and my broker URL setting in the config file is set as BROKER_URL = 'amqp://'
    • cmidi
      cmidi almost 8 years
      BROKER_URL = 'amqp://USERNAME:PASSWORD@HOSTNAME:5672/VHOST_NAME', CELERY_RESULT_BACKEND = 'amqp', CELERY_DISABLE_RATE_LIMITS = True. 5672 is the default port for rabbitmq.
    • cmidi
      cmidi almost 8 years
      The following link has all the information: celery.readthedocs.org/en/latest/getting-started/brokers/… The format is transport://userid:password@hostname:port/virtual_host
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      Is the HOSTNAME localhost? Please confirm, thanks.
    • cmidi
      cmidi almost 8 years
      The hostname is the hostname or IP address of the system running the rabbitmq service; localhost should work, but I have not tested it myself.
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      I set my broker URL as per your instructions, but I still get the same error on celery status: connection refused.
    • cmidi
      cmidi almost 8 years
      Make sure the firewall is not blocking the connection; you can shut it off with sudo service iptables stop. Make sure the rabbitmq process is running (ps aux | grep rabbitmq) and that a listening socket is up (lsof -p PID). Make sure the service is healthy: sudo rabbitmqctl status, sudo rabbitmqctl list_users, sudo rabbitmqctl list_vhosts, sudo rabbitmqctl list_permissions. Please provide the output of the above commands and check the logs; a wireshark capture can also point to the problem.
    • itzMEonTV
      itzMEonTV almost 8 years
      How did you start your celery worker?
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      The main question is that everything was working fine 2 hours before, and suddenly it has stopped connecting.
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      I was finally able to fix it with sudo apt-get --purge remove rabbitmq-server and sudo apt-get install rabbitmq-server. This fixed it.
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      Now how can I daemonize celery with multiple queues (Part 2 of the question)? Any idea?
    • cmidi
      cmidi almost 8 years
      Celery ships daemonizing scripts for workers with the distribution: celery.readthedocs.org/en/latest/tutorials/daemonizing.html
    • PythonEnthusiast
      PythonEnthusiast almost 8 years
      Now it's again not able to connect. Why am I getting this error again and again?
    • Snigdha Batra
      Snigdha Batra over 7 years
      Looks like the number of connections is exceeding the limit. Have you set the broker pool limit correctly?
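On the last comment's point about the broker pool limit, and on keeping daemonized workers in sync with the queue list from Part 2: both can be pinned down in the Python config. A minimal sketch of a celeryconfig fragment; BROKER_POOL_LIMIT is Celery's setting name (10 is its documented default), while the queue list and the helper name below are illustrative, copied from the CELERYD_OPTS in the question:

```python
# Cap the number of broker connections kept in the pool
# (set to None to disable connection pooling entirely).
BROKER_POOL_LIMIT = 10

# Queue names copied from CELERYD_OPTS in the question; keeping them
# in one list avoids the daemonized workers silently missing a queue.
QUEUES = [
    "BBC", "BGR", "FASTCOMPANY", "Firstpost", "Guardian", "IBNLIVE",
    "LIVEMINT", "Mashable", "NDTV", "Pandodaily", "Reuters", "TNW",
    "TheHindu", "ZEENEWS",
]

def celeryd_queue_opt(names):
    """Render the -Q argument used in CELERYD_OPTS from the list above."""
    return "-Q " + ",".join(names)
```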