Deleting all pending tasks in celery / rabbitmq
Solution 1
From the docs:
$ celery -A proj purge
or
from proj.celery import app
app.control.purge()
(EDIT: Updated with current method.)
Solution 2
For celery 3.0+:
$ celery purge
To purge a specific queue:
$ celery -Q queue_name purge
Solution 3
For Celery 2.x and 3.x:
When you start a worker with the -Q parameter to define its queues, for example
celery worker -Q queue1,queue2,queue3
then celery purge
will not work, because you cannot pass the queue params to it. It will only delete the default queue.
The solution is to start your workers with the --purge
parameter, like this:
celery worker -Q queue1,queue2,queue3 --purge
Note, however, that this also runs the worker.
Another option is to use the amqp subcommand of celery:
celery amqp queue.delete queue1
celery amqp queue.delete queue2
celery amqp queue.delete queue3
Solution 4
In Celery 3+:
CLI:
$ celery -A proj purge
Programmatically:
>>> from proj.celery import app
>>> app.control.purge()
http://docs.celeryproject.org/en/latest/faq.html#how-do-i-purge-all-waiting-tasks
Solution 5
I found that celery purge
doesn't work for my more complex celery config. I use multiple named queues for different purposes:
$ sudo rabbitmqctl list_queues -p celery name messages consumers
Listing queues ... # Output sorted, whitespaced for readability; pidbox names shown as placeholders
celery                                        0 2
<worker>@<host>.pidbox                        0 1
<worker>@<host>.pidbox                        0 1
apns                                          0 1
<worker>@<host>.pidbox                        0 1
analytics                                     1 1
<worker>@<host>.pidbox                        0 1
bcast.361093f1-de68-46c5-adff-d49ea8f164c0    0 1
bcast.a53632b0-c8b8-46d9-bd59-364afe9998c1    0 1
celeryev.c27b070d-b07e-4e37-9dca-dbb45d03fd54 0 1
celeryev.c66a9bed-84bd-40b0-8fe7-4e4d0c002866 0 1
celeryev.b490f71a-be1a-4cd8-ae17-06a713cc2a99 0 1
celeryev.9d023165-ab4a-42cb-86f8-90294b80bd1e 0 1
The first column is the queue name, the second is the number of messages waiting in the queue, and the third is the number of consumers for that queue. The queues are:
- celery - Queue for standard, idempotent celery tasks
- apns - Queue for Apple Push Notification Service tasks, not quite as idempotent
- analytics - Queue for long running nightly analytics
- *.pidbox - Queue for worker commands, such as shutdown and reset, one per worker (2 celery workers, one apns worker, one analytics worker)
- bcast.* - Broadcast queues, for sending messages to all workers listening to a queue (rather than just the first to grab it)
- celeryev.* - Celery event queues, for reporting task analytics
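The queue categories above can be told apart from the name alone. Here is a small sketch of that classification; the patterns are inferred from the naming conventions in the listing, not from any Celery API:

```python
# Toy classifier for the queue names in the listing above. The category
# labels and name patterns are inferred from the listing, not from Celery.
def classify_queue(name):
    if name.endswith(".pidbox"):
        return "worker commands"      # *.pidbox
    if name.startswith("bcast."):
        return "broadcast"            # bcast.*
    if name.startswith("celeryev."):
        return "events"               # celeryev.*
    return "tasks"                    # celery, apns, analytics, ...

print(classify_queue("bcast.361093f1-de68-46c5-adff-d49ea8f164c0"))  # broadcast
```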
The analytics task is a brute-force task that worked great on small data sets, but now takes more than 24 hours to process. Occasionally, something will go wrong and it will get stuck waiting on the database. It needs to be rewritten, but until then, when it gets stuck I kill the task, empty the queue, and try again. I detect "stuckness" by looking at the message count for the analytics queue, which should be 0 (finished analytics) or 1 (waiting for last night's analytics to finish). 2 or higher is bad, and I get an email.
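That "stuckness" check could be scripted along these lines: a minimal sketch that parses rabbitmqctl list_queues output (name, messages, consumers columns) and flags any watched queue whose backlog crosses a threshold. The function name and the watch dictionary are illustrative, not part of Celery or RabbitMQ:

```python
# Minimal sketch of the "stuckness" check: parse the output of
# `rabbitmqctl list_queues -p celery name messages consumers` and flag
# watched queues whose message backlog crosses a threshold.
def stuck_queues(listing, watch=None):
    watch = watch or {"analytics": 2}  # 0-1 messages is normal, 2+ is stuck
    stuck = []
    for line in listing.splitlines():
        parts = line.split()
        if len(parts) != 3 or not parts[1].isdigit():
            continue  # skip header/footer lines like "Listing queues ..."
        name, messages = parts[0], int(parts[1])
        if name in watch and messages >= watch[name]:
            stuck.append((name, messages))
    return stuck

sample = """Listing queues ...
celery 0 2
apns 0 1
analytics 3 1"""
print(stuck_queues(sample))  # [('analytics', 3)]
```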
celery purge
offers to erase tasks from one of the broadcast queues, and I don't see an option to pick a different named queue.
Here's my process:
$ sudo /etc/init.d/celeryd stop # Wait for analytics task to be last one, Ctrl-C
$ ps -ef | grep analytics # Get the PID of the worker, not the root PID reported by celery
$ sudo kill <PID>
$ sudo /etc/init.d/celeryd stop # Confirm dead
$ python manage.py celery amqp queue.purge analytics
$ sudo rabbitmqctl list_queues -p celery name messages consumers # Confirm messages is 0
$ sudo /etc/init.d/celeryd start
Updated on February 05, 2022

Comments
- nabizan over 2 years: How can I delete all pending tasks without knowing the task_id for each task?
- Jonathan Geisler about 11 years: Or, from Django, for celery 3.0+: manage.py celery purge (celeryctl is now deprecated and will be gone in 3.1).
- Melignus almost 11 years: I found this answer looking for how to do this with a redis backend. Best method I found was redis-cli KEYS "celery*" | xargs redis-cli DEL, which worked for me. This will wipe out all tasks stored on the redis backend you're using.
- luistm over 10 years: How can I do this in celery 3.0?
- Erve1879 almost 10 years: For me, it was simply celery purge (inside the relevant virtual env). Oops - there's an answer with the same below: stackoverflow.com/a/20404976/1213425
- Armen Michaeli over 9 years: Not an answer though, is it? Very informative, however!
- jwhitlock over 9 years: celeryctl purge didn't work with named queues. python manage.py celery amqp queue.purge <queue_name> did. I think the context is useful for those with complex setups, so they can figure out what they need to do if celeryctl purge fails for them.
- Armen Michaeli over 9 years: I cannot find manage.py in my Celery 3.1.17; has the file been removed, or is it brand new? I found what looks like the corresponding interface (queue.purge) in */bin/amqp.py, however. But after trying to correlate the contents of the file with the documentation, I must regrettably admit that Celery is woefully undocumented and also a very convoluted piece of work, at least judging by its source code.
- jwhitlock over 9 years: manage.py is the Django management script, and manage.py celery runs celery after loading configuration from Django settings. I haven't used celery outside of Django, but the included celery command may be what you are looking for: celery.readthedocs.org/en/latest/userguide/monitoring.html
- Kamil Sindi about 8 years: If you get connection errors, make sure you specify the app, e.g. celery -A proj purge.
- gitaarik over 7 years: For Celery 4.0+ in combination with Django it's again this command, where the argument to -A is the Django app where the celery.py is located.
- smido almost 7 years: Yes, this is for older (2.x and maybe 3.x) versions of celery. I cannot edit the answer.
- JasonGenX almost 3 years: This doesn't work on scheduled tasks. After such a purge you can still see them scheduled, and they WILL run when their time comes (you can see them with inspect scheduled).
- mlissner over 2 years: No idea why, but this just seems to endlessly hang for me. To deal with it, I used redis-cli --bigkeys to find the biggest keys, which happened to be my queue names. I DEL'ed those keys in redis, and things seem to be OK. This deletes your whole queue, but I was OK doing that.
- Thorvald over 2 years: I believe the -Q flag has been deprecated (it didn't work for me: "no such option"). To delete a specific queue on Celery 5.0.5 you'd run celery -A appname purge --queues queuename.
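As an aside on the Redis approach mentioned in the comments above: redis-cli KEYS "celery*" uses glob-style matching, which Python's fnmatch mirrors closely enough to illustrate which keys would be deleted. The sample key names below are invented for the example:

```python
from fnmatch import fnmatch

# Sketch of what `redis-cli KEYS "celery*" | xargs redis-cli DEL` would hit.
# Redis KEYS does glob-style matching; fnmatch behaves the same way here.
# These sample key names are invented for illustration, not real Redis data.
sample_keys = [
    "celery",                 # the default task queue (a Redis list)
    "celery-task-meta-123",   # a result-backend key (assumed example)
    "_kombu.binding.celery",  # kombu binding metadata: NOT matched
    "sessions:42",            # unrelated application data: NOT matched
]
matched = [k for k in sample_keys if fnmatch(k, "celery*")]
print(matched)  # ['celery', 'celery-task-meta-123']
```

Note that keys which merely contain "celery" (like the _kombu.binding entries) are not matched, since the glob is anchored at the start of the name.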