Redis Python - how to delete all keys according to a specific pattern in Python, without Python iterating
Solution 1
I think the `for key in x: cache.delete(key)` loop is pretty good and concise. `delete` really wants one key at a time, so you have to loop.

Otherwise, this previous question and answer points you to a Lua-based solution.
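For reference, here is a minimal sketch of what such a Lua-based approach can look like with py-redis. The script body is an illustration, not the one from the linked answer, and since `KEYS` inside Lua blocks the server, treat it as a dev-only convenience:

```python
# Illustrative sketch: match and delete server-side via a Lua script.
# Lua's redis.call('KEYS', ...) blocks Redis while it runs, so this is
# only reasonable for small datasets or offline maintenance.
from redis import StrictRedis

cache = StrictRedis()

DELETE_BY_PATTERN = """
local keys = redis.call('KEYS', ARGV[1])
for i = 1, #keys do
    redis.call('DEL', keys[i])
end
return #keys
"""

# 0 = no KEYS arguments; the pattern is passed as ARGV[1]
deleted = cache.eval(DELETE_BY_PATTERN, 0, 'prefix:*')
```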
Solution 2
Use SCAN iterators: https://pypi.python.org/pypi/redis

```python
for key in r.scan_iter("prefix:*"):
    r.delete(key)
```
Solution 3
Here is a full working example using py-redis:
```python
from redis import StrictRedis
cache = StrictRedis()

def clear_ns(ns):
    """
    Clears a namespace
    :param ns: str, namespace i.e. your:prefix
    :return: int, cleared keys
    """
    count = 0
    ns_keys = ns + '*'
    for key in cache.scan_iter(ns_keys):
        cache.delete(key)
        count += 1
    return count
```
You can also use `scan_iter` to collect all the keys into memory first and then pass them all to `delete` in a single call for a bulk delete, but that may take a good chunk of memory for larger namespaces, so it's probably best to run a `delete` per key (a sketch of the bulk variant follows below).
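For illustration, the bulk variant could look like this (a sketch reusing the `cache` client from above; note the whole key list is held in Python memory):

```python
# Gather all matching keys first, then issue a single DEL.
keys = list(cache.scan_iter('your:prefix:*'))
if keys:  # DEL with no arguments is an error
    cache.delete(*keys)
```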
Cheers!
UPDATE:
Since writing this answer, I started using the pipelining feature of Redis to send all commands in one request and avoid network latency:
```python
from redis import StrictRedis
cache = StrictRedis()

def clear_cache_ns(ns):
    """
    Clears a namespace in redis cache.
    This may be very time consuming.
    :param ns: str, namespace i.e. your:prefix*
    :return: int, num cleared keys
    """
    count = 0
    pipe = cache.pipeline()
    for key in cache.scan_iter(ns):
        pipe.delete(key)
        count += 1
    pipe.execute()
    return count
```
UPDATE2 (Best Performing):
If you use `scan` instead of `scan_iter`, you can control the chunk size and iterate through the cursor using your own logic. This also seems to be a lot faster, especially when dealing with many keys. If you add pipelining to this you will get a bit of a performance boost, 10-25% depending on chunk size, at the cost of memory usage, since you will not send the execute command to Redis until everything is generated. So I stuck with plain `scan` (a sketch of the pipelined variant follows the example below):
```python
from redis import StrictRedis
cache = StrictRedis()
CHUNK_SIZE = 5000

def clear_ns(ns):
    """
    Clears a namespace
    :param ns: str, namespace i.e. your:prefix
    :return: bool, True when the namespace has been cleared
    """
    cursor = '0'
    ns_keys = ns + '*'
    while cursor != 0:
        cursor, keys = cache.scan(cursor=cursor, match=ns_keys, count=CHUNK_SIZE)
        if keys:
            cache.delete(*keys)
    return True
```
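For completeness, here is a sketch of the scan-plus-pipeline combination mentioned above. It reuses `cache` and `CHUNK_SIZE` from the example and flushes the pipeline once per chunk, which bounds memory; buffering every command before a single `execute()` is the higher-memory trade-off described earlier:

```python
# Sketch: SCAN in chunks, buffer DELs in a pipeline, flush per chunk.
def clear_ns_piped(ns):
    cursor = '0'
    ns_keys = ns + '*'
    while cursor != 0:
        cursor, keys = cache.scan(cursor=cursor, match=ns_keys, count=CHUNK_SIZE)
        pipe = cache.pipeline()
        for key in keys:
            pipe.delete(key)
        pipe.execute()  # one round trip per chunk
    return True
```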
Here are some benchmarks:
5k chunks using a busy Redis cluster:

```
Done removing using scan in 4.49929285049
Done removing using scan_iter in 98.4856731892
Done removing using scan_iter & pipe in 66.8833789825
Done removing using scan & pipe in 3.20298910141
```
5k chunks and a small idle dev redis (localhost):

```
Done removing using scan in 1.26654982567
Done removing using scan_iter in 13.5976779461
Done removing using scan_iter & pipe in 4.66061878204
Done removing using scan & pipe in 1.13942599297
```
Solution 4
From the documentation:

```
delete(*names)
    Delete one or more keys specified by names
```

This just wants an argument per key to delete, and then it will tell you how many of them were found and deleted.

In the case of your code above, I believe you can just do:

```python
redis.delete(*x)
```

But I will admit I am new to Python and I just do:

```python
deleted_count = redis.delete('key1', 'key2')
```
Solution 5
The `cache.delete(*keys)` solution of Dirk works fine, but make sure `keys` isn't empty to avoid a `redis.exceptions.ResponseError: wrong number of arguments for 'del' command`.

If you are sure that you will always get a result: `cache.delete(*cache.keys('prefix:*'))`
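A minimal guarded version, for illustration (the `prefix:*` pattern is just an example):

```python
# Only call delete when the pattern actually matched something,
# avoiding the "wrong number of arguments" ResponseError.
keys = cache.keys('prefix:*')
if keys:
    cache.delete(*keys)
```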
alonisser
Updated on July 09, 2022

Comments
-
alonisser almost 2 years
I'm writing a django management command to handle some of our redis caching. Basically, I need to choose all keys that conform to a certain pattern (for example: "prefix:*") and delete them.
I know I can use the cli to do that:

```
redis-cli KEYS "prefix:*" | xargs redis-cli DEL
```
But I need to do this from within the app. So I need to use the Python binding (I'm using py-redis). I have tried feeding a list into delete, but it fails:

```python
from common.redis_client import get_redis_client

cache = get_redis_client()
x = cache.keys('prefix:*')
x == ['prefix:key1', 'prefix:key2']  # True

# And now
cache.delete(x)
# returns 0, nothing is deleted
```
I know I can iterate over x:

```python
for key in x:
    cache.delete(key)
```

But that would be losing Redis' awesome speed and misusing its capabilities. Is there a pythonic solution with py-redis, without iteration and/or the cli?
Thanks!
-
selfnamed almost 10 years: Using the redis-python package you can do it this way: `cache.delete(*keys)`
-
radtek about 7 years: Don't use `cache.keys()` in prod, it's intended for debugging: redis.io/commands/keys
-
radtek about 7 years: This question pertains to redis, so the django cache framework is out of scope.
-
radtek almost 7 years: I provided a full working example. I'd also like others to comment on `scan_iter` vs. bulk delete.
-
Robert Lujo over 6 years: django-redis implements `delete_pattern`, which does something very similar to this; see github.com/niwinz/django-redis/blob/master/django_redis/client/… (a usage sketch follows these comments).
-
Blackeagle52 about 5 years: Great answer, this should be the correct answer. Today I actually needed this answer myself and prefer it above mine, although there are some minor errors in your examples, like the missing `ns_keys` variable in your first update and the `::` within your second update.
-
radtek about 5 years: Thanks, but I actually don't use scanning in prod because it's so slow; instead I end up caching every key in the namespace and doing a bulk delete that way. It seems like overkill, I know, but the performance is best because you don't have to scan the cache at all (a sketch of this pattern follows these comments).
-
Joshua Davies over 4 years: FYI, there's a syntax error on line 13 of your third example; you have two colons (`::`) at the end of your while condition.
-
mirekphd almost 4 years: UPDATE2 is priceless in production (the default `chunk_size` would be an order of magnitude slower!). chunk_size vs. execute time: 5: 1m 44.8s; 50: 17.0s; 500: 7.3s; 5000: timeout reading from socket.
-
mirekphd almost 4 years: Be very careful when using high values of `chunk_size` in PRD; increase the size slowly, starting from 1, or else you can easily cause cache read timeouts! Chunk size has a direct impact on `redis-server` CPU utilization (e.g. reaching 90% with a `chunk_size` of 170 for a 30+ gig database with millions of keys, where `scan` + `delete` takes about 4 minutes for this maximum safe chunk).
-
radtek almost 4 years: It all depends on your instance size; fine tuning will be required. A 5k chunk size was perfect for me.
-
Jivan about 3 years: The typing between `cursor = '0'` and `while cursor != 0` is awkward. You could use `cursor = None` and `cache.scan(cursor=cursor or 0, ...)` to make it slightly better.
-
Kyle Barron over 2 years: Note that even when this answer was written in 2017, redis-py has allowed you to provide a chunk size to `scan_iter`, so you don't have to manage the cursor yourself (github.com/andymccurdy/redis-py/blob/…). A sketch follows these comments.
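Following up on Robert Lujo's comment above, a minimal usage sketch of django-redis's `delete_pattern`; this assumes django-redis is configured as the default Django cache backend:

```python
# Sketch: django-redis exposes delete_pattern on its cache client,
# which (per the linked source) scans for matching keys and deletes them.
from django.core.cache import cache

cache.delete_pattern("prefix:*")
```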
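And here is a sketch of the scan-free pattern radtek describes in the comments: record every key of a namespace in a Redis set as you write it, then bulk delete from that index. The `:keys` index suffix and the helper names are illustrative assumptions, not code from the answer:

```python
# Hypothetical key-tracking pattern: maintain an index set per namespace
# so clearing never needs to SCAN the keyspace.
from redis import StrictRedis

cache = StrictRedis()

def set_in_ns(ns, key, value):
    cache.set(key, value)
    cache.sadd(ns + ':keys', key)  # record the key in the namespace index

def clear_ns_tracked(ns):
    index = ns + ':keys'
    keys = cache.smembers(index)
    if keys:
        cache.delete(*keys)  # bulk delete, no scanning involved
    cache.delete(index)
    return len(keys)
```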
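Finally, per Kyle Barron's comment, a sketch of letting `scan_iter` manage the cursor itself while still hinting a chunk size through its `count` parameter:

```python
# scan_iter accepts a COUNT hint, so redis-py handles the cursor internally.
from redis import StrictRedis

cache = StrictRedis()

for key in cache.scan_iter("prefix:*", count=5000):
    cache.delete(key)
```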