Can Redis write out to a database like PostgreSQL?


Solution 1

Redis is increasingly used as a caching layer, much like a more sophisticated memcached, and it is very useful in this role. You usually use Redis as a write-through cache for data you want to be durable, and as a write-back cache for data you might want to accumulate and then batch-write (where you can afford to lose recent data).
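The two patterns can be sketched as follows. This is a minimal illustration, not a real integration: plain dicts stand in for Redis and PostgreSQL, and all names here are made up for the example.

```python
# Sketch of the two caching patterns described above. Plain dicts stand in
# for Redis and PostgreSQL; in real code these would be a redis client and
# a database connection.

cache = {}              # stands in for Redis
database = {}           # stands in for PostgreSQL
write_back_buffer = []  # recent writes not yet flushed (lost on a crash)

def write_through(key, value):
    """Durable data: write to the database first, then update the cache."""
    database[key] = value
    cache[key] = value

def write_back(key, value, batch_size=3):
    """Accumulated data: update the cache now, flush to the DB in batches."""
    cache[key] = value
    write_back_buffer.append((key, value))
    if len(write_back_buffer) >= batch_size:
        for k, v in write_back_buffer:
            database[k] = v       # in real code: one batched INSERT/UPDATE
        write_back_buffer.clear()
```

The trade-off is visible in the sketch: `write_through` never has the database lag behind the cache, while `write_back` leaves a window (the buffer) where recent data exists only in the cache.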

PostgreSQL's LISTEN and NOTIFY system is very useful for doing selective cache invalidation, letting you purge records from Redis when they're updated in PostgreSQL.
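One way to wire this up is a trigger that NOTIFYs on every update, plus a worker that LISTENs and deletes the matching Redis key. The sketch below is an assumption-laden illustration: the `users` table, the `cache_invalidation` channel, and the key format are all invented for the example, and the worker loop is shown only in comments because it needs live psycopg2 and redis connections.

```python
# SQL to run once against PostgreSQL (table/channel names are assumptions;
# CREATE TRIGGER ... EXECUTE FUNCTION needs PostgreSQL 11+, older versions
# use EXECUTE PROCEDURE).
INVALIDATION_TRIGGER_SQL = """
CREATE OR REPLACE FUNCTION notify_cache_invalidation() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('cache_invalidation', TG_TABLE_NAME || ':' || NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_cache_invalidation
AFTER UPDATE ON users
FOR EACH ROW EXECUTE FUNCTION notify_cache_invalidation();
"""

def redis_key_for_payload(payload):
    """Map a NOTIFY payload like 'users:42' to the Redis key to DEL."""
    table, row_id = payload.split(":", 1)
    return f"{table}:{row_id}"

# A cache-manager worker would then do roughly (not run here):
#   cur.execute("LISTEN cache_invalidation;")
#   ... and on each notification:
#   redis_conn.delete(redis_key_for_payload(notification.payload))
```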

For combining it with PostgreSQL, you will find the Redis foreign data wrapper that Andrew Dunstan and Dave Page are working on very interesting.

I'm not aware of any tool that makes Redis into a transparent write-back cache for PostgreSQL; their data models are probably too different for this to work well. Usually you either write changes to PostgreSQL and invalidate the corresponding Redis cache entries via LISTEN/NOTIFY to a cache-manager worker, or you queue changes in Redis and have your app read them out and write them into PostgreSQL in chunks.
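The second approach, queueing changes in Redis and draining them in chunks, can be sketched like this. A `deque` stands in for a Redis list (its `appendleft`/`pop` mirror LPUSH/RPOP), and `postgres_rows` stands in for the target table; every name is illustrative.

```python
from collections import deque

# Sketch of the "queue changes in Redis, batch-write to PostgreSQL" pattern.
pending = deque()   # stands in for a Redis list (LPUSH on one end, RPOP on the other)
postgres_rows = []  # stands in for the target PostgreSQL table

def enqueue_change(row):
    """App code pushes each change onto the queue (LPUSH in real Redis)."""
    pending.appendleft(row)

def drain_in_chunks(chunk_size=100):
    """Worker pops up to chunk_size items (RPOP) and writes them in one batch.
    Returns the number of rows written."""
    batch = []
    while pending and len(batch) < chunk_size:
        batch.append(pending.pop())
    if batch:
        postgres_rows.extend(batch)  # in real code: one multi-row INSERT
    return len(batch)
```

Because the worker pops from the opposite end it pushes to, changes are applied in FIFO order, and batching them amortizes the cost of each PostgreSQL round trip.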

Solution 2

Redis is persistent if configured to be so, both through point-in-time snapshots (RDB) and an append-only file (AOF), a kind of write-ahead log. Plenty of people use it as a primary datastore. https://redis.io/topics/persistence
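Both mechanisms are enabled in redis.conf. The fragment below is illustrative; the thresholds are example values, not recommendations.

```conf
# Illustrative redis.conf persistence fragment (example values only)
save 300 1000        # RDB snapshot if >= 1000 keys changed within 300 seconds
appendonly yes       # enable the AOF (append-only file)
appendfsync everysec # fsync the AOF once per second
```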

If one is referring to the broader world of Redis-compatible (RESP protocol) datastores, many are not limited to in-memory storage: https://keydb.dev/ http://ssdb.io/ and many more...




Author: deadlock

I specialize in Django & PostgreSQL. I'm self-taught and most of my knowledge comes from places like Stack Overflow and Server Fault.

Updated on November 11, 2020

Comments

  • deadlock
    deadlock over 3 years

I've been using PostgreSQL for the longest time. All of my data lives inside Postgres. I've recently looked into Redis and it has a lot of powerful features that would otherwise take a couple of lines in Django (Python) to do. Redis data is persistent as long as the machine it's running on doesn't go down, and you can configure it to write the data it's storing out to disk every 1000 key changes or every 5 minutes or so, depending on your choice.

Redis would make a great cache and it would certainly replace a lot of functions I have written in Python (upvoting a user's post, viewing their friends list, etc.). But my concern is that all of this data would somehow need to be translated over to Postgres. I don't trust storing this data in Redis. I see Redis as a temporary storage solution for quick retrieval of information. It's extremely fast, and this far outweighs doing repetitive queries against Postgres.

I'm assuming the only way I could technically write the Redis data to the database is to save() whatever I get back from a Redis GET to the Postgres database through Django.

    That's the only solution I could think of. Do you know of any other solutions to this problem?
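The upvote example from the question can be sketched as a counter that accumulates in Redis and is periodically folded into Postgres. Dicts stand in for both stores here; the key format and function names are invented for the illustration, not a real Django or redis API.

```python
# Sketch: upvotes accumulate in Redis, a periodic job flushes them to Postgres.
redis_counters = {}   # stands in for Redis, e.g. INCR post:123:upvotes
postgres_counts = {}  # stands in for the durable totals in PostgreSQL

def upvote(post_id):
    """Hot path: bump the counter in the cache only (INCR in real Redis)."""
    key = f"post:{post_id}:upvotes"
    redis_counters[key] = redis_counters.get(key, 0) + 1

def flush_counters():
    """Periodic job: add the buffered deltas to the durable totals and reset."""
    for key, delta in redis_counters.items():
        postgres_counts[key] = postgres_counts.get(key, 0) + delta
    redis_counters.clear()
```

The hot path never touches Postgres; only the periodic flush does, which is the trade-off the question describes (fast reads/writes, at the cost of losing un-flushed counts if Redis dies).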

    • deadlock
      deadlock almost 11 years
I've found a similar question here: stackoverflow.com/questions/16234221/… . I believe the OP is asking something similar; however, the answers are not what I am looking for.
    • akonsu
      akonsu almost 11 years
      We have a web server that implements an API, and we use Redis as a cache. When a POST with new data comes in, we store the data in Redis, and inform a background process of these new data, and the process pushes them to the (MySql) database (we use a Redis list to push data to the process). To read data, we check Redis first, and if data are not there, we get them from the DB, put to Redis, and return to the client.
    • Admin
      Admin about 5 years
      @akonsu it's been quite a few years, but what kind of performance did you get out of that strategy?
  • Asif
    Asif almost 5 years
I am working on an application which has an API method m(). m() is supposed to read strings from a table T, create a random string, and save it to T at the end. Thousands of threads/users may be calling m(), so there will be multiple selects and multiple inserts. I can't just cache the strings because they will be invalidated so frequently anyway. I was also planning on writing a write-back Redis solution such that both reads and writes happen against the cache (which is loaded once a day) and MySQL is updated, say, every hour or every 1000 records. Is the above still the only way to do this?
  • Craig Ringer
    Craig Ringer almost 5 years
    @Mustafa Post a new question please, and include a link to this one in your new question.
  • user1034912
    user1034912 about 3 years
Wouldn't configuring Redis to be persistent slow it down significantly? Might as well use an RDBMS like Postgres.
  • vectorselector
    vectorselector about 3 years
Dear user1034912, the answer to your baseless speculation is no, it does not slow Redis down significantly. You will note that I also linked two Redis clones, one of which is designed to be a persistent primary store. A key-value store is a key-value store, not an RDBMS, and vice versa. Do not blindly assume that all storage needs map to an RDBMS, nor that ACID is the only acronym or kind of service guarantee in the world. Use Postgres if you want Postgres; forget fuzzy thinking like "might as well" if your data, in fact, maps to key-value hashes.
  • vectorselector
    vectorselector about 3 years
Additionally, bear in mind that normalization need not stop at tabular granularity. Libraries like Ohm for Redis prove that even complex nested data structures can be represented in key-value stores. I am not denying the widespread use of RDBMSs, nor am I suggesting that people switch to mapping their data to key-value reductions. I wish to point out that knee-jerk RDBMS zealotry is not a logically defensible position, but rather an emotionally extreme one.
  • vectorselector
    vectorselector about 3 years
Additionally, even for a cache or temporary storage, there are times when a WAL and being able to recreate the data from a log are vital. Example: a crashed server and user sessions that you want to quickly inflate on your HA secondary.