Redis: Is there a limit to the number of keys that I can store?


Am I missing something?

Yes. Redis is a pure in-memory store, with persistence options. Everything must fit in memory.

Does this mean that Redis does not swap out to disk once maxmemory is reached?

Precisely.

Is it trying to hold all the keys and values in memory?

Keys and values, yes.

Does it mean that I have to incrementally increase the max memory limit according to the increase in the number of key-value pairs that I might have to store?

You need to decide upfront how much memory you allocate to Redis, yes.

If you are memory constrained, you will be better served with a disk-based store.
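The memory cap and the eviction behaviour it triggers are both configured in redis.conf (or at runtime via CONFIG SET); a minimal sketch, using the illustrative values discussed in this question:

```conf
# redis.conf -- illustrative values, not a recommendation

# Hard cap on memory used for data; writes beyond this trigger the policy below
maxmemory 100mb

# What to do when the limit is hit. The default, noeviction, returns errors
# on writes; allkeys-random evicts whole keys at random until usage drops
# back under the limit.
maxmemory-policy allkeys-random

# AOF persistence with an fsync every second, as in the question
appendonly yes
appendfsync everysec
```

Note that persistence does not change the memory requirement: the AOF is a recovery log, not an overflow area, so the full dataset must still fit under maxmemory.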



Author by Arun Jolly

Updated on July 12, 2020

Comments

  • Arun Jolly
    Arun Jolly almost 4 years

    First, the context: I'm trying to use Redis as an in-memory store backed by persistence. I need to store a large number of objects (millions) in a Redis hash.

    At the same time, I don't want my Redis instance to consume too much memory, so I've set the maxmemory property in redis.conf to 100mb and maxmemory-policy to allkeys-random. The persistence mode is AOF with an fsync every second.

    Now, the issue I am facing: every time I try to store more than two hundred thousand objects in the hash, the hash gets reset (i.e. all the existing field-value pairs in the hash vanish). I confirm this with the HLEN command on the hash in redis-cli.

    Find below the object I'm trying to store:

        import java.io.Serializable;

        public class Employee implements Serializable {

            private static final long serialVersionUID = 1L;
            int id;
            String name;
            String department;
            String address;

            public Employee(int id, String name, String department, String address) {
                this.id = id;
                this.name = name;
                this.department = department;
                this.address = address;
            }

            /* Getters and Setters */

            /* hashCode - generates the hash field (key) for each object */
            @Override
            public int hashCode() {
                final int prime = 31;
                int result = 1;
                result = prime * result + ((address == null) ? 0 : address.hashCode());
                result = prime * result + ((department == null) ? 0 : department.hashCode());
                result = prime * result + id;
                result = prime * result + ((name == null) ? 0 : name.hashCode());
                return result;
            }
        }
    

    Also, find below the code that stores into Redis (I'm using Jedis to interact with Redis):

        JedisPool jedisPool = new JedisPool(new JedisPoolConfig(), "localhost");
        try (Jedis jedis = jedisPool.getResource()) {

            /* The Redis hash name, in byte-array form */
            String tableName = "Employee_Details";
            byte[] key = tableName.getBytes();

            System.out.println("Starting....");
            for (int i = 0; i < 1000000; i++) {

                /* Serializing the object to a byte array */
                Employee employee = new Employee(i, "Arun Jolly", "IT", "SomeCompany");
                ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
                try (ObjectOutputStream objectOutputStream = new ObjectOutputStream(byteArrayOutputStream)) {
                    objectOutputStream.writeObject(employee);
                }
                byte[] value = byteArrayOutputStream.toByteArray();

                /* Creating the hash field in byte-array form from hashCode() */
                ByteBuffer buffer = ByteBuffer.allocate(Integer.BYTES);
                buffer.putInt(employee.hashCode());
                byte[] field = buffer.array();

                jedis.hset(key, field, value);
                System.out.println("Stored Employee " + i);
            }
        }
    

    Am I missing something?

    Does this mean that Redis does not swap out to disk once maxmemory is reached (is it trying to hold all the keys and values in memory)? Does it mean that I have to incrementally increase the maxmemory limit according to the growth in the number of key-value pairs that I have to store?

  • Arun Jolly
    Arun Jolly almost 11 years
    So this is a scalability issue with Redis, right? Does this mean that Redis is not intended to be used as a primary data store? (Sorry to be asking the same question in a different way, but I need to be absolutely certain before I make a decision on this.) Can you suggest another NoSQL DB that doesn't have these limitations? (Something that can be used as a primary data store.)
  • Didier Spezia
    Didier Spezia almost 11 years
    It is not a scalability issue; it is a deliberate choice to support only in-memory databases, targeting optimal performance. Disk I/Os are too slow. Redis can be used as a primary data store, provided you have enough memory. If your volume of data does not fit in memory, you may want to look at solutions like MongoDB or Couchbase. But do not expect the same kind of raw performance.
  • Ashwani Agarwal
    Ashwani Agarwal about 5 years
    Is this answer still valid?
  • user1694845
    user1694845 over 3 years
    I wouldn't use Redis as a primary datastore unless you can tolerate 100% data loss. There are incremental backups available to restore a prior state, but you will still experience some data loss.
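One likely explanation for the vanishing hash: Redis eviction policies such as allkeys-random evict whole keys, and in the code above every employee is a field of the single key Employee_Details, so when eviction fires it drops the entire hash at once. A minimal Java sketch of that behaviour (EvictionSketch and its field-count limit are illustrative stand-ins for Redis internals, not real Redis code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Toy model of allkeys-random eviction: eviction removes a whole KEY,
// so a hash holding all the records is one key and disappears wholesale.
public class EvictionSketch {
    static Map<String, Map<Integer, String>> store = new HashMap<>();
    static final int MAX_FIELDS = 200_000; // stand-in for "maxmemory 100mb"
    static final Random random = new Random();

    static int totalFields() {
        return store.values().stream().mapToInt(Map::size).sum();
    }

    static void hset(String key, int field, String value) {
        while (totalFields() >= MAX_FIELDS && !store.isEmpty()) {
            // Evict a random key -- the entire hash, not individual fields
            Object[] keys = store.keySet().toArray();
            store.remove(keys[random.nextInt(keys.length)]);
        }
        store.computeIfAbsent(key, k -> new HashMap<>()).put(field, value);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 300_000; i++) {
            hset("Employee_Details", i, "employee-" + i);
        }
        // The single hash key was evicted wholesale when the limit was hit,
        // so the surviving field count is far below 300_000
        System.out.println(store.get("Employee_Details").size()); // prints 100000
    }
}
```

Under this model, storing each record under its own key (e.g. a hypothetical employee:&lt;id&gt; naming scheme) would let eviction shed individual records instead of the whole dataset.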
