Redis: Is there a limit to the number of keys that I can store?


Am I missing something?

Yes. Redis is a pure in-memory store with persistence options. Everything must fit in memory.

Does this mean that Redis does not swap out to disk once maxmemory is reached?

Precisely.

Is it trying to hold all the key values in memory?

Keys and values, yes.

Does it mean that I have to incrementally increase the max memory limit according to the increase in the number of key-value pairs that I might have to store?

You need to decide upfront how much memory you allocate to Redis, yes.

If you are memory constrained, you will be better served with a disk-based store.
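One way to decide that number upfront is to estimate the per-entry payload and multiply by the expected entry count. Below is a rough, self-contained sketch using the same Java serialization the question uses; the `Employee` stand-in and the entry count are illustrative, and the result ignores Redis's own per-key and per-hash overhead, so treat it as a lower bound rather than an exact figure:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class MaxMemoryEstimate {
    // Minimal stand-in for the question's Employee object.
    static class Employee implements Serializable {
        private static final long serialVersionUID = 1L;
        int id;
        String name;
        String department;
        String address;

        Employee(int id, String name, String department, String address) {
            this.id = id;
            this.name = name;
            this.department = department;
            this.address = address;
        }
    }

    // Serialized size of one representative entry, in bytes.
    static int payloadBytes(Object sample) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(out)) {
            oos.writeObject(sample);
        }
        return out.toByteArray().length;
    }

    public static void main(String[] args) throws IOException {
        Employee sample = new Employee(0, "Arun Jolly", "IT", "SomeCompany");
        int perEntry = payloadBytes(sample);
        long entries = 1_000_000L; // expected number of hash fields (assumption)
        long lowerBoundBytes = (long) perEntry * entries;
        System.out.printf("~%d bytes per entry, >= %d MB total%n",
                perEntry, lowerBoundBytes / (1024 * 1024));
    }
}
```

Running this against a representative object makes it easy to see whether a cap like 100mb can plausibly hold the full data set before committing to it.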


Author: Arun Jolly

Updated on June 04, 2022

Comments

  • Arun Jolly, 20 days ago

    First, the context: I'm trying to use Redis as an in-memory store backed by persistence. I need to store a large number of objects (millions) in a Redis hash.

    At the same time, I don't want my Redis instance to consume too much memory, so I've set the maxmemory property in redis.conf to 100mb and maxmemory-policy to allkeys-random. Persistence mode is AOF with fsync every second.
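    The settings described above correspond to a redis.conf fragment along these lines (a sketch reconstructed from the description, not a complete config):

    ```
    maxmemory 100mb
    maxmemory-policy allkeys-random
    appendonly yes
    appendfsync everysec
    ```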

    Now the issue I am facing is that every time I try to store more than two hundred thousand objects in the hash, the hash gets reset (i.e., all the existing key-value pairs in the hash vanish). I confirm this with the HLEN command on the hash in redis-cli.

    Find below the object I'm trying to store:

    public class Employee implements Serializable {

        private static final long serialVersionUID = 1L;

        int id;
        String name;
        String department;
        String address;

        public Employee(int id, String name, String department, String address) {
            this.id = id;
            this.name = name;
            this.department = department;
            this.address = address;
        }

        /* Getters and Setters */

        /* Generates the hash code used as the field (key) for each object */
        @Override
        public int hashCode() {
            final int prime = 31;
            int result = 1;
            result = prime * result + ((address == null) ? 0 : address.hashCode());
            result = prime * result + ((department == null) ? 0 : department.hashCode());
            result = prime * result + id;
            result = prime * result + ((name == null) ? 0 : name.hashCode());
            return result;
        }
    }
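    One thing worth noting about using hashCode() as the hash field: distinct employees can produce the same hash code, in which case HSET silently overwrites the earlier entry. A minimal, self-contained sketch of this (the string pair "Aa"/"BB" is a well-known Java String.hashCode() collision; the other Employee field values here are illustrative):

    ```java
    public class HashFieldCollision {
        // Same hash recipe as the Employee class above, taken as static inputs
        // so the collision can be shown without the full class.
        static int employeeHash(int id, String name, String department, String address) {
            final int prime = 31;
            int result = 1;
            result = prime * result + ((address == null) ? 0 : address.hashCode());
            result = prime * result + ((department == null) ? 0 : department.hashCode());
            result = prime * result + id;
            result = prime * result + ((name == null) ? 0 : name.hashCode());
            return result;
        }

        public static void main(String[] args) {
            // "Aa" and "BB" are distinct strings with identical String.hashCode(),
            // so these two different employees map to the same hash field.
            int h1 = employeeHash(1, "Aa", "IT", "SomeCompany");
            int h2 = employeeHash(1, "BB", "IT", "SomeCompany");
            System.out.println(h1 == h2);
        }
    }
    ```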
    

    Also, find below the code that stores into Redis (I'm using Jedis to interact with Redis):

        JedisPool jedisPool = new JedisPool(new JedisPoolConfig(), "localhost");

        /* Specifying the Redis hash name in byte array format */
        String tableName = "Employee_Details";
        byte[] key = tableName.getBytes();

        System.out.println("Starting....");
        try (Jedis jedis = jedisPool.getResource()) {
            for (int i = 0; i < 1000000; i++) {

                /* Converting the object to a byte array */
                Employee employee = new Employee(i, "Arun Jolly", "IT", "SomeCompany");
                ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
                try (ObjectOutputStream objectOutputStream = new ObjectOutputStream(byteArrayOutputStream)) {
                    objectOutputStream.writeObject(employee);
                }
                byte[] value = byteArrayOutputStream.toByteArray();

                /* Creating the field in byte array format using hashCode() */
                ByteBuffer buffer = ByteBuffer.allocate(Integer.BYTES);
                buffer.putInt(employee.hashCode());
                byte[] field = buffer.array();

                jedis.hset(key, field, value);
                System.out.println("Stored Employee " + i);
            }
        }
    

    Am I missing something?

    Does this mean that Redis does not swap out to disk once maxmemory is reached (is it trying to hold all the key values in memory)? Does it mean that I have to incrementally increase the maxmemory limit according to the increase in the number of key-value pairs that I might have to store?