HSET vs SET memory usage?


Small hash objects are encoded as ziplists as long as they stay below the thresholds set by the hash-max-ziplist-entries and hash-max-ziplist-value parameters. A ziplist is a simple, flat serialization of the data rather than a general-purpose dynamic data structure.
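As a quick sanity check (a sketch assuming a locally running Redis with default settings; the key name small:hash is made up, and note that on Redis 7.0+ the encoding is reported as listpack and the parameters are named hash-max-listpack-*), you can inspect which encoding a given hash uses:

```shell
redis-cli HSET small:hash field1 value1
redis-cli OBJECT ENCODING small:hash          # "ziplist" while the hash is small
redis-cli CONFIG GET hash-max-ziplist-entries # default 128: max fields before conversion
redis-cli CONFIG GET hash-max-ziplist-value   # default 64: max bytes per field/value
```

Once either threshold is exceeded, the hash is converted to a real hash table and is never converted back.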

A ziplist is defined as follows (extracted from Redis source code):

/* The ziplist is a specially encoded dually linked list that is designed
* to be very memory efficient. It stores both strings and integer values,
* where integers are encoded as actual integers instead of a series of
* characters. It allows push and pop operations on either side of the list
* in O(1) time. However, because every operation requires a reallocation of
* the memory used by the ziplist, the actual complexity is related to the
* amount of memory used by the ziplist.
*
* ----------------------------------------------------------------------------
*
* ZIPLIST OVERALL LAYOUT:
* The general layout of the ziplist is as follows:
* <zlbytes><zltail><zllen><entry><entry><zlend>
*
* <zlbytes> is an unsigned integer to hold the number of bytes that the
* ziplist occupies. This value needs to be stored to be able to resize the
* entire structure without the need to traverse it first.
*
* <zltail> is the offset to the last entry in the list. This allows a pop
* operation on the far side of the list without the need for full traversal.
*
* <zllen> is the number of entries. When this value is larger than 2**16-2,
* we need to traverse the entire list to know how many items it holds.
*
* <zlend> is a single byte special value, equal to 255, which indicates the
* end of the list.
*
* ZIPLIST ENTRIES:
* Every entry in the ziplist is prefixed by a header that contains two pieces
* of information. First, the length of the previous entry is stored to be
* able to traverse the list from back to front. Second, the encoding with an
* optional string length of the entry itself is stored.
*
* The length of the previous entry is encoded in the following way:
* If this length is smaller than 254 bytes, it will only consume a single
* byte that takes the length as value. When the length is greater than or
* equal to 254, it will consume 5 bytes. The first byte is set to 254 to
* indicate a larger value is following. The remaining 4 bytes take the
* length of the previous entry as value.
*
* The other header field of the entry itself depends on the contents of the
* entry. When the entry is a string, the first 2 bits of this header will hold
* the type of encoding used to store the length of the string, followed by the
* actual length of the string. When the entry is an integer the first 2 bits
* are both set to 1. The following 2 bits are used to specify what kind of
* integer will be stored after this header. An overview of the different
* types and encodings is as follows:
*
* |00pppppp| - 1 byte
*      String value with length less than or equal to 63 bytes (6 bits).
* |01pppppp|qqqqqqqq| - 2 bytes
*      String value with length less than or equal to 16383 bytes (14 bits).
* |10______|qqqqqqqq|rrrrrrrr|ssssssss|tttttttt| - 5 bytes
*      String value with length greater than or equal to 16384 bytes.
* |11000000| - 1 byte
*      Integer encoded as int16_t (2 bytes).
* |11010000| - 1 byte
*      Integer encoded as int32_t (4 bytes).
* |11100000| - 1 byte
*      Integer encoded as int64_t (8 bytes).
* |11110000| - 1 byte
*      Integer encoded as 24 bit signed (3 bytes).
* |11111110| - 1 byte
*      Integer encoded as 8 bit signed (1 byte).
* |1111xxxx| - (with xxxx between 0000 and 1101) immediate 4 bit integer.
*      Unsigned integer from 0 to 12. The encoded value is actually from
*      1 to 13 because 0000 and 1111 can not be used, so 1 should be
*      subtracted from the encoded 4 bit value to obtain the right value.
* |11111111| - End of ziplist.
*
* All the integers are represented in little endian byte order.
*/
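To make the header rules above concrete, here is a small sketch (simplified for illustration, not the actual Redis code) of how the "previous entry length" field would be encoded:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Sketch (not the real Redis implementation): encode the "previous entry
 * length" field following the rules quoted above. Returns the number of
 * bytes written into buf (1 or 5). */
static size_t encode_prevlen(uint8_t *buf, uint32_t prevlen) {
    if (prevlen < 254) {
        buf[0] = (uint8_t)prevlen;   /* a single byte holds the length */
        return 1;
    }
    buf[0] = 254;                    /* marker: a 4-byte length follows */
    buf[1] = prevlen & 0xff;         /* little endian, as in the ziplist */
    buf[2] = (prevlen >> 8) & 0xff;
    buf[3] = (prevlen >> 16) & 0xff;
    buf[4] = (prevlen >> 24) & 0xff;
    return 5;
}
```

This variable-width header is a large part of the memory savings: short entries pay only 1 byte of prev-length overhead instead of a fixed-size pointer.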

Each item of the hash object is represented as a key/value pair in the ziplist (2 successive entries). Both keys and values can be stored either as simple strings or as integers. This format is much more compact in memory because it saves the many pointers (8 bytes each on 64-bit builds) that are required to implement a dynamic data structure (like a real hash table).

The downside is that HSET/HGET operations are actually O(N) when applied to a ziplist. That's why the ziplist must be kept small. When the ziplist data fits in the L1 CPU cache, the corresponding algorithms are fast enough despite their linear complexity.
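To see why lookups are O(N), consider this toy illustration (a deliberately simplified flat encoding, not the actual ziplist format): successive length-prefixed key and value entries are stored back to back, so finding a field means scanning from the start.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Toy flat encoding (not the real ziplist): entries are stored back to back
 * as key,value,key,value,... where each entry is a 1-byte length followed
 * by that many bytes (lengths < 254 only, for simplicity). */
static const uint8_t *flat_hget(const uint8_t *buf, size_t buflen,
                                const char *field) {
    size_t flen = strlen(field), pos = 0;
    while (pos < buflen) {
        size_t klen = buf[pos];              /* key entry */
        const uint8_t *key = buf + pos + 1;
        pos += 1 + klen;
        size_t vlen = buf[pos];              /* value entry follows the key */
        const uint8_t *val = buf + pos + 1;
        pos += 1 + vlen;
        if (klen == flen && memcmp(key, field, flen) == 0)
            return val;                      /* found: pointer to value bytes */
    }
    return NULL;                             /* scanned everything: O(N) */
}
```

Every HGET on a ziplist-encoded hash pays a scan like this, which is cheap only while the whole structure fits in cache.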

You may want to refer to the following links for more information:

Redis 10x more memory usage than data

Redis Data Structure Space Requirements

These answers refer to other data structures (like sets, lists, or sorted sets), but it is exactly the same concept.

Author: Sripathi Krishnan (CTO at HashedIn Technologies, GitHub @srithedabbler)

Updated on September 16, 2022

Comments

  • Sripathi Krishnan, over 1 year ago:

    I was reading this article which mentions that storing 1 million keys in Redis will use 17 GB of memory. However, when switching to hashes, chunking them at 1k each (e.g. HSET "mediabucket:1155" "1155315" "939") allows them to store 1M in 5 GB, which is a pretty large saving.

    I read redis memory-optimization but I don't quite understand the difference. It says HGETs are not quite O(1) but close enough, and mentions more CPU usage when using HSETs. I don't understand why there would be more CPU usage (sure, you are trading time for space, but how/what?). It mentions 'encoding' but not how they encode it.

    It also mentions 'only string', but I have no idea what 'only string' means. Does it mean the hash field? I don't see anything about it in HSET. What exactly would be encoded, and why would the encoding be more efficient than using a SET?

    How is it possible that HSET "mediabucket:1155" "1155315" "939" is more efficient than SET "mediabucket:1155315" "939"? There is less data in the SET version ("1155315" and "1155" are used rather than just "1155315"). I personally would try using binary keys; however, I don't think that has to do with why HSETs are more efficient.

    EDIT:

    Cross-posted on the redis-db mailing list as well: https://groups.google.com/d/topic/redis-db/90K3UqciAx0/discussion