HashMap implementation in Java. How does the bucket index calculation work?


Solution 1

It's not calculating the hash, it's calculating the bucket.

The expression h & (length-1) performs a bitwise AND of h with length-1, which acts as a bit mask, returning only the low-order bits of h. This makes it a super-fast variant of h % length.
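As a quick sanity check (a minimal sketch, not the JDK source, though indexFor matches the Java 7 method of that name), you can verify that the mask and the modulo agree for every non-negative hash when the length is a power of two:

```java
public class IndexForDemo {
    // Same expression as HashMap's indexFor(h, length)
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16; // must be a power of two for the trick to work
        for (int h = 0; h < 1000; h++) {
            // For non-negative h and power-of-two length, mask == modulo
            if (indexFor(h, length) != h % length) {
                throw new AssertionError("mismatch at h=" + h);
            }
        }
        System.out.println("h & (length-1) == h % length for all tested h");
    }
}
```

Note the equivalence only holds when length is a power of two; for other lengths (or negative h) the two expressions differ.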

Solution 2

The hash itself is calculated by the hashCode() method of the object you're trying to store.

What you see here is the calculation of the "bucket" in which to store the object, based on the hash h. Ideally, to avoid collisions, you would have as many buckets as the maximum achievable value of h, but that could demand too much memory. Therefore, you usually have a smaller number of buckets, at the risk of collisions.

If h is, say, 1000, but you only have 512 buckets in your underlying array, you need to know where to put the object. Usually, a mod operation on h would be enough, but mod is relatively slow. Given the internal property of HashMap that the underlying array always has a number of buckets equal to 2^n, Sun's engineers could use the expression h & (length-1): it does a bitwise AND with a number consisting of all 1s, effectively reading only the n lowest bits of the hash (which is the same as computing h mod 2^n, only much faster).

Example:

     hash h: 11 1110 1000  -- (1000 in decimal)
   length l: 10 0000 0000  -- ( 512 in decimal)
      (l-1): 01 1111 1111  -- ( 511 in decimal - it will always be all ONEs)
h AND (l-1): 01 1110 1000  -- ( 488 in decimal which is a result of 1000 mod 512)
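The worked example above can be reproduced directly; here is a small sketch using Integer.toBinaryString (the class name is just for illustration):

```java
public class MaskExample {
    public static void main(String[] args) {
        int h = 1000, length = 512;
        // Binary forms of the values from the example above
        System.out.println("hash h:      " + Integer.toBinaryString(h));          // 1111101000
        System.out.println("(l-1):       " + Integer.toBinaryString(length - 1)); // 111111111
        System.out.println("h AND (l-1): " + (h & (length - 1)));                 // 488
        System.out.println("h mod l:     " + (h % length));                       // 488
    }
}
```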

Solution 3

The above answer is very good, but I want to explain further why Java can use indexFor to create the index.

As an example, I have a HashMap like this (this test is on Java 7; I see Java 8 changed HashMap a lot, but I think this logic is still very relevant):

// Default number of buckets (table.length) after creation is 16 (HashMap#DEFAULT_INITIAL_CAPACITY)
HashMap<String, Integer> hashMap = new HashMap<>();
hashMap.put("A",1); // hash("A")=69, indexFor(hash,table.length)=69&(16-1) = 5
hashMap.put("B",2); // hash("B")=70, indexFor(hash,table.length)=70&(16-1) = 6
hashMap.put("P",3); // hash("P")=85, indexFor(hash,table.length)=85&(16-1) = 5
hashMap.put("A",4); // hash("A")=69, indexFor(hash,table.length)=69&(16-1) = 5
hashMap.put("r", 4);// hash("r")=117, indexFor(hash,table.length)=117&(16-1) = 5
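The hash and index values in the comments above can be reproduced without a debugger. This sketch copies the supplemental hash() and indexFor() methods from the Java 7 HashMap source (Java 8 replaced this scheme, so this is specific to Java 7):

```java
public class Java7HashDemo {
    // Copied from the Java 7 HashMap source: folds higher-order bits downward
    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    // Copied from the Java 7 HashMap source
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        for (String key : new String[] {"A", "B", "P", "r"}) {
            int h = hash(key.hashCode());
            System.out.println(key + ": hash=" + h + ", index=" + indexFor(h, 16));
        }
        // Prints: A: hash=69, index=5
        //         B: hash=70, index=6
        //         P: hash=85, index=5
        //         r: hash=117, index=5
    }
}
```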

You can see that the entries with keys "A", "P", and "r" all have the same index (= 5). And here is the debug result after executing the code above:

[debugger screenshot showing the table array, image omitted]

The table field shown in the image is declared in HashMap as:

public class HashMap<K, V> extends AbstractMap<K, V> implements Map<K, V>, Cloneable, Serializable {
    transient HashMap.Entry<K, V>[] table;
    ...
}

=> From this I can see:

  • If the index is different, the new entry is added to the table.
  • If the index is the same and the hash is the same, the new value updates the existing entry.
  • If the index is the same but the hash is different, the new entry points to the old entry (like a linked list). That is why Map.Entry has the field next:

static class Entry<K, V> implements java.util.Map.Entry<K, V> {
        ...
        HashMap.Entry<K, V> next;
}

You can verify this again by reading the code in HashMap.

By now, you might think that HashMap never needs to change its size (16), because indexFor() always returns a value <= 15, but that is not correct.
If you look at the HashMap code:

 if (this.size >= this.threshold ...) {
      this.resize(2 * this.table.length);

HashMap will resize the table (doubling the table length) when size >= threshold.

What is threshold? It is calculated as below:

static final int DEFAULT_INITIAL_CAPACITY = 16;
static final float DEFAULT_LOAD_FACTOR = 0.75F;
...
this.threshold = (int)Math.min((float)capacity * this.loadFactor, 1.07374182E9F); // if capacity (table.length) = 16 => threshold = 12

What is size? Of course, size here is not table.length.
Any time you put a new entry into the HashMap and the HashMap needs to create a new entry (note that HashMap doesn't create a new entry when the key already exists; it just overwrites the existing entry's value), size is incremented:

void createEntry(int hash, K key, V value, int bucketIndex) {
    ...
    ++this.size;
}
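Putting the constants together, here is a sketch (not the JDK code) of how the threshold and doubling interact, using the DEFAULT_INITIAL_CAPACITY and DEFAULT_LOAD_FACTOR values quoted above:

```java
public class ResizeDemo {
    public static void main(String[] args) {
        int capacity = 16;        // DEFAULT_INITIAL_CAPACITY
        float loadFactor = 0.75F; // DEFAULT_LOAD_FACTOR
        int threshold = (int) (capacity * loadFactor);
        System.out.println("threshold = " + threshold); // 12

        // Simulate adding distinct entries: adding the 13th entry
        // finds size (12) >= threshold (12) and doubles the capacity
        int size = 0;
        for (int i = 0; i < 13; i++) {
            if (size >= threshold) {
                capacity *= 2; // resize(2 * table.length)
                threshold = (int) (capacity * loadFactor);
                System.out.println("resized: capacity=" + capacity
                        + ", threshold=" + threshold);
            }
            size++;
        }
    }
}
```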

Hope it helps.

Solution 4

It is calculating the bucket of the hash map where the entry (key-value pair) will be stored. The bucket index is the hash value modulo the number of buckets.

A hash map consists of buckets; objects will be placed in these buckets based on the bucket id.

Any number of objects can fall into the same bucket, based on their hash code modulo the number of buckets. This is called a 'collision'.

If many objects fall into the same bucket, their equals() method will be called during a lookup to disambiguate them.

The number of collisions is inversely proportional to the number of buckets.
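That inverse relationship can be observed with a small simulation (random hashes and a hypothetical countCollisions helper, not part of HashMap itself), assuming power-of-two bucket counts so the masking trick applies:

```java
import java.util.Random;

public class CollisionDemo {
    // Counts how many insertions land in an already-occupied bucket
    static int countCollisions(int[] hashes, int buckets) {
        int[] counts = new int[buckets];
        int collisions = 0;
        for (int h : hashes) {
            int index = h & (buckets - 1); // buckets is a power of two
            if (counts[index]++ > 0) {
                collisions++;
            }
        }
        return collisions;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int[] hashes = new int[1000];
        for (int i = 0; i < hashes.length; i++) {
            hashes[i] = rnd.nextInt(1 << 30);
        }

        // More buckets => fewer collisions for the same set of hashes
        for (int buckets = 16; buckets <= 4096; buckets *= 4) {
            System.out.println(buckets + " buckets: "
                    + countCollisions(hashes, buckets) + " collisions");
        }
    }
}
```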

Author: gnreddy

Updated on July 28, 2022

Comments

  • gnreddy
    gnreddy almost 2 years

    I am looking at the implementation of HashMap in Java and am stuck at one point.
    How is the indexFor function calculated?

    static int indexFor(int h, int length) {
       return h & (length-1);
    }
    

    Thanks

  • gnreddy
    gnreddy about 12 years
    Can you please explain this calculation here ?
  • Petr Janeček
    Petr Janeček about 12 years
    Does it make sense now, or should I elaborate more on the internals?
  • Louis Wasserman
    Louis Wasserman about 12 years
    Very well explained. I'm impressed.
  • LarsH
    LarsH almost 12 years
    Does this assume that length is a power of 2?
  • Bohemian
    Bohemian almost 12 years
    @LarsH Well, it would be far better if it was a power of 2, then you'd get a clean chop off of the high-order bits. As it happens, the implementation of HashMap starts with size 16 and does indeed multiply by two when resizing. It would still work if not a power of two, but you would want as many bits "on" as possible for length -1 to balance the spread between buckets
  • Paul Boddington
    Paul Boddington about 9 years
    Does this mean that a hash bucket could contain keys with different hashCodes if the lowest 9 or so bits agree, but the higher bits are different?
  • Petr Janeček
    Petr Janeček about 9 years
@pbabcdefp Yes, that's correct. That's one of the main reasons why a good hash function multiplies its result by a prime number: it tries to achieve very different hashes even when the values differ only slightly.
  • Paul Boddington
    Paul Boddington about 9 years
    @Slanec Thank you. This is one of the clearest answers I've seen on SO. (+1).
  • Max Gabderakhmanov
    Max Gabderakhmanov about 5 years
    The best explanation of the bucket calculation on the web. Thanks a lot!
  • LexSav
    LexSav almost 4 years
    So does it mean that in HashMap there is always 16 buckets (initial capacity)? Because if it is h % length, then no more buckets can be created.
  • Bohemian
    Bohemian almost 4 years
@lex I don't understand your question, but the capacity (the quantity of buckets), which is stored in length, doubles if the buckets are "too full". The algorithm relies on length always being a power of 2. 16 was chosen using the Goldilocks principle (not too big and not too small).
  • KJ Sudarshan
    KJ Sudarshan about 3 years
@PetrJaneček Would you mind explaining this? If L != 2^n, then L-1 will not have all the bits set to 1, right? Say L = 511 and not 512, then what?
  • Petr Janeček
    Petr Janeček about 3 years
    @KJ Sudarshan Exactly correct. And in that case the trick with & wouldn't work, but you could still use a normal modulo, %. The binary trickery is simply a performance trick.
  • uncle bob
    uncle bob almost 3 years
@Bohemian When I try indexFor(18, 7) it returns 2, but 18 % 7 returns 4, so why would you say it's a super-fast variant of h % length?
  • Bohemian
    Bohemian almost 3 years
    @unclebob as per my previous comment, the algorithm relies on length being a power of 2 (which is always the case for HashMap). 7 is not a power of 2.
  • Humpity
    Humpity over 2 years
I've always thought that each bucket held items with the same hash code. There are a lot of articles out there that say this. But what you are saying here is that the hash code gets mapped further into buckets holding mixed hash codes. This makes more sense storage-wise.