Calculating a hash code for a large file in parallel

Solution 1

Block-based hashing (your Method 2) is a well-known technique that is used in practice, for example in the hash lists and hash trees (Merkle trees) used by peer-to-peer file-sharing protocols.

Just like what you're doing, these methods take the list of block hashes and hash it again, down to a single short hash. Since this is a well-established practice, I would assume that it is as secure as sequential hashing.
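
For illustration, here is a minimal sketch of that construction. It assumes SHA-256, 1 MB blocks, eight worker threads, and Python's standard hashlib and concurrent.futures modules; none of these choices are mandated by the question, and the function and variable names are made up for this example.

```python
# Minimal sketch of the "hash the block hashes" construction (the OP's Method 2).
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 1024 * 1024  # 1 MB blocks, as in the question


def hash_block(block: bytes) -> bytes:
    """Return the raw 32-byte SHA-256 digest of one block."""
    return hashlib.sha256(block).digest()


def parallel_file_hash(path: str, workers: int = 8) -> str:
    """Hash the file's blocks in parallel, then hash the concatenated digests."""
    top = hashlib.sha256()  # the final ("top") hash over the concatenated block digests
    # CPython's OpenSSL-backed hashes release the GIL for large buffers, so a
    # thread pool is enough to keep several cores busy.
    with open(path, "rb") as f, ThreadPoolExecutor(max_workers=workers) as pool:
        while True:
            # Read a bounded batch of blocks so a multi-gigabyte file is never
            # held in memory all at once.
            batch = [b for b in (f.read(BLOCK_SIZE) for _ in range(workers * 4)) if b]
            if not batch:
                break
            for digest in pool.map(hash_block, batch):  # map preserves block order
                top.update(digest)  # same as hashing the concatenated digests
    return top.hexdigest()
```

Only the final top hash is stored or compared; the per-block digests are intermediate values, exactly as noted in the comments below.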

Solution 2

Some modern hash designs allow the hash to be computed in parallel; see "An Efficient Parallel Algorithm for Skein Hash Functions". If you are willing to use a new (and hence less thoroughly tested) hash algorithm, this may give you the speed increase you want on a multi-processor machine.

Skein has reached the final stage of the NIST SHA-3 competition, so it is not completely untested.


Comments

  • Michael Goldshteyn almost 2 years

    I would like to improve the performance of hashing large files, say for example in the tens of gigabytes in size.

    Normally, you sequentially hash the bytes of the file using a hash function (say, SHA-256, although I will most likely use Skein, so hashing will be slower when compared to the time it takes to read the file from a [fast] SSD). Let's call this Method 1.

    The idea is to hash multiple 1 MB blocks of the file in parallel on 8 CPUs and then hash the concatenated hashes into a single final hash. Let's call this Method 2.

    [Figure: the file is split into 1 MB blocks, the blocks are hashed in parallel, and the concatenated block hashes are hashed again to produce the single final hash.]

    I would like to know if this idea is sound and how much "security" is lost (in terms of collisions being more probable) vs doing a single hash over the span of the entire file.

    For example:

    Let's use the SHA-256 variant of SHA-2 and set the file size to 2^35 = 34,359,738,368 bytes (32 GiB). Therefore, using a simple single pass (Method 1), I would get a 256-bit hash for the entire file.

    Compare this with:

    Using the parallel hashing (i.e., Method 2), I would break the file into 32,768 blocks of 1 MB, hash those blocks using SHA-256 into 32,768 hashes of 256 bits (32 bytes), concatenate the hashes and do a final hash of the resultant concatenated 1,048,576 byte data set to get my final 256-bit hash for the entire file.

    Is Method 2 any less secure than Method 1, in terms of collisions being more possible and/or probable? Perhaps I should rephrase the question as: does Method 2 make it easier for an attacker to create a file that hashes to the same value as the original file, except of course for the trivial fact that a brute-force attack would be cheaper, since the hash can be calculated in parallel on N CPUs?

    Update: I have just discovered that my construction in Method 2 is very similar to the notion of a hash list. However, the Wikipedia article linked in the preceding sentence does not go into detail about whether a hash list is more or less prone to collisions than Method 1, a plain old hash over the entire file, when only the top hash of the hash list is used.

  • Michael Goldshteyn almost 13 years
    There would not be more hashes using Method 2, since only the final hash is shared/stored, not the intermediate ones.
  • Nayuki almost 13 years
    The fact that hash trees are named after the famous cryptographer Ralph Merkle may lend some authority to their security.
  • Michael Goldshteyn almost 13 years
    The question is not whether Method 2 is secure; it surely is. The question is whether or not Method 2 is strictly less secure than Method 1 and, even if only theoretically, by what factor. After all, the gain in performance must be balanced against the loss in security, if any, to be justifiable.
  • Nayuki almost 13 years
    My intuition says that the loss of security is negligible. I acknowledge that because the block size is smaller (1 MB instead of whole file), it might be easier to find a collision or preimage. But at the same time, the block hashes are concealed by the final hash. So I don't know how much the security is affected.
  • Paŭlo Ebermann almost 13 years
    Still, the tree hash version is not really what is checked in the competition, I think.
  • zaph about 8 years
    Note that MD5 is no longer a good hash algorithm, and on some platforms SHA-256 is several times faster. "While a 1 MB block would allow an attacker to try 50 per second": what might be the time to a hash collision at that rate? Note: on a lousy iPhone I can perform ~1000 1 MB SHA-256 hashes per second.
  • J.J about 8 years
    Well, there is a big difference between hashing 1 MB that was created in RAM and 1 MB read off the disk. But I agree, newer CPU op-code optimizations exist for MD5 and SHA-X. In my hands, however, MD5 is always faster than SHA-256. Perhaps I'm not using the latest software.
  • zaph about 8 years
    "While a 1 MB block would allow an attacker to try 50 per second" in RAM: what might be the time to a hash collision at that rate? Seconds, hours, days, years, eons? I ask this because you propose that a collision is a potential problem, so some metric is needed. Note: MB is bytes, Mb is bits.
  • J.J about 8 years
    Yeah, I was being lazy with the notation, but it doesn't really matter either way, because my point is the same. Calculating a hash is 10x quicker when the input data is 10x smaller. Since the OP proposes going from tens of gigabytes down to 1 MB blocks, this is several orders of magnitude quicker, which is a problem. I don't know what that means practically, since it depends on the implementation, the size of the block, the hash you want to collide with, how you want to permute the data, how big the CPU L2 cache or GPU onboard memory is, etc. But it's definitely going to cause a problem.
  • zaph about 8 years
    tl;dr: collisions are not going to be a problem. Approximate time to finding a collision to a single MD5 hash at 50 attempts/sec: (2^127) / (50/sec) = 3.4E36 sec. Converting to a geologic time scale: 100,000,000,000,000,000,000 eons. Faster computers do not really help much. But even this will probably fail if the length of the hash input is constrained. Of course, one can always get lucky. (OK, this math is fuzzy, but the time is just improbable in any reasonable or unreasonable timeframe. Anyone have the correct math? A rough arithmetic check appears after the comments.)
  • YoniXw almost 8 years
    From wiki: The Merkle hash root does not indicate the tree depth, enabling a second-preimage attack in which an attacker creates a document other than the original that has the same Merkle hash root.
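
For reference, here is a quick arithmetic check of the figures in zaph's comment above. This is only a sketch: the 50 hashes/second rate and the 2^127 work factor are taken from the comment, while the 2^64 birthday bound for a 128-bit digest is added here for comparison.

```python
# Rough check of the "time at 50 hashes/second" figures discussed in the comments.
# 2^127 is roughly the expected work for brute-forcing a (second) preimage of a
# 128-bit hash such as MD5; a birthday collision needs only about 2^64 attempts.
SECONDS_PER_YEAR = 365.25 * 24 * 3600       # ~3.16e7 seconds

rate = 50                                   # hashes per second, from the comment

preimage_work = 2 ** 127                    # ~1.7e38 attempts
collision_work = 2 ** 64                    # birthday bound for a 128-bit digest

print(preimage_work / rate)                      # ~3.4e36 seconds, as in the comment
print(preimage_work / rate / SECONDS_PER_YEAR)   # ~1.1e29 years
print(collision_work / rate / SECONDS_PER_YEAR)  # ~1.2e10 years, on the order of the age of the universe
```

Either way, at 50 attempts per second the timescales are far beyond practical reach; the concern raised in the comments is only that smaller blocks make each attempt cheaper.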