top | item 38851858


hoytech | 2 years ago

This is not necessarily as secure as you might expect.

See Joux's "Multicollisions in Iterated Hash Functions. Application to Cascaded Constructions"

https://link.springer.com/content/pdf/10.1007/978-3-540-2862...


adrian_b | 2 years ago

That attack is one of the reasons why one-way hash functions based on the Merkle–Damgård structure, like SHA-2 and the older hashes, are considered obsolete for hashing extremely large files.

The newer one-way hash structures, like those used by SHA-3 or by BLAKE/BLAKE2/BLAKE3, are immune to this kind of attack, because they use either a sponge automaton or a structure similar to Merkle–Damgård but augmented with a counter input for each hashed block. The counter is equivalent to using a distinct hashing function for every data block, instead of iterating the same hashing function, which is what the Joux attack you mention exploits.
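To make the distinction concrete, here is a toy Python sketch (not any real hash function; the compression function is just SHA-256 used as a stand-in) contrasting plain Merkle–Damgård iteration with a counter-augmented variant in the spirit of HAIFA/BLAKE:

```python
import hashlib

def compress(state: bytes, block: bytes) -> bytes:
    # Toy compression function built from SHA-256; a stand-in only.
    return hashlib.sha256(state + block).digest()

def md_hash(blocks):
    # Plain Merkle-Damgaard: the SAME compression function is iterated,
    # so an internal-state collision at block i stays a collision for
    # every possible suffix -- the property Joux's attack exploits.
    state = b"\x00" * 32
    for block in blocks:
        state = compress(state, block)
    return state

def counter_hash(blocks):
    # Counter-augmented variant: mixing the block index into every call
    # makes each position behave like a distinct compression function,
    # so colliding internal states no longer chain for free.
    state = b"\x00" * 32
    for i, block in enumerate(blocks):
        state = compress(state + i.to_bytes(8, "big"), block)
    return state
```

The sketch only illustrates the structural difference; the real constructions also involve padding, length encoding, and finalization flags.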

hoytech | 2 years ago

Thanks! Does this attack actually depend on extremely large files in any way? My understanding is that this method can always find fixed-size colliding messages in a collision attack, and is unaffected by the pre-image size in a pre-image attack.

That makes sense about newer functions being immune. However, note that both of the cascaded hashes must be immune to prevent this attack (as described in the paper).

dfox | 2 years ago

The paper refers to the case where you build an even more secure hash function by concatenating the outputs of multiple presumably secure hash functions. The resulting construction will still be only as secure as the strongest component hash function, which is what you want for long-term archival hashes/signatures (the obvious question is exactly which asymmetric construction you use to sign the resulting hash while preserving this property).
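The construction in question is just digest concatenation. A minimal sketch, assuming a hypothetical choice of SHA-256 and SHA3-256 as the two components (Python's hashlib provides both):

```python
import hashlib

def cascade(data: bytes) -> bytes:
    # Concatenate the outputs of two independently designed hash
    # functions. Despite the 512-bit combined output, Joux's result
    # says collision resistance is only about that of the STRONGER
    # component, not the sum of the output lengths.
    return hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()
```

This still has value for archival use: a collision for the cascade requires breaking both components, so it survives as long as the stronger one does.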

Of note is that when you look at the multiple parallel executions of an iterated hash function at the level of the round structure inside its compression function, it becomes quite obvious that the result will not have the security level of 2^(m+n+...) one would expect from the output length, but somewhere around 2^m + 2^n (≈ 2^m if m > n), since there is absolutely no mixing of state between the parallel executions.
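A back-of-the-envelope comparison of the attack costs, with hypothetical output sizes n = 128 and m = 160 chosen purely for illustration, following Joux's cost estimate of roughly t·2^(n/2) compression calls for a 2^t-multicollision:

```python
import math

# Hypothetical output sizes (bits) of the two cascaded hash functions.
n, m = 128, 160

# Naive expectation: a birthday collision on the (n+m)-bit
# concatenation should cost about 2^((n+m)/2) hash calls.
naive = (n + m) / 2  # log2 of the naive work estimate

# Joux: build a 2^(m/2)-multicollision in the n-bit iterated hash
# (about (m/2) * 2^(n/2) calls), then birthday-search that set under
# the m-bit hash (about 2^(m/2) calls).
joux = math.log2((m / 2) * 2 ** (n / 2) + 2 ** (m / 2))
```

With these parameters the Joux attack runs in roughly 2^80 work instead of the naive 2^144, i.e. the cascade behaves like its stronger component alone, matching the 2^m + 2^n point above.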

hoytech | 2 years ago

I linked to the paper because it wasn't clear if OP's goal was to increase the security with the cascaded hashes or not, and also because I personally found this result very surprising and interesting when I first learned about it.

If you have a moment can you please elaborate a little more on your second paragraph? Are you describing applying a similar method inside the compression function of the hash function? Any hash function? Where does the parallelism come in? Thank you!