Over the past year and a half I've had it drummed into me that fast hashing is bad and slow hashing is good. If this is so, why have they optimised this new hashing algorithm to be as fast as possible?
Password hashes and general-purpose crypto hashes are not the same thing. If it helps, try using the technical term for the kinds of functions cryptography offers for password hashing: key derivation functions (KDFs).
To expand on what has already been noted: a good place for fast hashing is when you're hashing millions of objects. For example, if I wanted to parse thousands of feeds and store items using a unique hash as their key, I'd want a hash that's collision resistant (for the uniqueness part) and fast (for the millions of objects). With password hashing (KDFs, as tptacek noted), the collision resistance is still useful, but we want the function to be fairly slow so that brute-force attacks are impractical.
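A rough sketch of that first use case, using SHA-256 as the fast, collision-resistant hash (the feed items and URLs here are made up for illustration):

```python
import hashlib

# Hypothetical parsed feed items; in practice these would come from
# thousands of real feeds.
items = [
    {"title": "Post A", "url": "http://example.com/a"},
    {"title": "Post B", "url": "http://example.com/b"},
    {"title": "Post A", "url": "http://example.com/a"},  # duplicate
]

store = {}
for item in items:
    # Hashing the canonical content gives a stable unique key: the same
    # item always maps to the same key, so duplicates dedupe for free.
    key = hashlib.sha256(item["url"].encode("utf-8")).hexdigest()
    store[key] = item

# Three items in, two unique keys out: the duplicate collapsed.
```

Here speed is the whole point: you want to run this over millions of items as cheaply as possible, which is exactly the property you do not want for passwords.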
Fast hashing is bad for storing one-way functions of passwords because it lets an attacker make many cheap guesses about the original value from the transformed value; hence all the noise about using slow "key derivation functions" instead. For all other common use cases, faster is better, all else equal.
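A minimal sketch of the contrast, using Python's standard library (the iteration count is an illustrative choice, not a figure from this thread):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# Fast general-purpose hash: one SHA-256 call. An attacker with stolen
# hashes can test candidate passwords at enormous rates.
fast = hashlib.sha256(salt + password).hexdigest()

# Deliberately slow KDF: PBKDF2 makes each guess cost hundreds of
# thousands of underlying hash invocations instead of one.
slow = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=600_000)
```

Both produce a fixed-size digest; the difference is purely how expensive each guess is for an attacker, which is why the same "faster is better" instinct points in opposite directions for the two jobs.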
I'm not sure how reduced memory requirements are a benefit to encryption. Are there really any low-end systems still in use today that actually have issues with memory usage? Even $150 notebooks come with 1 GB of RAM. This seems more like it would help save RAM on interception devices like the Narus device, or the huge datacenters owned by the NSA, which have a huge issue with storing all the data required to intercept and decrypt eavesdropped communications reliably.
I think of reduced memory requirements as a benefit to every application, including encryption. That's because nearly every application I use competes for memory; I also use memory as a general storage medium for stuff for which I want fast I/O and/or don't need to save permanently (e.g. mfs or tmpfs mounts). I think of memory as a precious resource.
The issue is with the processor's cache memory, which is small, typically 64 KiB or so. (The TLB is also important.) Even a few hundred bytes of savings can give an important improvement in speed and power consumption.
It's becoming more common. By the way, if you use OS X, your utilities report sizes in megabytes (powers of 1000), not mebibytes. GNU utilities switched to KiB, MiB, etc. (powers of 1024).
I agree that it looks odd. However, especially in applications like cryptography, I think that removing the ambiguity of "megabyte" (i.e. do we mean 10^6 or 2^20 bytes?) is worth the introduction of a new term.
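The ambiguity is easy to quantify:

```python
MB = 10**6    # SI megabyte: 1,000,000 bytes
MiB = 2**20   # mebibyte: 1,048,576 bytes

# Nearly a 5% discrepancy at the mega scale, and it grows with
# each larger prefix (GB vs GiB, TB vs TiB, ...).
print(MiB - MB)         # prints 48576
print((MiB - MB) / MB)  # prints 0.048576
```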
You can't just decide that people aren't supposed to say "megabyte" anymore.