
NSA in P/poly: The Power of Precomputation

286 points | evanb | 11 years ago | scottaaronson.com

34 comments

[+] tptacek|11 years ago|reply
> But maybe the right lesson to draw is mod-p groups and elliptic-curve groups both seem to be pretty good for cryptography, but the mod-p groups are way less good if everyone is using the same few prime numbers p, and the elliptic-curve groups are way less good if everyone is using the same few elliptic curves.

I don't know about the mod-p analysis, but that would be an... idiosyncratic... conclusion to draw about modern elliptic curve cryptography. The trend seems very much in the opposite direction, towards standardization and reliance on a few very carefully designed curves (optimized for performance on modern CPUs, and to avoid implementation pitfalls that give rise to vulnerabilities with older curves).

In fact, drawing any kind of conclusion about the diversity of fields in use seems a little strange. Yes, if you're going to use a 1024-bit prime group, you definitely want to be using one very few others use. But just don't use 1024-bit prime groups. It doesn't much matter how many people use your (properly selected) 2048-bit group; no amount of precomputation puts it in reach of attackers.

[+] pdkl95|11 years ago|reply
> towards standardization and reliance on a few very carefully designed curves

That was the subject of djb's recent talk[1]. He brings up the very good point that we don't know that a curve wasn't designed to have some weakness that its creator has kept hidden. This includes the case of a "random", "not biased" generation method, which only moves the attack point back one step.

[1] https://news.ycombinator.com/item?id=9568659

[+] bostik|11 years ago|reply
I would say there's a balancing problem.

The best practices change over time, but old literature, and especially old online resources, have lots of inertia. We can call it the stagnating force of cargo culting, or the dark side of PageRank. The literature I remember reading clearly stated that in DH the security lies in the data used for key exchange and not the parameters. The use of well-known parameters is in fact encouraged. You state the same:

> Yes, if you're going to use a 1024-bit prime group, you definitely want to be using one very few others use. [...] It doesn't much matter how many people use your (properly selected) 2048-bit group; no amount of precomputation puts it in reach of attackers.

Using standard, known-secure parameters is clearly a winner here. So: the best practice is to use provided DH parameters. What do people deploying applied cryptography do? They use the parameters available from their chosen crypto library: parameters that were set as defaults years earlier.

Around the year 2000, it was computationally (relatively) expensive to generate your own DH group. And, because using your own group makes it harder to bootstrap trust, it would have added complexity to an already complex mechanism. Hence, the best practice of using a known DH group made even more sense.

The fact that the Logjam mitigation strategy includes the recommendation to generate your own DH group flies in the face of established DH best practices. (Can we just assume that modern crypto libraries will catch a faulty RNG and refuse to expose clearly vulnerable parameters? For the sake of brevity, at least?) Those deploying applied crypto in the field are now faced with conflicting information. The established and easily discovered (PageRank[tm]) literature states that the best practice is to use known DH groups. The same literature also states that for compatibility reasons, using >1024-bit groups is not recommended. Now contrast this with the Logjam paper: those who require compatibility are suddenly in need of privately generated DH parameters. So suddenly the best practice is to ignore established best practice?

That's a nasty conflict.

Of course the sane and secure route was to simply make sure all servers were using ECDHE. But if you need to serve clients that are using ancient browsers, you probably still have to accept the known 1024-bit DH groups too.
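For what the "generate your own group" mitigation looks like in practice, here is a minimal sketch using the third-party Python `cryptography` package (an assumption on my part, not something from the thread). The key size is dropped to 512 bits purely to keep the demo fast; a real deployment should use 2048 bits or more:

```python
# Sketch: generating a private DH group instead of reusing a library default.
# Requires the third-party "cryptography" package (pip install cryptography).
# key_size=512 is only to keep this demo quick; use >= 2048 in production.
from cryptography.hazmat.primitives.asymmetric import dh

params = dh.generate_parameters(generator=2, key_size=512)

# Two parties derive a shared secret from the freshly generated group.
alice = params.generate_private_key()
bob = params.generate_private_key()
shared_a = alice.exchange(bob.public_key())
shared_b = bob.exchange(alice.public_key())
assert shared_a == shared_b

# The prime is unique to this run, so no precomputed table applies to it.
p = params.parameter_numbers().p
assert p.bit_length() == 512
```

The trust-bootstrapping concern above still applies: nothing in this sketch proves to a peer that the freshly generated group was honestly made.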

[+] floody-berry|11 years ago|reply
Yes, how does he go from Dual_EC_DRBG being backdoored to the use of the same few elliptic curves being an issue? It's not accurate or productive to lump e.g. Curve25519 or Goldilocks448 in with Dual_EC_DRBG and issues with 1024-bit DH primes.

I feel like I'm misinterpreting what he meant, but can't see what other point he could be making.

[+] harshreality|11 years ago|reply
What are the motives for retaining the 1024 bit static dhparams? Wasn't it older Java that had trouble with larger dhparams? Nevertheless, couldn't this be mitigated somewhat by having openssl[1] dynamically generate dhparams? Isn't that what they do for ECDHE? 1024 bit DHE would still be crackable by the NSA, but with dynamic params it would take much longer since they wouldn't get to precompute anything.

[1] Do schannel and other ssl implementations use static dh parameters by default, like openssl does, or do they dynamically generate them? And do they default to 1024 bit or are they smarter about it and default to match the RSA key size?

[+] mkempe|11 years ago|reply
Whenever the NSA deliberately lets people and companies in the USA continue to use a broken system for "secure" communications or transactions, they are putting all of them at risk instead of helping protect them -- in direct contradiction with the NSA's official charter.

Many other nations have the financial and technical ability to perform the same attacks, and the inclination to do so. And soon, other entities will, too.

[+] mortehu|11 years ago|reply
Am I correct in assuming that if I have generated a 2048 bit dhparam.pem file which I pass to my web server, I am not using one of those "same few primes"?
[+] meowface|11 years ago|reply
Yes. And even if you did happen to be sharing a 2048-bit prime number with every other server in the world, it would still be essentially impossible for the NSA to crack in our lifetime (without massive advancements in quantum computing, at least). 1024-bit is feasible for a powerful nation-state; 2048-bit is not.
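That gap can be sanity-checked with the standard heuristic cost formula for the general number field sieve, L_p[1/3, c] with c ≈ 1.923. This is a crude asymptotic estimate (constants and o(1) terms ignored), not a precise cost model:

```python
# Rough GNFS cost heuristic: L_p[1/3, c] = exp(c * (ln p)^(1/3) * (ln ln p)^(2/3)),
# c ~ 1.923. A back-of-the-envelope asymptotic estimate only.
import math

def gnfs_bits(prime_bits, c=1.923):
    """Approximate attack cost, in bits of work, for a prime of the given size."""
    ln_p = prime_bits * math.log(2)
    return c * ln_p ** (1 / 3) * math.log(ln_p) ** (2 / 3) / math.log(2)

print(round(gnfs_bits(1024)))  # ~87 bits of work: within nation-state reach
print(round(gnfs_bits(2048)))  # ~117 bits: roughly a billion times harder
```

The ~30-bit jump between the two sizes is why 2048-bit groups are considered out of reach even with precomputation spread across a shared prime.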
[+] higherpurpose|11 years ago|reply
> A third solution is to migrate to elliptic-curve cryptography (ECC), which as far as anyone knows today, is much less vulnerable to descent attacks than the original Diffie-Hellman scheme. Alas, there’s been a lot of understandable distrust of ECC after the Dual_EC_DRBG scandal, in which it came out that the NSA backdoored some of NIST’s elliptic-curve-based pseudorandom generators by choosing particular elliptic curves that it knew how to handle.

But maybe the right lesson to draw is mod-p groups and elliptic-curve groups both seem to be pretty good for cryptography, but the mod-p groups are way less good if everyone is using the same few prime numbers p, and the elliptic-curve groups are way less good if everyone is using the same few elliptic curves. (A lot of these things do seem pretty predictable with hindsight, but how many did you predict?)

And that's why we need browsers to support curves other than NIST's P-256 and such. I know Chrome intends to, and I imagine Firefox isn't far behind. What's Microsoft's plan for the Edge browser regarding this? I haven't seen them say anything about it in any of their recent Edge-related posts.
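At the application level (as opposed to inside a browser's TLS stack), Curve25519 key agreement is already straightforward to exercise. A sketch using the third-party Python `cryptography` package's X25519 API (an illustration under that assumption, not a statement about any browser's plans):

```python
# X25519: Diffie-Hellman over Curve25519, one of the non-NIST curves
# discussed above. Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key.
shared_a = alice.exchange(bob.public_key())
shared_b = bob.exchange(alice.public_key())
assert shared_a == shared_b
assert len(shared_a) == 32  # 32-byte shared secret
```

Note the curve and its parameters are fixed by the X25519 specification; there are no per-deployment domain parameters to pick, which is part of the appeal.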

[+] netheril96|11 years ago|reply
Can anyone explain to me why some implementations only support 512-bit or 1024-bit parameters? Aren't the algorithms the same for all sizes? Why can't a given implementation handle arbitrarily large parameters?
[+] alexbecker|11 years ago|reply
Probably because, for performance reasons, much of this is hard-coded: buffers of a fixed length, etc.
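The math itself really is size-agnostic, which is worth seeing concretely. A toy sketch of the core DH computation in Python, whose arbitrary-precision integers handle any modulus width (toy primes for illustration only, not secure groups):

```python
# The core DH operation is plain modular exponentiation; nothing in the
# algorithm fixes a size. Hard limits come from implementation choices
# (fixed buffers, wire-format fields), not from the math.
import secrets

def dh_shared(p, g):
    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
    A, B = pow(g, a, p), pow(g, b, p)  # exchanged public values
    return pow(B, a, p), pow(A, b, p)  # each side's computed secret

# Identical code across wildly different modulus sizes (toy primes only).
for p, g in [(23, 5), (2**61 - 1, 3), (2**127 - 1, 3)]:
    s1, s2 = dh_shared(p, g)
    assert s1 == s2
```

So a 512-bit-only implementation is a deliberate (or legacy) restriction layered on top of a size-independent algorithm.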
[+] ColinWright|11 years ago|reply
A random thought: Perhaps excellent implementations explicitly limit themselves to a specific key size, like 1024 bits, because then the implementation can ensure constant-time computations. We know that there are exotic timing attacks, and these would be more difficult to thwart if we have to use general implementations to cope with arbitrary length keys.

Is that a reasonable argument for the inertia over key sizes?

[+] xnull2guest|11 years ago|reply
I don't think so. The push for constant-time implementations is fairly new, whereas the key-size issue has a long history. What I hear all the time in practice is speed and backwards compatibility.
[+] dlitz|11 years ago|reply
Hopefully now OpenSSH (and the SSH RFCs) will drop "diffie-hellman-group1-sha1" from the default list of allowed KexAlgorithms.