item 6733806

A Roster of TLS Cipher Suites Weaknesses

43 points | dpifke | 12 years ago | googleonlinesecurity.blogspot.com

19 comments

[+] zhuzhuor|12 years ago|reply
> The second nit with AES-GCM is that, as integrated in TLS, implementations are free to use a random nonce value. However, the size of this nonce (8 bytes) is too small to safely support using this mode. Implementations that do so are at risk of a catastrophic nonce reuse after sending on the order of a terabyte of data on a single connection. This issue can be resolved by using a counter for the nonce but using random nonces is the most common practice at this time.

I don't know how you integrate AES-GCM with TLS, but I have to say:

1. Standard AES-GCM uses 96-bit nonces. That's 12 bytes, not the 8 bytes mentioned in the article.

2. A nonce is a nonce. It shouldn't be chosen at random (like random IVs). As long as nonces are never reused, GCM should be secure.

3. I don't believe implementing a secure random number generator is more efficient than maintaining an incremental counter.

Edited for typos

[+] agl|12 years ago|reply
> The secure AES-GCM supports 96-bit nonces

That's correct. However, TLS takes four bytes from the handshake key material and uses them as the first four bytes of the nonce. The remaining 8 bytes are all that vary over the lifetime of the connection.
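This can be made concrete with a quick calculation. With only the 8-byte explicit part of the nonce varying per record (the layout RFC 5288 specifies for GCM in TLS), choosing it at random runs into the birthday bound; a counter sidesteps the problem entirely. A sketch using the standard birthday approximation:

```python
import math

SALT_LEN = 4      # bytes fixed per connection, taken from TLS key material
EXPLICIT_LEN = 8  # bytes that actually vary per record (64 bits)

def collision_probability(records: int, nonce_bits: int = EXPLICIT_LEN * 8) -> float:
    """Birthday approximation: P(repeat) ~= 1 - exp(-n(n-1) / 2^(b+1))."""
    return 1.0 - math.exp(-records * (records - 1) / 2.0 / 2.0 ** nonce_bits)

def counter_nonce(salt: bytes, seq: int) -> bytes:
    """Counter-based explicit nonce: cannot repeat before 2^64 records."""
    return salt + seq.to_bytes(EXPLICIT_LEN, "big")

# With random explicit nonces, ~2^32 records already gives a ~39% chance
# of a repeated nonce -- and a single repeat is catastrophic for GCM.
p = collision_probability(2 ** 32)
```

The counter version produces the full 12-byte (salt + explicit) nonce and, unlike the random version, gives a hard guarantee of uniqueness for the lifetime of the connection.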

[+] tlsrc4|12 years ago|reply
It's worth noting that Jacob Appelbaum, who has worked on the Snowden documents alongside Laura Poitras, claims that TLS has been broken in real time by the NSA: https://twitter.com/ioerror/status/398059565947699200

In which case AES-CBC is almost certainly preferable to RC4, even with its flaws.

[+] tptacek|12 years ago|reply
Never say never, but, this seems unlikely.

The nature of the RC4 flaw isn't such that attackers grind on a single ciphertext with fast compute. The problem is rather a series of statistical biases that recur at intervals in the keystream. The time you spend attacking RC4 isn't due to compute, but rather due to the number of samples you need to collect to leverage the biases to recover plaintext. You can imagine improvements on the attack that would require fewer samples, but probably not improvements that would get you down to double-digit samples.

There may indeed be a lot of room for attacks on RC4 to improve, and improve in ways that outpace (nonexistent) countermeasures. I think RC4 is scarier than CBC padding timing. But a real-time attack on RC4 would seem to imply a radically different attack on RC4 than any the literature has hinted at.
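The statistical character of these biases is easy to demonstrate. The following is only an illustrative sketch (not any attack from the literature): the classic Mantin-Shamir observation is that RC4's second keystream byte is 0 about twice as often as chance, which sampling a few thousand random keys makes visible:

```python
import os

def rc4_keystream_prefix(key: bytes, n: int) -> bytes:
    """Standard RC4: key scheduling (KSA) followed by n bytes of PRGA output."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Mantin-Shamir bias: the second keystream byte is 0 with probability
# ~2/256, roughly double the ~1/256 a uniform stream would give.
trials = 20000
zeros = sum(rc4_keystream_prefix(os.urandom(16), 2)[1] == 0
            for _ in range(trials))
```

Recovering plaintext from biases like this requires the same byte to be encrypted under many keys (or at many positions), which is why sample count, not raw compute, is the bottleneck.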

[+] bennyg|12 years ago|reply
How related are these encryption vulnerabilities to encryption we may do in our own software (stuff that doesn't necessarily depend on SSL connections)? As in, is AES-CBC still okay to use if I'm using it correctly in a program that is self-contained (i.e. it's not going over the net and using AES-CBC to encrypt the communication between server and program)?
[+] agwa|12 years ago|reply
The RC4 vulnerability is in the algorithm, not in TLS. Do not use RC4 in your own software. (If you absolutely must, you can avoid this vulnerability by using a variant of RC4 that discards the first several thousand bytes of keystream, but please just use something else.)

The AES-CBC vulnerabilities are specific to TLS, so as long as you don't repeat the same mistakes TLS made, you won't have the same vulnerabilities. Specifically, use encrypt-then-MAC (instead of MAC-then-encrypt) to avoid padding oracle attacks like Lucky 13, and choose a fresh IV for each message instead of reusing the previous message's last ciphertext block (to avoid BEAST). Or better yet, use a high-level crypto library that doesn't make you worry about this stuff.
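A minimal sketch of the encrypt-then-MAC shape described above, using only Python's stdlib. The "cipher" here is a toy SHA-256-based keystream standing in for a real cipher like AES purely so the example is self-contained; the two points being illustrated are the fresh random IV per message and authenticating the ciphertext, verified before any decryption happens:

```python
import hashlib, hmac, os

def _keystream(key: bytes, iv: bytes, n: bytes) -> bytes:
    """Toy keystream -- a stand-in for a real cipher; do not use in production."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)  # fresh random IV per message (avoids BEAST-style chaining)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, iv, len(plaintext))))
    # Encrypt-then-MAC: the tag covers the ciphertext (and IV), not the plaintext.
    tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    return iv + ct + tag

def open_(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    iv, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        # Reject before decrypting: no padding or decryption oracle to probe.
        raise ValueError("bad MAC")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, iv, len(ct))))
```

Because the MAC check happens first and fails uniformly, an attacker who flips ciphertext bits never gets the differentiated error behavior that padding oracle attacks depend on.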

[+] tptacek|12 years ago|reply
They are highly related. For instance, if you are using AES-CBC in your own program, but are not composing it with a MAC properly, you too are likely vulnerable to a padding oracle attack, and that attack is likely to be much easier to execute than Lucky 13.

In reality, you can safely look at TLS as a "best case" scenario; the number of vulnerabilities that commonly arise from homegrown crypto ("using AES in a self-contained program") are a large superset of the ones that have arisen in TLS.

[+] bradleyjg|12 years ago|reply
When I was looking through a list of browser supported PFS TLS suites, I came across this one which is apparently supported across versions of IE that otherwise don't support PFS:

DHE-DSS-AES256-SHA

I know DHE is slow, but what really sticks out like a sore thumb is DSS. Is it known to be broken?

[+] ctz|12 years ago|reply
That ciphersuite has the following problems:

- DHE: the way ephemeral DL-DH works in TLS is unfortunately misdesigned. The client first offers DHE-* ciphersuites, then the server sends a DL group and public key in that group. The problem arises because now the client cannot:

* reject that group as having too small a modulus to possibly meet the client's security requirements,

* reject that group because it doesn't support moduli that big (Java's SSL stack does this -- it doesn't support >1024-bit DH moduli -- and therefore breaks against more aggressive servers when they select DHE ciphersuites),

* check that the subgroup is of a suitable size to meet the client's security requirements (I'm not sure whether SSL predated the Lim-Lee paper here, but certainly later SSL standards didn't bother to fix it)

- DSS: fine, but the security-performance profile is similar to RSA, with the exception that verification in DSA is much slower than RSA. That's the reason it's mostly overlooked in favour of RSA.

- AES: as mentioned in the article.

So, yes. It's about as good as other SSL-era ciphersuites, which is to say: slow, badly designed, and mostly broken :)
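The check a client would like to apply to the server-chosen group (its only real recourse in TLS's DHE design being to abort the handshake after the server has already committed) might look like this hypothetical validator; the 2048-bit threshold is just an illustrative policy choice:

```python
def acceptable_dh_group(p: int, min_modulus_bits: int = 2048) -> bool:
    """Reject a server-chosen DH group whose modulus is too small.

    In TLS's DHE design the client only learns the group after offering
    DHE-* ciphersuites, so 'rejecting' in practice means aborting.
    """
    return p.bit_length() >= min_modulus_bits
```

A 1024-bit modulus, like the limit in the Java stack mentioned above, would fail a 2048-bit policy, while the reverse problem (a client that can't handle large moduli) can't be expressed at all until the handshake is already underway.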

[+] sdevlin|12 years ago|reply
Nit: "Paterson", not "Peterson".
[+] agl|12 years ago|reply
Doh! Thanks. I'll ask PR to fix that.