
xnull|11 years ago

> Timing attacks are often the result of optimisations within the crypto library which inadvertently give away information, for example a loop which breaks on X != Y, instead of setting a failed = false bool and continuing to iterate through the rest of the array.

I would say this is false. Simple timing differences caused by cache-line eviction in table-lookup implementations of AES provide a very strong timing attack. (http://cr.yp.to/antiforgery/cachetiming-20050414.pdf)

In RSA (and in fact in DL-based cryptosystems generally), modular exponentiation without extreme care leaks tons of timing information about the private exponent. 'Blinding' is one way to handle this, but performant solutions typically fiddle at the bit level and exploit CPU features to minimize branch-prediction, cache-line, and similar leaks.
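For reference, blinding works roughly as sketched below. This is a minimal toy illustration, not production code: it uses the classic textbook parameters (n = 3233, e = 17, d = 2753) and a fixed blinding factor, whereas a real implementation would draw a fresh random r per operation and use a constant-time bignum library.

```c
#include <stdint.h>

/* Square-and-multiply modular exponentiation. This routine is
 * itself not constant-time; the point here is only to show how
 * blinding decorrelates the exponentiation input from the
 * attacker-known ciphertext. */
static uint64_t modpow(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

/* Blinded RSA decryption: instead of computing c^d mod n directly,
 * multiply the ciphertext by r^e first, so the secret-exponent
 * exponentiation runs on a value the attacker cannot predict,
 * then strip the blind with r^-1 mod n afterwards:
 *   (c * r^e)^d = c^d * r^(ed) = m * r  (mod n)            */
uint64_t rsa_decrypt_blinded(uint64_t c, uint64_t d, uint64_t e,
                             uint64_t n, uint64_t r, uint64_t r_inv) {
    uint64_t blinded = (c * modpow(r, e, n)) % n; /* c * r^e mod n */
    uint64_t m_blind = modpow(blinded, d, n);     /* m * r mod n   */
    return (m_blind * r_inv) % n;                 /* remove blind  */
}
```

With the toy modulus, r = 7 and r_inv = 462 satisfy r * r_inv ≡ 1 (mod 3233), and decryption recovers the same plaintext as the unblinded computation.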

In higher-level languages you cannot take absolute control of a crypto implementation, and the JIT adds another layer of obfuscation (though I know of no attack exploiting that...).

The way out for memory-safe languages is to provide built-in crypto operations that have been implemented at a lower level.

ufo|11 years ago

I'm curious now: how do AES implementations nowadays avoid the timing attack explained in that paper? From what I understand, it's very hard to write an efficient AES implementation without input-dependent table lookups.

xnull2guest|11 years ago

For the most part by bitslicing. Some implementations calculate the S-box explicitly using the algebraic relationships in the finite field, but doing so is awfully slow.
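To make the bitslicing idea concrete, here is a toy sketch using a hypothetical 2-bit S-box (AES's real S-box is 8-bit and needs a circuit of roughly a hundred gates). Each machine word holds one bit position from 32 independent inputs, so the table lookup becomes pure boolean logic with no data-dependent memory access at all:

```c
#include <stdint.h>

/* Toy 2-bit S-box for illustration: S(x) = (x + 1) mod 4. */
static const uint8_t SBOX[4] = {1, 2, 3, 0};

/* Bitsliced version: x1 and x0 each hold bit 1 / bit 0 of 32
 * independent inputs, one per lane. The lookup is replaced by
 * boolean formulas derived from the truth table:
 *   y0 = ~x0
 *   y1 = x1 ^ x0
 * Execution time is identical for every input, and all 32 lanes
 * are processed in parallel. */
void sbox_bitsliced(uint32_t x1, uint32_t x0,
                    uint32_t *y1, uint32_t *y0) {
    *y0 = ~x0;
    *y1 = x1 ^ x0;
}
```

The win is twofold: no secret-dependent loads (so no cache-timing leak from the table), and word-level parallelism across many blocks, which is why bitsliced AES can stay competitive with table-based code.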

Someone1234|11 years ago

You say it's "false" but then fail to explain why. None of your examples offer that, and your whole explanation boils down to "managed languages are more complex, therefore worse."

Please point me to the specific native features which mitigate timing attacks. Because the majority of fixes I have seen purely alter the libraries themselves, using high-level constructs to remove hot paths and make both the failure and success paths take constant time to execute (which has nothing to do with managed/unmanaged code).
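The constant-time rewrite being described can be sketched generically as follows; this is an illustration of the pattern, not code from any particular library:

```c
#include <stddef.h>
#include <stdint.h>

/* Leaky comparison: returns as soon as a byte differs, so running
 * time depends on the position of the first mismatch -- exactly the
 * early-exit loop a timing attacker probes for. */
int leaky_compare(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time comparison: always walks the whole array and
 * accumulates any differences into a single word, so success and
 * failure take the same number of iterations. */
int ct_compare(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

One caveat that cuts both ways in this debate: even in C, an optimizing compiler may transform the second loop back into an early-exit form unless barriers or volatile accesses are used, so "constant time in the source" is not automatically constant time in the emitted machine code.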

tptacek|11 years ago

The issue isn't that native code has special features that mitigate timing attacks. It's that you can look at native code and predict its side effects more easily than you can with high-level code.

Another important difference between native code and high-level code is that timing leaks in high-level code tend to be larger. For instance, it's very difficult to exploit a memcmp timing leak in practice. But Java's string comparison, depending on your JVM, is exploitable over the Internet.

For what it's worth: I wouldn't select C over Java simply to avoid timing attacks. Side channels in JVM code are a legit concern, but not a dispositive one.

xnull|11 years ago

> whole explanation boils down to "managed languages are more complex, therefore worse."

I hope that's not what I said...

> Please point me to the specific native features which mitigate timing attacks.

How am I supposed to implement bitslicing to vectorize operations in Java? I can't. Fine-grained control of code is important for implementations of ciphers that are both fast and side-channel-free, and fine-grained control isn't something Java can give you, by definition.

Take the 'countermeasures' section of 'Efficient Cache Attacks on AES, and Countermeasures' (http://www.cs.tau.ac.il/~tromer/papers/cache-joc-20090619.pd...).

I count exactly two countermeasures that apply to high level languages. Of the first they say "We conclude that overall, this approach (by itself) is of very limited value" and of the second "beside the practical difficulties in implementing this, it means that all encryptions have to be as slow as the worst case... neither of these provide protection against prime+probe/etc".

The rest of the countermeasures suggest bitslicing, direct calls to hardware instructions, memory-alignment tricks, invocation of hardware modes (e.g. to disable caching), forcing cache evictions, normalizing cache state during interrupt processing, etc.

It is simply the case that high-level languages do not offer the flexibility and control needed to implement side-channel-free crypto.

Crypto is brittle. High-level languages are awesome for so many things, but bitslicing isn't one of them. The entire premise of a high-level language is that you are freed from working directly on the innards of the specific target architecture. The entire premise of side-channel-free crypto is that you need visibility into and control of exactly those things.