i2km | 11 months ago
The previous comment was suggesting making sure that every code path takes the same amount of time, not adding a random delay (which doesn't work).
And while I agree that power-analysis attacks etc. still apply, the over-arching context here is just timing analysis.
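For concreteness, "every code path takes the same amount of time" usually means something like the classic constant-time byte comparison: the loop never exits early, so the runtime depends only on the input length, not on where the first mismatch is. A minimal sketch (Python's stdlib ships this as `hmac.compare_digest`):

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    # Lengths are public here; bail early only on a length mismatch.
    if len(a) != len(b):
        return False
    # OR together the XOR of every byte pair. No early exit, so the
    # loop runs the same number of iterations regardless of contents.
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

In practice you would use `hmac.compare_digest` rather than rolling your own, since a sufficiently clever compiler or JIT can still undo hand-written versions of this pattern.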
some_furry | 11 months ago
I'm not just echoing the argument made by the link, though. I'm adding to it.
I don't think the "cap runtime to a minimum value" strategy will actually help, due to how much jitter your cap measurements will experience from the normal operation of the machine.
If you filter the jitter out when measuring, you'll end up capping too low, so some values will be above it. For a visualization, pretend that you capped the runtime at 80% of what it actually takes in the real world: every run in the slowest 20% blows past the cap, and those overruns are exactly what an attacker measures.
Alternatively, let's say you cap it sufficiently high that there's always some slack time at the end. Will the kernel switch away to another process on the same machine?
If so, will the time between "the kernel has switched to another process since we're really idling" to "the kernel has swapped back to our process" be measurable?
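The strategy being critiqued is roughly this pad-to-a-fixed-deadline wrapper (the names and the 1 ms cap are hypothetical, just to make the shape concrete):

```python
import time

DEADLINE = 0.001  # hypothetical 1 ms cap; choosing this number is the hard part

def padded(op, *args):
    # Run op, then sleep until a fixed deadline so callers (ideally)
    # observe a constant total duration for every input.
    start = time.monotonic()
    result = op(*args)
    remaining = DEADLINE - (time.monotonic() - start)
    if remaining > 0:
        # Sleep granularity, scheduler wakeup latency, and runs that
        # exceed DEADLINE entirely are where the signal leaks back in.
        time.sleep(remaining)
    return result
```

The wrapper only bounds the time from below: `time.sleep` guarantees *at least* the requested delay, and the scheduler decides when the process actually resumes, which is precisely the jitter and context-switch concern raised above.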
It's better to make sure your algorithm is actually constant-time, even if that means fighting compilers' and hardware vendors' decisions.