top | item 42588697


mras0 | 1 year ago

I happened to look at this recently, and while I understand the argument (but not the math) of having to do fewer Miller-Rabain rounds, why would you do so in PRACTICAL settings? Unlike ECC you're likely only generating long term keys, so shorter key generation time seems like a bad tradeoff. Composite candidates are going to be rejected early, so you're (with high probability) not doing expensive calculations for most candidates. My reading of [BSI B.5.2](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publicat...) confirms this.

Of course random bit flips could interfere, but other measures should thwart this in high-stakes environments (at least to some degree).
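For anyone following along, here's a minimal sketch of the textbook Miller-Rabin test (not any particular library's implementation); the "rejected early" point above is visible in the structure: a composite survives each random round with probability at most 1/4 in the worst case, so nearly all composite candidates fail on the first round, and only the final probable prime pays for all the rounds:

```python
import random

def miller_rabin(n: int, rounds: int) -> bool:
    """Probabilistic primality test. Returns False for composites
    except with probability at most 4**-rounds (worst-case bound)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # cheap trial division first
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)  # random base for this round
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue  # this round gives no evidence of compositeness
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # probably prime
```

A composite like 10403 (= 101 * 103) is almost always caught by the very first base tried, so the expensive multi-round cost only applies to the rare candidate that is actually prime.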

FiloSottile | 1 year ago

The number of Miller-Rabin rounds has to be bounded, so if you're not going to base your bound on reaching a cryptographically negligible chance of false positives, what are you going to base it on? Should we do 10? 15?

The problem with "x should be enough, but why not do more?" arguments is that they can be applied recursively, and never answer the question "ok so when should we stop?"
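To make the "negligible chance" criterion concrete: the worst-case per-round error of Miller-Rabin is at most 1/4, so k rounds give an error bound of 4^-k = 2^-2k, and picking a target security level fixes k (this arithmetic is mine, not from the thread; average-case bounds for randomly generated candidates are far stronger, which is the usual argument for fewer rounds):

```python
import math

def rounds_for_error(bits: int) -> int:
    """Smallest k such that the worst-case Miller-Rabin error
    4**-k is at most 2**-bits."""
    # 4**-k = 2**-(2k) <= 2**-bits  <=>  k >= bits / 2
    return math.ceil(bits / 2)

# e.g. a 2**-128 worst-case bound needs 64 rounds,
# and 2**-100 needs 50.
```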

mras0 | 1 year ago

The BSI recommendation I linked calls for 60 rounds here (considering the worst case and the security level they're targeting). I'm just wondering why we'd want to do fewer for a (presumably rare) one-time operation.