I happened to look at this recently, and while I understand the argument (but not the math) for doing fewer Miller-Rabin rounds, why would you do so in PRACTICAL settings? Unlike ECC, you're likely only generating long-term keys, so a shorter key-generation time seems like a bad tradeoff. Composite candidates are going to be rejected early, so (with high probability) you're not doing expensive calculations for most candidates. My reading of [BSI B.5.2](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publicat...) confirms this. Of course, random bit flips could interfere, but other measures should thwart this in high-stakes environments (at least to some degree).
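To make the "composites are rejected early" point concrete, here's a minimal, hedged sketch of Miller-Rabin in Python (textbook version, not the BSI-specified procedure): cheap trial division by small primes throws out most random candidates immediately, and even when a composite survives that, the very first witness round usually exposes it, so the full `rounds` cost is only ever paid for the final (almost certainly prime) candidate.

```python
import random

SMALL_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_probable_prime(n: int, rounds: int) -> bool:
    """Textbook Miller-Rabin with a trial-division pre-filter."""
    if n < 2:
        return False
    # Cheap pre-filter: most random candidates die here, long before
    # any modular exponentiation happens.
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            # Composite detected; in key generation you'd move on to
            # the next candidate at this point, so extra rounds are
            # never run against rejected composites.
            return False
    return True
```

For a random odd composite that slips past trial division, a single round detects it with probability at least 3/4 (and in practice far higher), which is why the marginal cost of a generous round count lands almost entirely on the one candidate you keep.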
FiloSottile|1 year ago
The problem with "x should be enough, but why not do more?" arguments is that they can be applied recursively, and never answer the question "ok so when should we stop?"
mras0|1 year ago