top | item 42586965

JacobiX | 1 year ago

It is fascinating (to me at least) that almost all RSA implementations rely on a probabilistic primality test, even though the probability of picking a pseudoprime is extremely small.
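For reference, the test in question is usually Miller-Rabin. Here is a minimal illustrative sketch in Python; it is not what any real RSA library ships (production code adds more extensive trial division, incremental search, and a CSPRNG), but it shows the shape of the probabilistic test:

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test (illustrative sketch).

    A composite n passes a single round with probability at most 1/4,
    so `rounds` independent rounds bound the false-positive probability
    by 4**-rounds (2**-80 here with the default 40 rounds)."""
    if n < 2:
        return False
    # Cheap trial division by small primes rejects most candidates early.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True  # n is prime with overwhelming probability
```

The key property is the one the thread is about: a "pass" is never a proof of primality, only a bound on the probability of error.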

FiloSottile | 1 year ago

There's extremely small (like 2⁻²⁰, the chance of winning $50 000 with a $2 Powerball ticket [1]), and then there's cryptographically negligible (like 2⁻¹²⁰, the false negative chance of our primality test). The chance of something cryptographically negligible happening is about the same as the chance of the attacker guessing your key on the first try, or of a cosmic ray flipping a CPU flag inverting the result of "if !signature.verified { return err }".

[1]: https://www.powerball.com/powerball-prize-chart

mras0 | 1 year ago

I happened to look at this recently, and while I understand the argument (but not the math) for being able to do fewer Miller-Rabin rounds, why would you do so in PRACTICAL settings? Unlike ECC, you're likely only generating long-term keys, so a shorter key-generation time seems like a bad tradeoff. Composite candidates are going to be rejected early, so (with high probability) you're not doing expensive calculations for most candidates. My reading of [BSI B.5.2](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publicat...) confirms this.

Of course random bit flips could interfere, but other measures should thwart this in high-stakes environments (at least to some degree).
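The early-rejection point above is easy to sanity-check numerically: a single Miller-Rabin round with base 2 already eliminates almost every random odd composite, so the cost of additional rounds is paid almost exclusively on candidates that turn out to be prime. A small illustrative script (the helper and the 256-bit size are my own choices, not taken from BSI B.5.2):

```python
import random

def mr_round(n: int, a: int) -> bool:
    """One Miller-Rabin round for odd n > 2: True if n passes for base a.

    Note a pass is not proof: e.g. 2047 = 23 * 89 passes for base 2."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

random.seed(1)  # deterministic demo
trials = 200
survivors = 0
for _ in range(trials):
    # Random odd 256-bit candidate (top and bottom bits forced to 1).
    n = random.getrandbits(256) | 1 | (1 << 255)
    if mr_round(n, 2):
        survivors += 1

# By the prime number theorem, only about 2/ln(2**256), i.e. roughly 1%,
# of random odd 256-bit numbers are prime, so nearly all of the 200
# candidates are composite, yet almost none survive even one round.
print(f"{survivors} of {trials} candidates passed a single round")
```

In other words, the expensive part (many rounds) runs only on the handful of survivors, which is why shaving rounds saves little in practice.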