Cascais | 1 year ago
Some remarks that I found interesting on the topic:
- While compatibility/reliability are 110% nice (compatibility being defined as "it works"), that doesn't mean full stability in generating entropy. "Components may be perfect; the composition (all of them together) can still be flawed", where the components are: device hardware, device OS, and device software (key generation).
- "In low-margin devices there aren't high-quality entropy sources to rely on", so it's harder to know for sure that a key was well generated.
- A large-scale survey of RSA keys enabled the detection of entropy failures that manifested in the RSA keys of millions of devices. The most affected product families were lower-margin devices past their end-of-support date.
https://www.acsac.org/2023/program/final/s111.html
https://www.acsac.org/2023/files/web/slides/chi-111-weakrsak...
https://samvartaka.github.io/cryptanalysis/2017/01/03/33c3-e...
dlenski | 1 year ago
The 2012 Heninger paper (https://www.usenix.org/system/files/conference/usenixsecurit...) found quite a high number of duplicate TLS keys across seemingly independent hosts and attributed it to this issue:
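The core trick behind that detection is worth spelling out: if two devices with bad entropy happen to generate RSA moduli that share one prime factor, a single GCD computation factors both keys. A minimal sketch of the idea (a naive pairwise scan with toy-sized primes; the paper actually used a much faster batch-GCD product tree to handle millions of moduli):

```python
from math import gcd

def find_shared_factors(moduli):
    """Naive pairwise-GCD scan over a list of RSA moduli.

    If moduli[i] and moduli[j] share a prime p, then gcd of the pair
    reveals p, and both keys are immediately factored. Returns a dict
    mapping the index of each broken modulus to its factor pair.
    """
    broken = {}
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:  # nontrivial shared factor
                broken[i] = (g, moduli[i] // g)
                broken[j] = (g, moduli[j] // g)
    return broken

# Toy demo (real RSA primes are ~1024 bits): the first two moduli
# share the prime 101, so both are factored; the third is unaffected.
keys = [101 * 103, 101 * 107, 11 * 13]
print(find_shared_factors(keys))  # {0: (101, 103), 1: (101, 107)}
```

The O(n²) loop is only for illustration; the batch-GCD approach computes the product of all moduli once and then a remainder tree, which is what made scanning the entire public TLS keyspace feasible.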
Over the next few years there was quite a lot of work, including in the Linux kernel, on improving the entropy sources available to such devices, and making them more foolproof to use. https://lwn.net/Articles/724643/
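One concrete outcome of that work is the getrandom() system call, which blocks until the kernel's entropy pool has been initialized, closing exactly the first-boot window in which embedded devices used to generate weak keys. A minimal sketch of the foolproof path as it looks from userspace today, using Python's standard secrets module (which draws on the OS CSPRNG):

```python
import secrets

# secrets.token_bytes() pulls from the operating system's CSPRNG
# (backed by getrandom()/urandom on modern Linux). Unlike the old
# non-blocking /dev/urandom, getrandom() will not return data at
# early boot before the pool is seeded, so key material generated
# this way cannot silently come from an uninitialized pool.
seed = secrets.token_bytes(32)  # 256 bits of OS-provided randomness
print(len(seed))
```

The design point is that the safe call is also the simple one: no pool-state checks or fallback logic for the application to get wrong.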
The issues identified in this survey are related but distinct. The Debian weak keys generated in 2006-8 are due to a straight-up bug in Debian's OpenSSL package, and RSA keys that are simply too small are an orthogonal problem. I found far fewer "inexplicable duplicate" TLS keys than Heninger et al. did in 2012.
Cascais | 1 year ago