throwaway092834 | 12 years ago
There can be benefits from depending on non-portable implementation details but also significant drawbacks.
throwaway092834|12 years ago
There are many standards and cross-platform interfaces defined outside of POSIX. Some explicit and top-down, some organically evolved.
The spec for /dev/*random, as published in the mid-90s, is here: http://git.kernel.org/cgit/linux/kernel/git/stable/linux-sta...
bostik|12 years ago
Applications, yes. Appliances built on it - now that's more open to interpretation.
Back in 2003/2004 I was building a centrally managed security appliance system. At the time I made the hard choice that first-boot operations (such as generating long-term device keys) MUST use /dev/random. It made the initial installs take longer, but I refused to take the chance that an attacker could install and instrument a few hundred nodes and find out possible problems with entropy sources.
Once the first-boot sequence was over, applications used /dev/urandom for everything. This included the ipsec daemons. Forcing everything to /dev/random during first boot made sure that on subsequent boots there would be (for all practical purposes) enough entropy available for urandom to work securely.
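The policy described above can be sketched as a small helper. This is a hypothetical illustration, not the appliance's actual code, and it assumes a Linux system where /dev/random blocks until the kernel entropy pool is seeded (true on the kernels of that era; since Linux 5.6 the two devices behave nearly identically once seeded):

```python
import os

def generate_key(nbytes: int, first_boot: bool) -> bytes:
    """Read key material from the kernel RNG.

    On first boot, use the blocking /dev/random so long-term keys
    wait for real entropy on a freshly installed (possibly
    entropy-starved, virtualized) node. Afterwards, /dev/urandom
    is fine for everything, since the pool is known to be seeded.
    """
    device = "/dev/random" if first_boot else "/dev/urandom"
    with open(device, "rb") as f:
        return f.read(nbytes)

# Routine (post-first-boot) use: non-blocking, suitable for daemons.
session_key = generate_key(32, first_boot=False)
```

The design choice is that the blocking read is paid exactly once, at install time, in exchange for never generating a long-term key from an unseeded pool.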
The first-boot problems were amplified by the fact that we were running our nodes inside virtualization. (At the time: UML, and we built our own on top of it. Xen wasn't nearly ready enough back then.)
It's fascinating to see that the problems we had to deal with 10 years ago are now becoming an issue again. To this day I choose to use /dev/random if I need to generate key material shortly after boot (which could be at install time), or for my own long-term use. Good thing personal GPG keys have a shelf-life of several years...