item 8033779

LibreSSL's PRNG is Unsafe on Linux

179 points | agwa | 11 years ago | agwa.name

146 comments

[+] quotemstr|11 years ago|reply
There's also pthread_atfork. Use that to reset the PRNG. It's a bad interface, but it'll work for this purpose. It bothers me when people very solemnly and seriously condemn systems for problems that are, in fact, amenable to reasonable solutions.
[+] agwa|11 years ago|reply
> There's also pthread_atfork.

That requires linking with libpthread, which a single-threaded program would not normally do. Otherwise, it's not a bad suggestion.

Still, on top of everything LibreSSL does to automatically detect forks, it should still expose a way to explicitly reseed the PRNG in an OpenSSL-compatible way, since OpenSSL has made guarantees that certain functions will re-seed the PRNG, and there may be some scenarios where even the best automatic fork detection fails (imagine a program calling the clone syscall directly for whatever reason, in which case pthread_atfork handlers won't be called). Since LibreSSL is billed as a drop-in replacement for OpenSSL, you should not be able to write a valid program that's safe under OpenSSL's guarantees but not when linked with LibreSSL.

[+] colmmacc|11 years ago|reply
http://yarchive.net/comp/linux/getpid_caching.html

Reading that thread it seems like direct calls to clone(2) can bypass at least glibc's pid cache (which would likely also break LibreSSL's approach).

Any idea if direct calls to clone(2) also bypass pthread_atfork?

[+] xorcist|11 years ago|reply
Why would you want to reseed the PRNG?

If the PRNG is good enough (no visible correlation in the statistics tests you can imagine thrown at it), and it's properly seeded with true randomness, then isn't everything peachy?

I am much more afraid of the seeding part of it than the actual algorithm. The algorithms are well studied by smart people, the actual implementation and seeding aren't always.

The mere fact that one could reseed the PRNG makes me nervous. That could be used in devious ways. But I am not a cryptographer, not even a mathematician, so don't take my word for it!

Am I wrong here? Why?

[+] syncsynchalt|11 years ago|reply
As a developer who's integrated with openssl several times (generally on Linux), I couldn't be more pleased with the results coming out of the libreSSL effort.

Even if we end up with a list of linux-specific gotchas (and I don't think we will), it is more a case of ten steps forward, one step back.

[+] hsivonen|11 years ago|reply
Can we, please, have a syscall in Linux that returns random bytes from the system CSPRNG or blocks if not seeded yet and doesn't involve dealing with file descriptors?

But even while one isn't available, why is LibreSSL trying to use a userland CSPRNG instead of always reading from /dev/urandom and aborting when that fails?

[+] AlyssaRowan|11 years ago|reply
Yes. Actually, the relevant IETF list is now calling for that: Linux needs getentropy(2). I may cook up my own and submit it to LKML, or perhaps someone else can, but there's no way out of this one without kernel support.

I don't know why the rest of the function even exists. It's the kind of cruft libReSSL is trying to get rid of.

I am not entirely sure a PRNG should even exist in the library, and personally, I'd pass it onto /dev/urandom or /dev/random or the relevant syscall.

I agree with making it (154-156) a hard kill for a TLS library not to be able to get entropy.

And, this is great! This is exactly the kind of thing we're able to find now that some of the code isn't a hedge-maze.

[+] ot|11 years ago|reply
What are the advantages of a syscall in place of /dev/urandom?
[+] rlpb|11 years ago|reply
Why must it have its own PRNG? Is there a problem asking the kernel (via /dev/urandom) for all required entropy, at the time it is needed? Or would this cause a real-world performance problem?

Surely this is an obvious first question that all commentators are stepping over?

[+] orik|11 years ago|reply
It isn't about performance; rather, the OpenBSD developers believe /dev/urandom is a poor source of entropy.

I believe the heart of the issue is that /dev/urandom will give you data even if it has very low entropy at the time.

You can find all sorts of articles for and against /dev/urandom, and I don't really know enough to comment on its security, but I trust the team working on this fork more than I trust the OpenSSL foundation.

[+] Buge|11 years ago|reply
It mentions chroot jails in which you can't access /dev/urandom.
[+] dobbsbob|11 years ago|reply
>First, LibreSSL should raise an error if it can't get a good source of entropy.

Comments for getentropy_linux.c explain this http://www.openbsd.org/cgi-bin/cvsweb/src/lib/libcrypto/cryp...

We have very few options:

- Even syslog_r is unsafe to call at this low level, so there is no way to alert the user or program.

- Cannot call abort() because some systems have unsafe corefiles.

[+] quotemstr|11 years ago|reply
> cannot call abort() because some systems have unsafe corefiles.

This logic seems specious. It's not the job of a library to solve that problem. If a system has crash dump collection configured insecurely, the problem is going to extend well past the SSL library.

> This can fail if the process is inside a chroot or if file descriptors are exhausted.

The right solution is to pre-open the file descriptor. SSL_library_init can fail. Do it there.

[+] agwa|11 years ago|reply
The comments don't justify why going to the sketchy entropy is better than SIGKILLing the process, except with:

> This code path exists to bring light to the issue that Linux does not provide a failsafe API for entropy collection.

Trying to make a point about Linux doesn't seem like a very good reason to me.

[+] lcampbell|11 years ago|reply
> Cannot call abort() because some systems have unsafe corefiles.

Huh, FreeBSD has MAP_NOCORE, which allows the program to map pages that will explicitly not be included in the core file. I never realized that this was a FreeBSD-specific extension (added in 2007?).

I'm really surprised other platforms haven't adopted it, though I surmise there's a good technical reason or two. (EDIT: or maybe there's similar functionality via another API? I haven't been able to turn up anything).

[+] blahrf|11 years ago|reply
You can't simply seed it before a chroot. Look at the code: chacha adds entropy periodically and folds it in, so you need entropy inside the chroot. The author should probably read 10 lines below the same code he posted in the article. While I'd love to see a solution for this particular contrived example, in the much more common use cases it actually is more secure than OpenSSL's approach. Especially so if your kernel has sysctl in it.
[+] agwa|11 years ago|reply
> chacha adds entropy periodically and folds it in. You need entropy in the chroot.

If that's the case then the fix will not be as simple as I envisioned it. Still, the point stands that LibreSSL should allow you to initialize the PRNG once, before you chroot, so that you can use the PRNG safely once inside the chroot. This could be accomplished by keeping a file descriptor to /dev/urandom open.

[+] cnst|11 years ago|reply
What agwa wants from LibreSSL is to behave in every little bit exactly as OpenSSL does, even though OpenSSL itself is a complete and utter mess.

OpenSSL allowed developers to interfere with RNG freely, so LibreSSL must do that, too? [Even if times have changed?](http://permalink.gmane.org/gmane.os.openbsd.cvs/129485)

Well, you can't really go about improving and cleaning up the library if you have to keep all the old bugs and the whole crusty API around.

It's inconceivable to expect LibreSSL to be both better than OpenSSL, yet to have the exact same API and the exact same set of bugs and nuances as the original OpenSSL.

LibreSSL is meant to be a simple-enough replacement for OpenSSL for most modern software out there (http://ports.su/) — possibly with some minimal patching (http://permalink.gmane.org/gmane.os.openbsd.tech/37599) of some of the outside software — and not a one-to-one drop-in replacement for random edge cases that depend on random and deprecated OpenSSL cruft.

[+] amalcon|11 years ago|reply
Note that the usual post-fork catch-all security advice (having the child process exec() to wipe process state, thereby making a state leak really hard) solves the fork safety problem by giving the child a whole new PRNG instance, but actually makes it harder to solve the chroot safety problem.

There are various tricks to work around that and get a limited number of bytes from /dev/urandom into the chroot jail (such as writing them to a regular file and secure-erasing that file when finished).

[+] agwa|11 years ago|reply
How about passing the /dev/urandom file descriptor to the new process? That seems like the most robust solution to me.
[+] nwmcsween|11 years ago|reply
This is just due to ignorance: Linux provides an AT_RANDOM auxv entry at process creation that could be used to seed the PRNG.
[+] enjolras|11 years ago|reply
Look at the code. AT_RANDOM is used, when it's available, in the fallback function. For some reason, the devs don't seem to trust it much, according to the comment.
[+] blahrf|11 years ago|reply
Even though it looks like it won't get called, I'm wondering how bad the voodoo is. Has anyone looked at what it is spitting into that hash function? How predictable are those clocks as they change between the memory fetches? Will Linux have predictable memory access times where those pages land?
[+] darthsitius|11 years ago|reply
How can 2 processes have the same PID, even if it is grandparent and grandchild? When I try killing a process using the PID, how does the kernel know which one to kill?
[+] ryan-c|11 years ago|reply
Assume a fairly busy system

* Original process with PID 17519

* PID 17519 forks producing a new process with PID 26606

* PID 17519 produces some "random" bytes then exits

* PID 26606 forks producing a new process with the now unused PID 17519

* New PID 17519 produces some "random" bytes, which will be the same as the "random" bytes produced by original PID 17519, causing a raptor to attack the user.

[+] _kst_|11 years ago|reply
They can't have the same PID at the same time.

I think the scenario here is that process X forks, creating process Y, then process X terminates, then process Y repeatedly forks until it creates a process Z with the same PID as X.

[+] quotemstr|11 years ago|reply
> How can 2 processes have the same PID,

PID namespaces?

[+] bydo|11 years ago|reply
Schadenfreude?
[+] talideon|11 years ago|reply
Hardly. LibreSSL works just fine on OpenBSD and doesn't have this issue. Portability to Linux is a secondary concern, and this is only an initial stab at the portability layer for Linux.
[+] tormeh|11 years ago|reply
Can't the LibreSSL process just reseed whenever it is started? I guess forks don't actually copy the program counter so they'll have to go through main, right?
[+] cjg_|11 years ago|reply
It does copy the PC. Actually, fork just returns twice, once in the parent and once in the child, with different return values: the PID of the child to the parent, and zero to the child.

See http://linux.die.net/man/2/fork for more details

[+] Raphael_Amiard|11 years ago|reply
> I guess forks don't actually copy the program counter so they'll have to go through main, right?

This is the way the fork syscall works on all Unices: the child will start execution right after the fork system call.

[+] agwa|11 years ago|reply
Nope, forks actually do copy the program counter.
[+] meshko|11 years ago|reply
It is not entirely clear what the risk is in this strange scenario involving a grandchild process and PIDs wrapping around in an alarmingly quick way.
[+] pavpanchekha|11 years ago|reply
The risk is that in some situations (it should not matter how often; the environment might be somewhat attacker-controlled) two processes produce identical random numbers. This is bad, because this breaks the assumption that random numbers are independent. A program may reasonably fork into two processes, one which uses random numbers to generate RSA keys and one which outputs random numbers to anyone who wants them. LibreSSL's flaw may allow these two processes to destroy each other's security guarantee.
[+] _pmf_|11 years ago|reply
Don't rain on their parade. OpenSSL has been determined to be a laughing stock by super-informed internet forum people, and we need to keep up pretending that a rewrite of a major piece of internet infrastructure is feasible and makes sense.
[+] jodiscr|11 years ago|reply
Given things like the Debian OpenSSL fiasco and Heartbleed, can we honestly put as much faith into open source crypto as its well-funded proprietary counterparts?

I honestly prefer open source and recognize the problem the author points out as a clearly significant problem - as well as the benefits of LibreSSL - but I'm just not convinced there are enough eyeballs looking at open source crypto.

[+] pling|11 years ago|reply
Sorry but that's just an idiotic assertion.

Closed source proprietary crypto, you just don't know who wrote it, who audited it and who backdoored it and who knows of any flaws in it.

Open source crypto, it's there. Go read the source. Anyone can and it's open for audit.

There aren't enough eyeballs I agree but there are infinitely more trustworthy people looking at it than closed source.

[+] AlyssaRowan|11 years ago|reply
"Many eyes make bugs shallow", goes the saying; but it does require that people actually look at it - and with the state OpenSSL is in, it's clear people took it for granted for years. I'm as guilty of that as you. It was ugly and crufty, and I'd assumed and hoped that it'd been thoroughly reviewed and was the way it was because it was being conservative with changes; turns out no, actually it's a giant hairball which they're now shaving, BoringSSL is trimming, and LibReSSL is gleefully taking a combine harvester to!

But reflect on this: we're looking at it now. There are more eyeballs looking at open-source crypto than closed source crypto. Reflect on that for a moment, and on the RSA BSAFE/NSA 'enabling' and the like, and remember that being well-funded didn't stop Apple's source-available implementation from going directly to fail.

I wonder, for example, what's really under the hood of, say, Microsoft SChannel/CAPI/CNG? I'm a reverse-engineer (which means I don't need no stinking source code, given a large enough supply of chocolate) so I may look in detail when I get a large enough patch of free time. I've heard it's not as bad as it could be… but I know on this subject, for example, it ships Dual_EC_DRBG as an alternative in CNG (but uses CTR_DRBG by default from Vista SP1 onwards, thank goodness). The old RtlGenRandom wasn't too great, I know that much.

[+] yellowapple|11 years ago|reply
Transparency is a dependency of trust. The "well-funded proprietary counterparts" are non-transparent (per the definition of "proprietary software"), and therefore are untrustworthy.
[+] wbl|11 years ago|reply
Ever hear of BSAFE? They took a million dollars from the NSA to implant a backdoor. How do you evaluate code you cannot see?