> Since the number of times it would be able to flip the bit changes due to random fluctuations in time due to context switching of processes, this generates an arguably truly random bit (I would love to see a PoC that shows otherwise, however).
This wouldn't be truly random. It's just using the timing jitter introduced by context switching to introduce entropy, similar to many other pRNGs that use system entropy, mouse movements, etc. in order to seed the pRNG.
A pRNG is 'p' because, if you know the conditions used, you can deterministically predict the outcome. The difficulty of recreating those conditions has nothing to do with being truly random.
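For concreteness, the kind of loop being discussed looks something like this. This is a hypothetical sketch, not the project's actual code: the `get_bit` name, the 1 ms window, and the clock choice are all assumptions.

```c
#define _POSIX_C_SOURCE 199309L
#include <time.h>

/* Hypothetical sketch of the technique under discussion: flip a bit
 * as fast as possible for ~1 ms and return its final value. How many
 * flips fit in the window depends on scheduling jitter, which is the
 * only source of unpredictability here. Not a vetted entropy source. */
static int get_bit(void) {
    struct timespec start, now;
    int bit = 0;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        bit ^= 1;
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000000000L +
             (now.tv_nsec - start.tv_nsec) < 1000000L); /* ~1 ms window */
    return bit;
}
```

Whether the final parity of that flip count is actually unpredictable is exactly the question raised downthread.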
CSPRNGs require random seeds. They're the obvious right thing when you can generate 256 (or whatever your security level is) bits that are truly unpredictable, independent, and equally likely to be 0 or 1, but generating significantly more than that is hard. If you can't generate 256 truly unpredictable bits, the CSPRNG doesn't help you much. And if you can generate an unbounded number of them, it's not clear that there's a downside to just using the TRNG data (though there's certainly an argument that using a CSPRNG anyway is better-studied and more robust).
Whether this particular method of random bit generation is in fact sound is a different question entirely (and worth asking of all ways to generate those seeds, including the Linux kernel's entropy divination code).
It's expensive; it spins for a millisecond to obtain one bit.
Suppose this is running on a quiescent system with no interrupts or other tasks running. I can see it degenerating into deterministic behavior. Suppose the RTC counter and the CPU clock are from the same master clock, and the code is executing cleanly to the clock from on-chip caches.
Ultimately, the question is: is there really one bit of entropy from each call to get_bit(), and under what conditions?
Analysis of the output can only disprove randomness, never prove it. For example, without knowing the key it is impossible to condemn the deterministic output of AES-CTR (assuming the properties of AES hold).
The author really needs to cut back the grandiose claims. Projects like this are more a part of the learning process than something useful to others.
The big concern I have is how reliable this is on virtual machines. Just about all physical machines I'd want to use have high-quality, trustworthy-within-my-threat-model (i.e., "if the NSA wanted to attack my silicon, there's easier silicon for them to attack") hardware random number generators, and all physical machines I'd want to use pick up sufficient randomness from the kernel's entropy magic thing. But virtual machines often don't have access to the hardware RNG, and they don't have access to enough other hardware to populate entropy. It seems like this technique would be particularly risky there ... although I don't think anyone's published an attack on DakaRand yet, so maybe it's fine!
It's definitely based on my approach, but it's missing the concept a bit. The only way this approach gets entropy is if you cross two clocks at very different speeds, and get randomness from the mismatched tolerances. For example, using a computer's microsecond accurate clock to measure a human's 100 millisecond scale behavior yields bits, because we can't be microsecond accurate even if we try.
The bitflipping I was exploring involved matching the CPU clock (nanosecond scale) with the real time clock (millisecond scale). This of course has some risk because the OS can easily implement the latter with the former. And in fact, in this implementation, that's actually what happens -- he's measuring the number of bit flips at nanosecond accuracy. Output is distinguishable from PRNG, as seen elsewhere.
If I remember right somebody did break my "Defcon Challenge" with Firefox.
I'm really hesitant to even consider this without it being run through the Diehard tests[0], since, from my understanding, "true randomness" should be cryptographically secure if it's going to be used in a CSPRNG.
rng-tools includes a command for testing random number generator output. I tried this with a FreeBSD port from https://github.com/waitman/rngtest
Results aren't great:
rngtest: bits received from input: 300032
rngtest: FIPS 140-2 successes: 0
rngtest: FIPS 140-2 failures: 15
rngtest: FIPS 140-2(2001-10-10) Monobit: 15
rngtest: FIPS 140-2(2001-10-10) Poker: 15
rngtest: FIPS 140-2(2001-10-10) Runs: 15
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=236.690; avg=245.396; max=251.742)bits/s
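For context, the Monobit test that failed all 15 blocks is the simplest of the FIPS 140-2 battery: count the one bits in a 20,000-bit block and require the count to land strictly between 9,725 and 10,275 (the 2001-10-10 change-notice bounds rngtest reports). A sketch:

```c
/* FIPS 140-2 (2001-10-10 change notice) Monobit test: a 2500-byte
 * (20,000-bit) block passes if its population count is strictly
 * between 9725 and 10275. */
static int monobit_passes(const unsigned char block[2500]) {
    int ones = 0;
    for (int i = 0; i < 2500; i++)
        for (int b = 0; b < 8; b++)
            ones += (block[i] >> b) & 1;
    return ones > 9725 && ones < 10275;
}
```

An all-zero block obviously fails; a block of 0x55 bytes (alternating bits) has exactly 10,000 ones and passes. Failing this on 15 of 15 blocks means the output is heavily biased toward one bit value, not merely slightly imperfect.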
Assume a fixed bias: a coin flips 90% heads / 10% tails. To make it 50/50, you can demand either an HT to make a heads or a TH to make a tails. Since HT and TH have the same odds of occurring (0.9 × 0.1 = 0.1 × 0.9 = 0.09), you get a 50/50 chance.
What if get_bit happens to be deterministic on a particular machine? Then get_fair_bit would be stuck in an infinite loop. This can potentially happen when the CPU is so overloaded that executing the bit-flipping instruction takes longer than a millisecond.
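One defensive fix is to cap the retries and surface failure instead of hanging. This is a sketch: the toy sources and the cap value are illustrative, and a real caller would pass the project's get_bit.

```c
/* Toy sources for demonstration only. */
static int stuck_source(void) { return 1; }                    /* degenerate */
static int alternating_source(void) { static int s; return s ^= 1; }

/* Von Neumann debiasing with a retry cap: returns 0 or 1, or -1 if
 * the source produced equal pairs max_tries times in a row, which
 * suggests it has degenerated into deterministic output. */
static int get_fair_bit_bounded(int (*source)(void), int max_tries) {
    for (int i = 0; i < max_tries; i++) {
        int a = source();
        int b = source();
        if (a != b)
            return a; /* unequal pairs are equally likely either way */
    }
    return -1; /* source looks stuck; caller should treat this as an error */
}
```

Returning an error forces the caller to decide what a dead entropy source means for them, rather than silently blocking forever.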
Wouldn't an example of a truly random number generator be to open up a pseudo-random webpage (say, something like www.engadget.com/page/<pseudo_random_number>) and do a modulo count of the number of whitespace-separated words/tokens?
Typical round-robin time slices are on the order of 10 ms, so you would get a significant run of 'random' numbers with no context switch at all; combine that with analysis of the cycle counter (a counter running at the clock speed of your processor) and you would get a very non-random value.
Ego has infected this thread. We can't just say that this project is interesting. We have to talk about "expensive" computation. Why? What does "ambition" have to do with computer science?
jsonninja | 10 years ago
I know most of you take this stuff seriously in your code and rely on the well-known cryptographically secure random number generators: https://en.wikipedia.org/wiki/Cryptographically_secure_pseud...
DanielStraight | 10 years ago
I find it very unlikely this will be shown to be better than existing RNG solutions.
It's clever, but clever in the way that sleep sort is clever, at least until proven to be of actual benefit.
geofft | 10 years ago
See also Kaminsky's implementation of the same approach in pure JS: https://gist.github.com/PaulCapestany/6148566
and Ryan Finnie's implementation in Perl: http://www.finnie.org/2012/08/14/twuewand-2-0-released/
caffeinewriter | 10 years ago
[0]: https://en.wikipedia.org/wiki/Diehard_tests
daveguy | 10 years ago
http://www.phy.duke.edu/~rgb/General/dieharder.php
A relatively recent (2013) library implementation of the diehard tests plus additional tests.
Some things to note:
The OP implementation is going to be VERY dependent on OS/language/etc. For example, even if it works in C it won't necessarily work on the JVM. It probably will not work on a microcontroller.
gjmulhol | 10 years ago
It is a cool idea.
saganus | 10 years ago
https://www.random.org/
It even includes real-time statistics on the quality of the numbers.
jamesbowman | 10 years ago
https://en.wikipedia.org/wiki/Fair_coin
micaeked | 10 years ago
Stated another way, the algorithm for getting an unbiased bit from a biased coin is:
1. Flip the coin twice.
2. If it's both heads or both tails, discard result and go back to 1.
3. If it's heads then tails, result = 0
4. If it's tails then heads, result = 1
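Those four steps are the classic von Neumann extractor. A self-contained demo against a simulated 90/10 coin (the simulated coin and its bias value are illustrative only, standing in for a biased hardware source):

```c
#include <stdlib.h>

/* Simulated biased coin: heads (1) with probability ~0.9. */
static int biased_flip(void) {
    return (rand() % 10) < 9;
}

/* Von Neumann extractor, following the steps above:
 * flip twice; discard HH and TT; heads-then-tails -> 0,
 * tails-then-heads -> 1. Both surviving outcomes occur with
 * probability p*(1-p), so the result is unbiased regardless of p. */
static int unbiased_bit(void) {
    for (;;) {
        int first = biased_flip();
        int second = biased_flip();
        if (first != second)
            return second; /* HT (1 then 0) -> 0, TH (0 then 1) -> 1 */
    }
}
```

The cost is throughput: with p = 0.9, only 2 × 0.9 × 0.1 = 18% of pairs differ, so it takes roughly 11 raw flips per output bit. Crucially, the trick removes bias only; it does nothing for correlated flips.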
arno1 | 10 years ago
1. ("true"-random) 10 iterations; 32 random bytes | Result Avg Entropy: 4.82
2. ("true"-random) 100 iterations; 32 random bytes | Result Avg Entropy: 4.37
3. ("true"-random) 200 iterations; 32 random bytes | Result Avg Entropy: 4.35
1. (openssl) 10 iterations; 32 random bytes | Result Avg Entropy: 4.88
2. (openssl) 100 iterations; 32 random bytes | Result Avg Entropy: 4.87
3. (openssl) 200 iterations; 32 random bytes | Result Avg Entropy: 4.88
1. (/dev/urandom) 10 iterations; 32 random bytes | Result Avg Entropy: 4.82
2. (/dev/urandom) 100 iterations; 32 random bytes | Result Avg Entropy: 4.89
3. (/dev/urandom) 200 iterations; 32 random bytes | Result Avg Entropy: 4.88