What would happen if you did rejection sampling one bit at a time?
Find the largest multiple of your limit which is less than W. We want to find a random number less than this multiple, because then we can just divide down to get an unbiased result. Maybe this is already implied in how these algorithms are set up, not sure.
Generate a candidate random number. If it is smaller than the multiplied limit, you are done. If not, double it, and add a random bit. Repeat until success.
To supply the random bits, generate a random number, and use it up one bit at a time. If you need more, generate another random number.
I suppose this doesn't work, because the second-most-significant bit of a rejected candidate isn't unbiased. If the multiplied limit is 3/4 of W, then we only reject candidates where the top two bits are both 1!
Can you rescue this by discarding more than one bit when you reject?
Probably not. Think about the case where the multiplied limit is W-1. The only candidate you would reject is all 1s. You'd have to discard all the bits for the next candidate to be unbiased.

Oh well.
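For anyone who wants to see the bias concretely, here's a quick Python simulation of the shift-in-one-bit scheme using the 3/4-of-W example from above (the toy word size and parameters are mine, picked for illustration):

```python
import random
from collections import Counter

W_BITS = 8
W = 1 << W_BITS        # toy "word size" W = 256
n = 3
m = 3 * W // 4         # multiplied limit at 3/4 of W (192 = 64 * 3)

def bitwise_reject(rng):
    c = rng.randrange(W)                      # initial candidate
    while c >= m:                             # rejected: double it, add one random bit
        c = ((c << 1) | rng.randrange(2)) % W
    return c // (m // n)                      # divide down to [0, n)

rng = random.Random(1)
counts = Counter(bitwise_reject(rng) for _ in range(300_000))
print(counts)
```

Every rejected candidate has its top two bits set, so after the shift the new top bit is always 1: every retry lands in [W/2, W), and the accepted retries all fall into the top bucket, which ends up with roughly twice the probability of the other two.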
TFA's "really divisionless" algorithm is basically what you suggest, except it adds 32 or 64 bits of randomness in each iteration, instead of 1 random bit.
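Roughly, that algorithm computes floor(limit * x) for a uniform real x in [0,1), revealing x one word at a time and drawing more words only while a later carry could still change the integer part. Here's a Python sketch of the idea as I understand it (my paraphrase, not the article's code; Python's big ints let the fraction grow, where a C version would renormalize to a single word each round):

```python
import random

def bounded(limit, next_word, bits=32):
    # floor(limit * x) for a uniform real x in [0, 1), revealed `bits` at a time.
    W = 1 << bits
    p = limit * next_word()
    result, frac = p >> bits, p % W   # integer part so far; fraction over `scale`
    scale = W
    # The unseen tail of x adds less than limit/scale to the fraction, so a
    # carry into `result` is only possible while frac + limit > scale.
    while frac + limit > scale:
        p = frac * W + limit * next_word()   # fold in `bits` more random bits
        scale *= W
        if p >= scale:                       # carry is now certain
            result += 1
            break
        frac = p                             # still undecided; keep refining
    return result

rng = random.Random(42)
draw = lambda: rng.getrandbits(8)
samples = [bounded(6, draw, bits=8) for _ in range(60_000)]
```

Most of the time the loop body never runs at all: the first multiply decides the result, so no division or modulo is needed on the common path.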
I don't think I've ever seen anyone use division to get a ranged random number. I copied all of the examples I saw, and every one of them multiplied n by a randomized floating point number in the interval [0,1) and took the whole number part.
The only time I hear about 'other ways' is here on Hacker News.
edit: range notation
Those examples are all relatively low-level. Many high-level languages provide a floating point interface on top of such an underlying integer pseudo-random algorithm. This is why so much high-level code uses multiplication to get a ranged value.
This post goes into some detail about how the V8 JavaScript engine creates a [0, 1) floating point pseudo random value from its underlying integer pseudo-random algorithm: https://v8.dev/blog/math-random
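For a concrete picture, one common construction (a standard trick used by several engines and libraries; see the linked post for V8's exact code) is to keep the top 53 of 64 random bits, since a double has 53 significant bits:

```python
import random

def to_unit_float(u64):
    # Keep the top 53 bits of a 64-bit word (a double's full precision),
    # then scale into [0, 1).
    return (u64 >> 11) * 2.0 ** -53

print(to_unit_float(random.getrandbits(64)))
```

The largest possible input maps to 1 - 2^-53, so the result is always strictly below 1.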
The implementation you describe would be slightly biased (for any n that is not an integer power of 2), due to the finite precision of floats. This makes it unsuitable for certain applications.
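You can see this bias exactly without touching real doubles by shrinking the precision: in a simplified model where the float is a P-bit fixed-point grid, enumerate every representable k/2^P in [0,1) and count where floor(n * x) lands. Whenever n doesn't divide 2^P, the buckets can't all be equal:

```python
from collections import Counter

P = 16       # pretend our "floats" have 16 bits of precision
n = 6
# floor(n * k / 2**P) over every representable k/2**P in [0, 1)
counts = Counter((n * k) >> P for k in range(1 << P))
print(sorted(counts.values()))
# 2**16 is not divisible by 6, so some buckets get one extra value
```

Real doubles have extra precision near 0, which changes the details but not the conclusion: the grid of representable inputs can't split evenly into n buckets unless n is a power of 2.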
This seems like it was a fun project. I'd caution people not to take it as a "better" way to generate cryptographically secure random numbers. It's just a method that is used, which is good enough for lots of instances that need a randomish number.
> I copied all of the examples I saw, and every one of them multiplied n by a randomized floating point number in the interval of (0,1] and take the whole number part.
Really? I would have expected at least a 20:1 ratio in favor of [0,1) over (0,1].
Most obviously, (0,1] can't produce a uniform distribution, because the input value 1 produces its own unique output value. Whereas the input value 0 shares its output value with the rest of the interval (0, 1/n).
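And with the usual floor(n * x) recipe, that unique output for 1 isn't even in range. A tiny check (n = 6 is just an example):

```python
n = 6
outputs = {int(n * x) for x in (0.0, 0.1, 0.9999, 1.0)}
# 1.0 is the only input that escapes [0, n); everything in [0, 1/n)
# collapses into bucket 0
print(outputs)
```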
Most JavaScript codebases that I have read over use the division-based method; it's rare to find one that uses a random floating point multiplied by the limit (outside of Math.random(), I'm talking about crypto.getRandomBytes() implementations).
I think this is likely due to the nature of JavaScript’s whacky integer and float implementations.
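For reference, the shape usually meant by "division based" is modulo plus rejection. Sketched in Python rather than JS (the helper name and `word_bits` parameter are mine):

```python
import secrets

def randrange_rejection(n, word_bits=32):
    # Modulo with rejection: discard draws from the incomplete bucket at
    # the top of the word's range so every residue is equally likely.
    W = 1 << word_bits
    limit = W - (W % n)        # largest multiple of n that fits in a word
    while True:
        r = secrets.randbits(word_bits)
        if r < limit:
            return r % n

print([randrange_rejection(6) for _ in range(10)])
```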
I'm curious what field (of programming) these examples are from. The reason I ask is that I cannot recall a single rng I've looked at that uses floating point. Must be a different use case.