antics|8 months ago

So, the problem in its original framing is: can we simulate a fair coin flip with an unfair coin? As stated, I do actually think the von Neumann answer ("this is actually technically possible") is fair, in that if I wanted a solution in O(1), I think I should have to say so ahead of time.

I suppose we'll have to disagree about whether this is incredible. The response shows (1) that this can be done at all, and (2) that the probability of not yet having an answer decays exponentially: not asymptotically, but for finite n. Incredible! You don't see finite-sample bounds very often! If you don't think that's incredible, I invite you to ask a room full of people, even with the qualifications you deem appropriate, e.g., "the solution does not need to be constant-time", or whatever.

zmgsabst|8 months ago

The difference is deterministic termination:

There is no N such that your algorithm is guaranteed to terminate within N flips, even for a single bit. The worst-case complexity (100% T) is unbounded; only the average case is reasonable.

To borrow your phrasing:

If you only have a stochastic algorithm, I think you should have to say so.

antics|8 months ago

Kind of, kind of not. I think it's fair to say a lot of theory people would argue that it is effectively guaranteed to terminate, for any reasonable definition of "guaranteed."

As I mention elsewhere, for a coin where P(H)=0.75, with 800 flips the probability of not having received an answer is less than 1 divided by the number of atoms in the observable universe. While this is not 0, neither is the probability of the atoms of the coin lining up just right that it falls through the table, producing neither H nor T. Theoretically and practically we do generally consider these kinds of things immaterial to the problem at hand, perhaps papering over them with terms of art like "with high/extremely high/arbitrarily high probability". They all mean roughly the same thing: this algorithm is "essentially guaranteed" to terminate after n flips.
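
For concreteness, the trick itself is only a few lines. A minimal sketch in Python (biased_flip and fair_bit are my own names, just for illustration):

    import random

    def biased_flip(p):
        # One flip of a coin with P(heads) = p.
        return random.random() < p

    def fair_bit(p):
        # von Neumann's trick: flip twice; HT -> heads, TH -> tails,
        # HH or TT -> discard and repeat. HT and TH each occur with
        # probability p*(1-p), so the output is exactly fair for any
        # 0 < p < 1, no matter how biased the coin.
        while True:
            a, b = biased_flip(p), biased_flip(p)
            if a != b:
                return a

Each pass through the loop succeeds with probability 2p(1-p), which is where the exponential decay comes from.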

geysersam|8 months ago

You can relax the constraints to bound the cost.

If I asked someone gambling whether it matters that this coin is biased by 0.0001%, they would probably say no. And just like that, the algorithm is now guaranteed to terminate: cap the number of rounds, and accept a residual bias that small.
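
A sketch of what I mean (max_rounds is an illustrative knob, not anything standard): run the von Neumann trick for a bounded number of rounds, then fall back to a raw flip.

    import random

    def nearly_fair_bit(p, max_rounds=100):
        # Bounded-time variant of the von Neumann trick: give up after
        # max_rounds and return a raw (biased) flip. Always terminates
        # within 2*max_rounds + 1 flips. The output bias is at most
        # (1 - 2*p*(1-p))**max_rounds * abs(p - 0.5), which can be made
        # smaller than any epsilon for any 0 < p < 1.
        for _ in range(max_rounds):
            a, b = (random.random() < p), (random.random() < p)
            if a != b:
                return a
        return random.random() < p  # residual bias is exponentially small

For p = 0.75 and max_rounds = 100, the residual bias is below 10^-20, far smaller than 0.0001%.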

It's a bit unfair to bring up the (100% T) case, because then you no longer have a source of randomness at all; the assumption that you have one has been dropped, and, as expected, you can't make randomness from nothing.

Dylan16807|8 months ago

> the worst case (100% T)

You're no longer randomly flipping a coin to get heads or tails at that point. There's a good argument that this is not within the original scenario.

thaumasiotes|8 months ago

Note that the solution dataflow objects to already runs in constant expected time. There isn't room for a complexity improvement there.
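
A quick sanity check, as a sketch: each round costs 2 flips and succeeds with probability 2p(1-p), so the expected total is 2/(2p(1-p)) = 1/(p(1-p)) flips, a constant for any fixed bias.

    import random

    def flips_until_answer(p):
        # Count the biased flips consumed by one run of the von Neumann trick.
        n = 0
        while True:
            a, b = (random.random() < p), (random.random() < p)
            n += 2
            if a != b:
                return n

    p = 0.75
    trials = 100_000
    print(sum(flips_until_answer(p) for _ in range(trials)) / trials)  # ~5.33
    print(1 / (p * (1 - p)))  # 5.333..., the analytic value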

dataflow|8 months ago

What do you mean by "exponentially, not asymptotically, but for finite n"? Exponential is by definition asymptotic and continues infinitely, no?

And to be clear, I'm not disagreeing (or agreeing) with the result being inherently incredible. I'm just saying it's not an incredible example of simulating a fair coin, because it just... isn't doing that. As an analogy: communicating with the Voyager spacecraft might be incredible, but it's not an incredible example of infinitely fast communication... because it just isn't. Telling me to go ask a room full of people whether they find either of these incredible is missing the point.

antics|8 months ago

> What do you mean by "exponentially, not asymptotically, but for finite n"? Exponential is by definition asymptotic and continues infinitely, no?

In statistics, we generally separate asymptotic bounds, which make guarantees only in the limit, from finite-sample bounds like Chernoff's, which decay exponentially not only in the limit but also at each particular finite n. The second kind is much rarer and much more powerful.

This is an important distinction here: we are not talking about an asymptotic limit of coin flips. Every round of flips multiplies the probability of not yet having an answer by a constant factor less than 1 (for P(H)=0.75, the factor is 1 - 2*0.75*0.25 = 0.625), so the failure probability after n rounds is exactly 0.625^n.
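
To make the finite-n magnitudes concrete (a quick numerical sketch, with P(H)=0.75 as above):

    # P(no answer after n rounds) = 0.625**n, exact at every finite n
    for n in (10, 100, 300, 400):
        print(n, 0.625 ** n)
    # 10  -> ~9.1e-03
    # 100 -> ~3.9e-21
    # 300 -> ~5.8e-62
    # 400 -> ~2.2e-82

The n = 400 row (800 flips) is where the bound drops past one over the number of atoms in the observable universe.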

> I'm just saying it's not an incredible example of simulating a fair coin, because it just... isn't doing that. As an analogy: communicating with the Voyager spacecraft might be incredible, but it's not an incredible example of infinitely fast communication... because it just isn't.

Ok, no problem, it's up to you to decide if you want to use your own definition here.

But, FYI, in computability theory, it is definitely fair game and very common to "simulate" some computation with something that is either much more memory-intensive or compute-intensive, in either the deterministic or probabilistic case. For example, it's pretty common to add or remove memory and see whether the new "simulated" routine takes more or less compute power than the "standard" algorithm, and that is kind of what people are up to with the PSPACE stuff that is going on right now.

Using that lens (and this is entirely off the cuff, but after ten seconds of thinking I believe it is mostly correct), this algorithm "simulates" a fair coin toss using 1 extra bit of memory and O(1/(q(1-q))) expected compute, where q is the coin's probability of heads. You can drive that cost to infinity by letting q approach 0 or 1, but you can do the analogous thing to a sorting algorithm too (and that is why both sorting and this algorithm have DoS implications). Is this the best we can do? Well, that's the problem we study with randomness extractors: given this imperfect source of randomness, can we extract perfect randomness out of it at all?
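
To put rough numbers on that cost term (a sketch under the same assumptions):

    # Expected biased flips per fair output bit: 1/(q*(1-q)),
    # where q is the coin's probability of heads.
    for q in (0.5, 0.75, 0.9, 0.99, 0.999):
        print(q, 1 / (q * (1 - q)))
    # 0.5   -> 4.0
    # 0.75  -> ~5.3
    # 0.9   -> ~11.1
    # 0.99  -> ~101
    # 0.999 -> ~1001

The cost is tame for any moderate bias and only blows up as the coin approaches determinism, which is exactly the DoS-shaped tail.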