
bzax | 2 years ago

I believe the entire point of the ergodicity question here is: if you apply this process n times, with n approaching infinity, the result obviously depends on which point of the n-times-iterated distribution you sample. But if you are allowed to exclude a set of vanishingly small measure, can you make a single concrete statement about what the process is doing without taking an expected value over the different outcomes?

And the answer is yes: with probability approaching 1 as n increases (i.e., excluding a portion of the distribution whose measure shrinks to 0), the random process matches a deterministic process described by "you lose 5% each round".
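To make that concrete, here is a minimal simulation sketch. It assumes the gamble is the standard coin flip between +50% and -40%; those numbers are my assumption, picked only because their geometric mean, sqrt(1.5 * 0.6) ≈ 0.9487, matches the "lose 5% each round" figure. Almost every long path decays at roughly that rate even though the ensemble average grows 5% per round:

    # Sketch under assumed numbers: a +50%/-40% coin-flip gamble, chosen
    # only because its geometric mean matches "lose 5% each round".
    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_rounds = 10_000, 1_000

    # Each path multiplies wealth by 1.5 or 0.6 per round, independently.
    factors = rng.choice([1.5, 0.6], size=(n_paths, n_rounds))
    per_round_log = np.log(factors).mean(axis=1)  # avg log growth per path

    # The ensemble average grows: E[r] = 0.5*1.5 + 0.5*0.6 = 1.05 per round.
    # Yet almost every individual path shrinks ~5% per round:
    print(np.exp(per_round_log.mean()))  # ~ sqrt(0.9) ~ 0.9487
    print((per_round_log < 0).mean())    # fraction of losing paths -> 1

The last line is the "probability approaching 1" part of the claim: the fraction of losing paths tends to 1 as n_rounds grows.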


jakell | 2 years ago

Great description of a framing I hadn’t considered before, thanks!

bzax | 2 years ago

I should admit I'm being very generous to Peters here. I came to the conclusion that this is what he means only because the math of ergodicity (https://en.wikipedia.org/wiki/Ergodic_theory#Ergodic_theorem...) talks a lot about "except on a set of measure zero". He provides no explanation of how he moves from "the time average of values in a particular run of the process" (which is what ergodicity is about) to "what a typical round of the process does, with probability 1" (which is perhaps what someone computing a utility function cares about).
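For concreteness, here is how that move can be made precise under the same assumed +50%/-40% example (again my numbers, not Peters'):

    % Ensemble average, what an expected-value computation reports:
    \mathbb{E}[r] = \tfrac{1}{2}(1.5) + \tfrac{1}{2}(0.6) = 1.05
    \quad \text{(+5\% per round)}

    % Time average along a single path; the strong law of large numbers
    % gives the almost-sure limit:
    \Big(\tfrac{W_n}{W_0}\Big)^{1/n}
      = \exp\!\Big(\tfrac{1}{n}\sum_{i=1}^{n}\log r_i\Big)
      \;\xrightarrow{\text{a.s.}}\;
      e^{\mathbb{E}[\log r]} = \sqrt{1.5 \times 0.6} \approx 0.9487
    \quad \text{(about $-5$\% per round)}

The strong law of large numbers is what bridges the gap: the time average in any one run converges, outside a set of measure zero, to the same constant for every run, which is exactly the "with probability 1" statement.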

I asked a friend who is an econ professor, "Why does this Peters guy explain this so poorly?", and his response was, more or less: yes, all of economics has been wondering that too, ever since he first published his Nature Physics paper on this a decade ago.