
Simulating and Visualising the Central Limit Theorem

161 points | gjf | 6 months ago | blog.foletta.net

63 comments


jethkl|6 months ago

There is an analogue of the CLT for extreme values: the Fisher–Tippett–Gnedenko theorem. If the properly normalized maximum of an i.i.d. sample converges at all, the limit must be a Gumbel, Fréchet, or Weibull distribution, unified as the Generalized Extreme Value distribution. Unlike the CLT, whose assumptions (in my experience) rarely hold in practice, this result is extremely general, and it underpins methods like wavelet thresholding and signal denoising. It's easy to demonstrate with a quick simulation.
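For example, a minimal sketch of that simulation in R (my own, using Exp(1) data, for which the normalizing constants are simple: subtract log(n) from the maximum of n draws and you should land close to a standard Gumbel):

  # Maxima of n iid Exp(1) draws, recentered by log(n), approach the
  # standard Gumbel distribution, whose CDF is exp(-exp(-x)).
  set.seed(42)
  n <- 1000       # draws per maximum
  reps <- 10000   # number of maxima
  z <- replicate(reps, max(rexp(n))) - log(n)
  plot(ecdf(z), main = "Recentered maxima vs standard Gumbel CDF")
  curve(exp(-exp(-x)), from = -2, to = 8, add = TRUE, col = "red")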

kqr|6 months ago

There's also a more conservative rule, similar in spirit to the CLT, that works off the definition of variance and thus rests on no assumption other than the existence of variance. Chebyshev's inequality tells us that the probability that any sample lands more than k standard deviations away from the mean is at most 1/k².

In other words, it is possible (given sufficiently weird distributions) that not a single sample lands inside one standard deviation, but 75% of them must be inside two standard deviations, 88% inside three standard deviations, and so on.

There's also a one-sided version of it (Cantelli's inequality) which bounds the probability that a sample lies more than k standard deviations above the mean by 1/(1+k²), meaning at least 50% of samples must be less than one standard deviation above the mean, 80% less than two standard deviations above, etc.

Think of this during the next financial crisis when bank people no doubt will say they encountered "six sigma daily movements which should happen only once every hundred million years!!" or whatever. According to the CLT, sure, but for sufficiently odd distributions the Cantelli bound might be a more useful guide, and it says six sigma daily movements could happen as often as roughly every 37 days.
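A quick empirical check of the Chebyshev bound on a deliberately nasty, heavily skewed distribution, sketched in R (the lognormal here is purely my example):

  # Chebyshev: P(|X - mu| > k*sigma) <= 1/k^2 for any distribution with finite
  # variance. Check it empirically on a skewed lognormal.
  set.seed(1)
  x <- rlnorm(1e6, meanlog = 0, sdlog = 2)
  m <- mean(x); s <- sd(x)
  for (k in 2:4) {
    emp <- mean(abs(x - m) > k * s)
    cat(sprintf("k=%d  empirical=%.4f  Chebyshev bound=%.4f\n", emp = emp, k, emp, 1 / k^2))
  }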

gus_massa|6 months ago

> It’s very subjective, but I think the uniform starts looking reasonably good at a sample size of 8. The exponential however takes much longer to converge to a normal.

That's a good observation. The main idea behind the proof of the Central Limit Theorem is to take the Fourier transform (the characteristic function), operate on it, and then transform back. After normalization, the distribution of the standardized sum of N variables comes out as something like

  Normal(X) + 1/sqrt(N) * "Skewness" * Something(X) + 1/N * (terms I don't remember) + ...
Where "Skewness" is a number defined in https://en.wikipedia.org/wiki/Skewness

The uniform distribution is symmetric, so skewness = 0 and the leading correction decreases like 1/N.

The exponential distribution is very asymmetric and its skewness != 0, so the main correction is of order 1/sqrt(N) and takes longer to disappear.
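A rough simulation along these lines (my own sketch, not the article's code): the sample skewness of the standardized sum shrinks roughly like skewness/sqrt(N), so it starts at zero for the uniform but not for the exponential.

  # Skewness of the standardized sum of N iid draws decays roughly like
  # skew/sqrt(N), so the symmetric uniform looks normal much sooner.
  set.seed(7)
  sum_skew <- function(rng, N, reps = 20000) {
    s <- replicate(reps, sum(rng(N)))
    z <- (s - mean(s)) / sd(s)
    mean(z^3)                  # crude sample skewness of the standardized sum
  }
  for (N in c(2, 8, 32)) {
    cat(sprintf("N=%2d  uniform: %+.3f   exponential: %+.3f\n",
                N, sum_skew(runif, N), sum_skew(rexp, N)))
  }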

niemandhier|6 months ago

Highly entertaining! Here's a little fun fact: there exists a generalisation of the central limit theorem for distributions without find out variance.

For some reason this is much less well known, although the implications are vast. Via the detour of stable distributions and limiting distributions, this generalised central limit theorem plays an important role in the emergence of power laws in physics.
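A quick way to see it (my own sketch, using the Cauchy as the textbook infinite-variance case): no matter how many terms you add, the properly scaled sum stays Cauchy instead of turning normal.

  # Sums of iid Cauchy draws, scaled by 1/N rather than 1/sqrt(N), are again
  # Cauchy: a stable law with alpha = 1, so the classical CLT never kicks in.
  set.seed(5)
  N <- 1000
  z <- replicate(20000, sum(rcauchy(N)) / N)
  probs <- seq(0.01, 0.99, by = 0.001)
  qqplot(qcauchy(probs), quantile(z, probs),
         xlab = "Cauchy quantiles", ylab = "scaled-sum quantiles")
  abline(0, 1, col = "red")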

viscousviolin|6 months ago

3blue1brown has a great series of videos on the central limit theorem, and it makes me wish there were something covering the generalised form in a similar format. I have a textbook on my reading list that covers it; unfortunately I can't seem to find it or the title right now. (edit: it's "The Fundamentals of Heavy Tails" by Nair, Wierman, and Zwart from 2022)

Do you have any good sources for the physics angle?

nextos|6 months ago

Yes, came here to say the same thing. Telling people that the CLT makes strong assumptions is important.

Otherwise, they might end up underestimating rare events, with potentially catastrophic consequences. There are also CLTs for product and max operators, aside from the sum.
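The product version is easy to see in a couple of lines (my own sketch, not from the book): take logs and the ordinary CLT applies, so products of many positive iid factors come out roughly lognormal.

  # The product of many positive iid factors is approximately lognormal,
  # because the log of the product is a sum, to which the usual CLT applies.
  set.seed(9)
  prods <- replicate(20000, prod(runif(50, 0.5, 1.5)))
  qqnorm(log(prods), main = "log of products vs normal quantiles")
  qqline(log(prods))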

The Fundamentals of Heavy Tails: Properties, Emergence, and Estimation discusses these topics in a rigorous way, but without excessive mathematics. See: https://adamwierman.com/book

kgwgk|6 months ago

> find out

Finite?

kqr|6 months ago

This is a very neat illustration, but I want to leave a reminder that when we cherry-pick well-behaved distributions for illustrating the CLT, people get unrealistic expectations of what it means: https://entropicthoughts.com/it-takes-long-to-become-gaussia...

mturmon|6 months ago

The article you link is not using the CLT correctly.

The CLT gives a result about a recentered and rescaled version of the sum of iid variates. CLT does not give a result about the sum itself, and the article is invoking such a result in the “files” and “lakes” examples.

I’m aware that it can appear that CLT does say something about the sum itself. The normal distribution of the recentered/rescaled sum can be translated into a distribution pertaining to the sum itself, due to the closure of Normals under linear transformation. But the limiting arguments don’t work any more.

What I mean by that statement: in the CLT, the errors of the distributional approximation go to zero as N gets large. For the sum, of course the error will not go to zero - the sum itself is diverging as N grows, and so is its distribution. (The point of centering and rescaling is to establish a non-diverging limit distribution.)

So for instance, the third central moment of the Gaussian is zero. But the third central moment of a sum of N iid exponentials will diverge quickly with N (it’s a gamma with shape parameter N). This third-moment divergence will happen for any base distribution with non-zero skew.

The above points out another fact about the CLT: it does not say anything about the tails of the limit distribution. Just about the core. So CLT does not help with large deviations or very low-probability events. This is another reason the post is mistaken, which you can see in the “files” example where it talks about the upper tail of the sum. The CLT does not apply there.
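Since the sum of N iid Exp(1) variables is exactly Gamma(N, 1), the tail mismatch is easy to quantify. A small sketch:

  # Normal approximation to the sum of N iid Exp(1) draws: fine in the core,
  # badly off in the far tail. The exact distribution of the sum is Gamma(N, 1).
  N <- 30
  q <- N + 6 * sqrt(N)   # about "six sigmas" above the mean of the sum
  exact  <- pgamma(q, shape = N, rate = 1, lower.tail = FALSE)
  approx <- pnorm(q, mean = N, sd = sqrt(N), lower.tail = FALSE)
  cat(sprintf("exact tail = %.3g   normal approx = %.3g   ratio = %.0f\n",
              exact, approx, exact / approx))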

antognini|6 months ago

There's an interesting extension of the Central Limit Theorem called the Edgeworth series. If you have a large but finite sample, the distribution of the standardized sum will be approximately Gaussian, but it will deviate from the Gaussian in a predictable way described by Hermite polynomials.

https://en.wikipedia.org/wiki/Edgeworth_series
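A one-term version of the correction is short enough to sketch (my own example with Exp(1), whose skewness is 2; He3(x) = x^3 - 3x is the relevant Hermite polynomial):

  # One-term Edgeworth correction for the standardized sum of n iid Exp(1) draws:
  # density ~ dnorm(x) * (1 + skew/(6*sqrt(n)) * He3(x)), with He3(x) = x^3 - 3x.
  set.seed(3)
  n <- 10; skew <- 2
  edgeworth <- function(x) dnorm(x) * (1 + skew / (6 * sqrt(n)) * (x^3 - 3 * x))
  z <- replicate(50000, (sum(rexp(n)) - n) / sqrt(n))
  hist(z, breaks = 80, freq = FALSE, main = "Standardized sum: Edgeworth vs normal")
  curve(edgeworth(x), add = TRUE, col = "red")
  curve(dnorm(x), add = TRUE, col = "blue", lty = 2)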

kazinator|6 months ago

The intuition behind it is that when we take batches of samples from some arbitrarily shaped distribution, and we summarize the information by looking at the mean values of those batches, we find that the distribution of those mean values moves away from the arbitrarily shaped distribution. The larger the batches, the closer those means come to a normal distribution.

In other words, the means of large batches of samples from some funny shaped distribution themselves constitute a sequence of numbers, and that sequence follows a normal distribution, or comes closer and closer to one the larger the batches are.

This observation legitimizes our use of statistical inference tools derived from the normal distribution, like confidence intervals, provided we are working with large enough batches of samples.
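That intuition is only a few lines of simulation (a sketch of my own, using an exponential as the funny shaped distribution):

  # Means of batches drawn from a skewed distribution look more and more
  # normal as the batch size grows.
  set.seed(11)
  batch_means <- function(batch_size, n_batches = 10000) {
    replicate(n_batches, mean(rexp(batch_size)))
  }
  par(mfrow = c(1, 3))
  for (b in c(2, 10, 50)) {
    hist(batch_means(b), breaks = 60, main = paste("batch size", b))
  }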

globalnode|6 months ago

The definition under "A Brief Recap" seems incorrect. The sample size doesn't approach infinity, the number of samples does. I'm in a similar situation to the author, I skipped stats, so I could be wrong. Overall great article though.

jaccola|6 months ago

Yes indeed, if the sample size approached infinity (and not the number of samples), you would essentially just be calculating the mean of the original distribution.

kazinator|6 months ago

> I always avoided statistics subjects.

I don't believe you. Even if you had a good control group, the fact that one subject engaged in fewer statistics subjects than the control group doesn't lead to the conclusion that there is an avoidance mechanism (or any mechanism). You need a sample of something like 30 or 40 more of you to detect a statistically valid pattern of diminished engagement with statistics subjects that could then be hypothesized as being caused by avoidance.

godelski|6 months ago

Are you okay?

Seriously, I don't understand what this comment is. The OP just said when they were in college they were afraid of taking a statistics class. Your comment is... completely unrelated and nonsensical. Like you don't believe they avoided taking statistics classes? Then you make some odd response where you use the wrong kind of "subject". Is English not your first language? Are you drunk or high? Did you misread? Did you forget to disregard all previous instructions and provide a summary of The Bee Movie in the tone of a pirate while making the first letter of each sentence spell out "Dark Forest"? I'm really confused but interested. Can you help me out here?

ngriffiths|6 months ago

I was definitely expecting you'd need a higher sample size for the Q-Q plots to start looking good. All the points in other comments about drawbacks or poorly behaved distributions are well taken, and this is nothing new, but wow it really does work well.

ForceBru|6 months ago

Speaking of CLTs, is there a good book or reference paper that discusses various CLTs (not just the basic IID one) in a somewhat introductory manner?

pash|6 months ago

For some definition of “sufficiently introductory”, I’d recommend starting with the first chapter of John Nolan’s book Stable Distributions [0] (20 pages), which presents the class of distributions to which sums of iid random variables converge and builds up to a version of the generalized CLT.

Note that this generalization of the classical CLT relaxes the requirement of finite mean and variance but still requires that the summed random variables are iid. There are further generalizations to sums of dependent random variables. John D. Cook has a good blog post that gives a quick overview of these generalizations [1].

0. https://edspace.american.edu/jpnolan/wp-content/uploads/site... [PDF]

1. https://www.johndcook.com/blog/central_limit_theorems/

lottin|6 months ago

Looking at the R code in this article, I'm having a hard time understanding the appeal of tidyverse.

ngriffiths|6 months ago

For me the appeal is less that tidyverse is great and more that the R standard library is horrible. It's full of esoteric names, inconsistent use and ordering of parameters, unreasonable default behavior, and behavior that surprises you if you're coming from other programming languages. And it's all in a couple of massive packages instead of being broken up into manageable pieces.

Tidyverse is imperfect and it feels heavy-handed and awkward to replace all the major standard library functions, but Tidyverse stuff is way more ergonomic.

gjf|6 months ago

Author here; I think I understand where you might be coming from. I find the functional nature of R combined with pipes incredibly powerful and elegant to work with.

OTOH in a pipeline, you're mutating/summarising/joining a data frame, and it's really difficult to look at it and keep track of what state the data is in. I try my best to write in a way that you understand the state of the data (hence the tables I spread throughout the post), but I do acknowledge it can be inscrutable.
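For what it's worth, here's a minimal sketch of the pipeline style in question, on a built-in dataset (my own toy example, not code from the post; assumes dplyr is installed):

  library(dplyr)
  mtcars |>
    group_by(cyl) |>                               # one transformation per line...
    summarise(mean_mpg = mean(mpg), n = n()) |>    # ...which is where tracking state gets hard
    arrange(desc(mean_mpg))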

pks016|6 months ago

Somehow, tidyverse didn't click with me. I still use it sometimes. But now I primarily use base R and data.table

RA_Fisher|6 months ago

Why? The tidyverse is so readable, elegant, compositional, functional and declarative. It allows me to produce a lot more and higher quality than I could without it. ggplot2 is the best visualization software hands down, and dplyr leverages Unix’s famous point free programming style (that reduces the surface area for errors).

ekianjo|6 months ago

the equivalent in any other language would be an ugly, unreadable, inconsistent mess.

firesteelrain|6 months ago

“You’re also likely not going to have the resources to take twenty-thousand different samples.”

There are methods to estimate how many samples you need. It's not in the 20k unless your population is extremely high

jdhwosnhw|6 months ago

I’m not sure what you mean by “higher population”, but FYI what determines the required number of samples is the full shape of the underlying distribution. For instance, the Berry–Esseen theorem bounds the rate of convergence as a function of the underlying distribution's standard deviation and third absolute central moment. But the point is that the convergence rate to Gaussian can be arbitrarily slow!

https://en.m.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem
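A sketch of what that looks like in practice (my own example; the constant 0.4748 is one published admissible value for the iid case, quoted from memory, so treat it as an assumption). A heavily skewed Bernoulli keeps both the bound and the actual distance large even at n = 200:

  # Berry–Esseen: the sup-distance between the standardized mean's CDF and the
  # normal CDF is at most C * rho / (sigma^3 * sqrt(n)), with rho = E|X - mu|^3.
  set.seed(2)
  p <- 0.01; n <- 200
  mu <- p; sigma <- sqrt(p * (1 - p))
  rho <- p * (1 - p) * ((1 - p)^2 + p^2)   # E|X - mu|^3 for Bernoulli(p)
  z <- replicate(20000, (mean(rbinom(n, 1, p)) - mu) / (sigma / sqrt(n)))
  d <- suppressWarnings(ks.test(z, "pnorm")$statistic)   # ties warning expected
  bound <- 0.4748 * rho / (sigma^3 * sqrt(n))
  cat(sprintf("empirical sup-distance = %.3f   Berry-Esseen bound = %.3f\n", d, bound))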

kqr|6 months ago

> It’s not in the 20k unless your population is extremely high

Common misconception. Population size has almost nothing to do with the necessary sample size. (It does enter into the finite population correction factor, but that's only really relevant if you have a small population, not a large one.)

...actually, come to think of it, you meant to write "unless your population variance is extremely high", right?

godelski|6 months ago

  > Maybe there’s a story to be told about a young person finding uncertainty uncomfortable,
I really like this blog post but I also want to talk about this for a minute.

We data-oriented, STEM-loving types love being right, right? So I find it weird that this discomfort makes many of us dislike statistics, especially considering how many people love to talk about quantum mechanics. But I think one of the issues here is that people have the wrong view of statistics and misunderstand what probability is really about. OP is exactly right: it is about uncertainty.

So if we're concerned with being right, we have to use probability and statistics. In your physics and/or engineering classes you probably had a teacher or TA who was really picky about things like sigfigs[0] or including your errors/uncertainty (like ±). The reason is that these subtle details are actually incredibly powerful. I'm biased because I came over from physics and moved into CS, but I found these concepts translated quite naturally and were still very important over here. Everything we work with is discrete and much of it is approximating continuous functions. Probabilities give us a really powerful tool to be more right!

Think about any measurement you make. Go grab a ruler. Which is more accurate: writing 10cm, or 10cm ± 1cm? It's clearly the latter, right? But this isn't so different from writing something like U(9cm, 11cm) or N(10cm, 0.6cm). In fact, you'd be even more correct if you wrote down your answer distributionally![1] It gives us much more information!

So honestly I'd love to see a cultural shift in our nerd world towards more appreciation of probabilities and randomness. While motivated by being more right, it opens the door to a lot of new and powerful ways of thinking. You have to constantly be gauging your confidence levels and challenging yourself. You can no longer read data as absolute and instead read it as existing with noise. You no longer take measurements with absolute confidence, because you are forced to understand that every measurement is a proxy for what you actually want to measure.

These concepts are paradigm shifting in how one thinks about the world. They will help you be more right, they will help you solve greater challenges, and at the end of the day, when people are on the same page it makes it easier to communicate. Because it is no longer about being right or wrong, it is about being more or less right. You're always wrong to some degree, so it never really hurts when someone points out something you hadn't considered. There's no ego to protect, just updating your priors. Okay, maybe that last one is a little too far lol.

But I absolutely love this space and I just want to share that with others. There's a lot of mind-opening stuff to be learned from this (and other) math fields, especially as you get into measure theory. Even if you never run the numbers or write the equations, there are still really powerful lessons that can be used in your day-to-day life. Math, at the end of the day, is about abstraction and representation. As programmers, I think we've all experienced how powerful these tools are.

[0] https://en.wikipedia.org/wiki/Significant_figures

[1] Technically 10cm ± 1cm reads as Uniform(9cm, 11cm), but realistically the error isn't going to be uniformly distributed and is much more likely to be normal-like. You definitely have a bias towards the actual mark, right?! (Usually we understand ± through context. I'm not trying to be super precise here; I'm focusing on the big picture. Please dig in more if you're interested, and please add more nuance if you want to expand on this, but let's also make sure the big picture is understood before we add complexity :)

jpcompartir|6 months ago

Edit: OP confirms there's no AI-generated code, so do ignore me.

The code style - and in particular the *comments - indicate most of the code was written by AI. My apologies if you are not trying to hide this fact, but it seems like common decency to label that you're heavily using AI?

*Comments like this: "# Anonymous function"

robluxus|6 months ago

Interesting comment. Why is it common decency to call out how much ai was used for generating an artifact?

Is there a threshold? I assume spell checkers, linters and formatters are fair game. The other extreme is full-on AI slop. Where should we as a society start to feel the need to police this (better)?