
Understanding Stein's Paradox (2021)

103 points | robertvc | 1 year ago | joe-antognini.github.io

48 comments


rssoconnor|1 year ago

I think the part on "How arbitrary is the origin, really?" is not correct. The origin is arbitrary. As the Wikipedia article points out, you can pick any point, whether or not it is the origin, and use the James-Stein estimator to push your estimate towards that point, and it will improve one's mean squared error.

If you pick a point to the left of your sample, then moving your estimate to the left will improve your mean squared error on average. If you pick a point to the right of your sample, then moving your estimate to the right will improve your mean squared error as well.

I'm still trying to come to grips with this, and below is conjecture on my part. Imagine sampling many points from a 3-D Gaussian distribution (with identity covariance), making a nice cloud of points. Next choose any point P. P could be close to the cloud or far away, it doesn't matter. No matter which point P you pick, if you adjust all the points from your cloud of samples in accordance to this James-Stein formula, moving them all towards your chosen point P by various amounts, then, on average they will move closer to the center of your Gaussian distribution. This happens no matter where P is.

The cloud is, of course, centered around the center of the Gaussian distribution. As the points are pulled towards this arbitrary point P, some are pulled away from the center of the Gaussian, some are pulled towards the center, and some are squeezed so that they are pulled away from the center in the parallel direction but closer in the perpendicular direction. Anyhow, apparently everything ends up, on average, closer to the center of the Gaussian in the end.

I'm not entirely sure what to make of this result. Perhaps it means that mean squared error is a silly error metric?
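Here's a quick Monte Carlo check of the conjecture (a sketch; the true mean mu and both choices of P below are made-up numbers):

```python
import numpy as np

# Sketch: shrink samples from a 3-D Gaussian toward an arbitrary point P
# with the James-Stein factor, and check that mean squared error to the
# true mean improves on average, whether P is near the mean or far away.
rng = np.random.default_rng(0)
d = 3
mu = np.array([1.0, 2.0, -0.5])   # true mean (hidden from the estimator)
n = 500_000                       # trials, one sample each

def james_stein_toward(x, P):
    """Shrink each row of x toward the point P by the James-Stein factor."""
    diff = x - P
    norm2 = np.sum(diff**2, axis=1, keepdims=True)
    return P + (1.0 - (d - 2) / norm2) * diff

x = rng.normal(mu, 1.0, size=(n, d))

def mse(est):
    return np.mean(np.sum((est - mu) ** 2, axis=1))

for P in (np.array([2.0, 1.0, 0.0]),    # P close to mu
          np.array([8.0, -5.0, 6.0])):  # P far from mu
    print(P, mse(x), mse(james_stein_toward(x, P)))
```

Both choices of P should give a James-Stein MSE below the naive MSE of about 3, though the far P only barely improves it.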

titanomachy|1 year ago

Your visualization helped me understand this! If the center of the distribution is far from P, then all the lines from P to the points in your cluster are basically parallel, and you just shift your point cluster which doesn’t help your estimate. But if P is close to the mean, then it sits near the middle of your cluster, so pulling all points towards P is “shrinking” the cluster more than “shifting” it.

m16ghost|1 year ago

Here are some links that might help visualize what is going on:

https://www.naftaliharris.com/blog/steinviz/

https://www.youtube.com/watch?v=cUqoHQDinCM (this video actually references the original post)

My takeaway is that the set of points which get worse as they are pulled towards point P lies in some region R. As the number of dimensions increases, region R's volume shrinks as a % of the total cloud volume, making it much more unlikely that a sample is drawn from that region. In other words, you are more likely to sample points which move closer to the center than points which move away, which is why the estimator is an improvement on average.

toth|1 year ago

You make a valid point, but I feel there is something in the direction the article is gesturing at...

The mean of the n-dimensional gaussian is an element of R^n, an unbounded space. There's no uninformed prior over this space, so there is always a choice of origin implicit in some way...

As you say, you can shrink towards any point and you get a valid James-Stein estimator that is strictly better than the naive estimator. But if you send the point you are shrinking towards off to infinity, you recover the naive estimator. So it feels like the fact that you are implicitly selecting a finite chunk of R^n around an origin plays a role in the paradox...

mitthrowaway2|1 year ago

Sorry, I'm siding with the physicists here. If you're going to declare that your seemingly arbitrary choice of coordinate system is actually not arbitrary and part of your prior information about where the mean of the distribution is suspected to be, you have to put that in the initial problem statement.

Sharlin|1 year ago

You can put the origin anywhere and for almost all choices the adjustment is almost zero. But if the choice happens to be very close to the sample point, against all (prior) probabilities, then that fact affects the prior.

credit_guy|1 year ago

Stein's paradox is bogus. Somebody needs to say that.

Here's one Wikipedia example:

  > Suppose we are to estimate three unrelated parameters, such as the US wheat yield for 1993, the number of spectators at the Wimbledon tennis tournament in 2001, and the weight of a randomly chosen candy bar from the supermarket. Suppose we have independent Gaussian measurements of each of these quantities. Stein's example now tells us that we can get a better estimate (on average) for the vector of three parameters by simultaneously using the three unrelated measurements. 
Here's what's bogus about this: the "better estimate (on average)" is mathematically true ... for a certain definition of "better estimate". But whatever that definition is, it is irrelevant to the real world. If you believe you get a better estimate of the US wheat yield by estimating also the number of Wimbledon spectators and the weight of a candy bar in a shop, then you probably believe in telepathy and astrology too.

wavemode|1 year ago

(Disclaimer, stats noob here) - I thought the point was that, you have a better chance of being -overall- closer to the mean (i.e., the 3D euclidean distance between your guess and the mean would be the smallest, on average), even though you may not necessarily have improved your odds of guessing any of the single individual means.

So it's not that "you get a better estimate of the US wheat yield by estimating also the number of Wimbledon spectators and the weight of a candy bar in a shop", it's simply that you get a better estimate for the combined vector of the three means. (Which, in this case, the vector of the three means is probably meaningless, since the three data sets are entirely unrelated. But we could also imagine scenarios where that vector is meaningful.)

Am I misunderstanding something?

zeroonetwothree|1 year ago

You are correct in that the combined estimator is actually worse at estimating an individual value. It's only better if you specifically care about the combination (which you probably don't in this contrived example).
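This is easy to check numerically (a sketch with a made-up mean vector, shrinking toward the origin): the combined MSE improves, while the component with the large mean is individually estimated worse than by the naive estimator.

```python
import numpy as np

# Sketch (made-up numbers): under James-Stein shrinkage toward the
# origin, the combined MSE improves, but the component with the large
# mean is individually estimated *worse* than by the naive estimator.
rng = np.random.default_rng(1)
mu = np.array([4.0, 0.0, 0.0])    # one large component, two near zero
n = 500_000
x = rng.normal(mu, 1.0, size=(n, 3))

norm2 = np.sum(x**2, axis=1, keepdims=True)
js = (1.0 - (3 - 2) / norm2) * x  # shrink toward the origin

total_naive = np.mean(np.sum((x - mu) ** 2, axis=1))
total_js = np.mean(np.sum((js - mu) ** 2, axis=1))
comp0_naive = np.mean((x[:, 0] - mu[0]) ** 2)
comp0_js = np.mean((js[:, 0] - mu[0]) ** 2)
print(total_naive, total_js)    # combined: James-Stein wins
print(comp0_naive, comp0_js)    # first component alone: James-Stein loses
```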

mmmmpancakes|1 year ago

No it is not bogus, you just don't know much stats apparently.

jprete|1 year ago

My intuition is that the problem is in using squares for the error. The volume of space available for a given distance of error in 3-space is O(N^3) the magnitude of the error, so an error term of O(N^2) doesn't grow fast enough compared to the volume that can contain that magnitude of error.

But I really don't know, it's just an intuition with no formalism behind it.

hyperbovine|1 year ago

Had a bit of a chuckle at the very-2024 definition of the Stein shrinkage estimator:

\hat{\mu} = ReLU(…)

toth|1 year ago

Ditto.

I think that ship has sailed, but I think it's unfortunate that "ReLU(x)" became a popular notation for "max(0,x)". And using the name "rectified linear unit" for basically "positive part" seems like a parody, like insisting on calling water "dihydrogen monoxide".

pfortuny|1 year ago

I do not get it: if the variance is too large, a random sample is not very representative of the mean. As simple as that?

Now the specific formula may be complicated. But otherwise I do not understand the “paradox”? Or am I missing something?

hcks|1 year ago

In the 1D case the single point will be the best estimator for the mean no matter what the variance is

zeroonetwothree|1 year ago

I don’t understand the picture with the shaded circle. Sure the area to the left is smaller, but it also is more likely to be chosen because in a Gaussian values closer to the mean are more likely. So the picture alone doesn’t prove anything.

rssoconnor|1 year ago

In the diagram the mean of the distribution is the center of the circle.

Of the set of samples a fixed distance d from the mean of the distribution, strictly less than half will be closer to the origin than the mean is, and strictly more than half will be further from the origin than the mean is. This is true for all values of d > 0, so the result holds for all samples.
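This is easy to verify numerically (a sketch; the mean, its distance 3 from the origin, and the sampled distances are all arbitrary choices):

```python
import numpy as np

# Sketch (arbitrary numbers): for points at a fixed distance r from the
# mean mu, check that strictly fewer than half are closer to the origin
# than mu is, for several values of r.
rng = np.random.default_rng(2)
mu = np.array([3.0, 0.0, 0.0])    # distance from origin: 3
n = 1_000_000

for r in (0.5, 2.0, 5.0):         # a few fixed distances from the mean
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # uniform on unit sphere
    pts = mu + r * u
    frac = np.mean(np.linalg.norm(pts, axis=1) < np.linalg.norm(mu))
    print(r, frac)                # always below 0.5
```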

fromMars|1 year ago

Can someone confirm the validity of the section called "Can we derive the James-Stein estimator rigorously?"?

The claim that the best estimator must be smooth seemed surprising to me.

moi2388|1 year ago

What a great read!

TibbityFlanders|1 year ago

I'm horrible at stats, but is this saying that if I have 5 jars of pennies and I guess the amount in each one, then find the average of all my guesses and the variance between the guesses, I can adjust each guess to a more likely answer with this method?

kgwgk|1 year ago

Not necessarily "more likely" but "better" in some "loss" sense.

It could be "more likely" in the jars example where estimates may convey some relevant information for each other. But consider this example from wikipedia:

"Suppose we are to estimate three unrelated parameters, such as the US wheat yield for 1993, the number of spectators at the Wimbledon tennis tournament in 2001, and the weight of a randomly chosen candy bar from the supermarket. Suppose we have independent Gaussian measurements of each of these quantities. Stein's example now tells us that we can get a better estimate (on average) for the vector of three parameters by simultaneously using the three unrelated measurements."

https://en.wikipedia.org/wiki/Stein%27s_example#Example
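For the jars specifically, here's a sketch of the variant that shrinks toward the average of your guesses (this form uses a d-3 factor, needs at least 4 jars, and assumes you know roughly the standard deviation of your guessing error; all the numbers below are made up):

```python
import numpy as np

# Sketch (made-up numbers): shrink 5 jar guesses toward their grand
# mean. This variant uses a factor of d-3 instead of d-2, requires
# d >= 4 jars, and assumes a known guess-error standard deviation.
rng = np.random.default_rng(3)
true_counts = np.array([120.0, 340.0, 95.0, 210.0, 480.0])
sigma = 50.0                      # assumed std of guessing error
d = len(true_counts)
n = 200_000                       # simulated guessing rounds

guesses = rng.normal(true_counts, sigma, size=(n, d))
gbar = guesses.mean(axis=1, keepdims=True)
dev = guesses - gbar
shrink = 1.0 - (d - 3) * sigma**2 / np.sum(dev**2, axis=1, keepdims=True)
adjusted = gbar + shrink * dev    # pull each guess toward the grand mean

mse_raw = np.mean(np.sum((guesses - true_counts) ** 2, axis=1))
mse_adj = np.mean(np.sum((adjusted - true_counts) ** 2, axis=1))
print(mse_raw, mse_adj)           # adjusted should be lower on average
```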

eru|1 year ago

No, I don't think these problems are related.