toth | 1 year ago

You make a valid point, but I feel there is something to what the article is gesturing at...

The mean of the n-dimensional Gaussian is an element of R^n, an unbounded space. There's no uninformative (proper) prior over this space, so a choice of origin is always implicit in some way...

As you say, you can shrink towards any point and get a valid James-Stein estimator that is strictly better than the naive estimator. But if you send the point you are shrinking towards to infinity, you get the naive estimator again. So it feels like the fact that you are implicitly selecting a finite chunk of R^n around an origin plays a role in the paradox...
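
Here's a minimal NumPy sketch of that claim (the dimension, true mean, and shrinkage targets are arbitrary choices of mine): for any fixed target the estimated James-Stein risk stays below the naive risk n, and it climbs back toward n as the target moves off to infinity.

```python
import numpy as np

# Minimal sketch: James-Stein shrinkage toward an arbitrary point nu,
# for a single observation x ~ N(theta, I_n) with known unit covariance.
# This is the plain (not positive-part) estimator; dominance needs n >= 3.

rng = np.random.default_rng(0)
n, trials = 10, 50_000
theta = rng.normal(size=n)                  # some fixed true mean
x = theta + rng.normal(size=(trials, n))    # one observation per trial

def james_stein_risk(nu):
    d = x - nu                                                    # deviations from the target
    shrink = 1.0 - (n - 2) / np.sum(d * d, axis=1, keepdims=True)  # shrinkage factor
    est = nu + shrink * d                                         # shrink x toward nu
    return np.mean(np.sum((est - theta) ** 2, axis=1))            # Monte Carlo risk

for scale in (0.0, 1.0, 100.0, 1e6):
    nu = scale * np.ones(n)                 # push the target farther out each time
    print(f"target at ~{scale:g}: risk {james_stein_risk(nu):.3f}")

naive = np.mean(np.sum((x - theta) ** 2, axis=1))
print(f"naive estimator x: risk {naive:.3f} (exact risk is n = {n})")
```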

kgwgk | 1 year ago

> But if you send the point you are shrinking towards to infinity, you get the naive estimator again.

You get close to it, but strictly speaking wouldn't it always be better than the naive estimator?

toth | 1 year ago

Right, it's a limit at infinity.
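
Concretely (my notation, for a single observation $x \sim N(\theta, I_n)$ shrinking toward a target $\nu$):

$$\delta_\nu(x) = \nu + \left(1 - \frac{n-2}{\lVert x - \nu \rVert^2}\right)(x - \nu)$$

As $\lVert \nu \rVert \to \infty$ the shrinkage factor tends to 1, so $\delta_\nu(x) \to x$ pointwise, even though for every fixed $\nu$ the risk of $\delta_\nu$ is strictly below that of $x$.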

rssoconnor | 1 year ago

> There's no uninformative (proper) prior over this space, so a choice of origin is always implicit in some way...

You could use an uninformative improper prior.
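
For what it's worth, the flat improper prior hands back the naive estimator as its posterior mean:

$$p(\theta) \propto 1, \quad x \mid \theta \sim N(\theta, I_n) \;\Longrightarrow\; \theta \mid x \sim N(x, I_n)$$

so the Bayes estimate under it is just $x$, consistent with the limit-at-infinity picture above.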

kgwgk | 1 year ago

You would just need to come up with a way to pick a point at random uniformly from an unbounded space.