toth | 1 year ago
The mean of the n-dimensional Gaussian is an element of R^n, an unbounded space. There's no uninformed prior over this space, so any estimator implicitly commits to a choice of origin in some way...
As you say, you can shrink towards any point and you get a valid James-Stein estimator that is strictly better than the naive estimator. But if you send the point you are shrinking towards off to infinity, you recover the naive estimator. So it feels like the fact that you are implicitly selecting a finite region of R^n around an origin plays a role in the paradox...
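A minimal simulation of the point above, using the positive-part James-Stein estimator shrinking toward an arbitrary target `nu` (the choice of `theta`, `nu`, and the dimension here are illustrative assumptions, not from the thread):

```python
import numpy as np

def james_stein(x, nu):
    """Positive-part James-Stein estimate of the mean, shrinking x toward nu."""
    d = x - nu
    n = x.size
    # Shrinkage factor; clipped at 0 (positive-part variant).
    factor = max(0.0, 1.0 - (n - 2) / np.dot(d, d))
    return nu + factor * d

# Compare average total squared error against the naive estimator x itself.
rng = np.random.default_rng(0)
n, trials = 10, 5000
theta = np.full(n, 0.5)   # true mean (arbitrary choice for the demo)
nu = np.zeros(n)          # shrinkage target
naive_err = js_err = 0.0
for _ in range(trials):
    x = theta + rng.standard_normal(n)   # X ~ N(theta, I_n)
    naive_err += np.sum((x - theta) ** 2)
    js_err += np.sum((james_stein(x, nu) - theta) ** 2)

print(naive_err / trials)  # naive risk: equals n in expectation
print(js_err / trials)     # James-Stein risk: smaller for every theta
```

Moving `nu` far from `theta` makes the shrinkage factor approach 1, so the estimate approaches the naive one, which is the limit described above.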
kgwgk | 1 year ago
You get close to it, but strictly speaking wouldn't it always be better than the naive estimator?
toth | 1 year ago
rssoconnor | 1 year ago
You could use an uninformed improper prior.
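For what it's worth, the flat improper prior makes the naive estimator exactly the posterior mean (a standard calculation, sketched here under the thread's unit-covariance setup):

```latex
% Flat improper prior p(\theta) \propto 1 on \mathbb{R}^n,
% with likelihood X \mid \theta \sim N(\theta, I_n):
p(\theta \mid x) \;\propto\; \exp\!\left(-\tfrac{1}{2}\|x - \theta\|^2\right)
\;\Longrightarrow\; \theta \mid x \sim N(x, I_n),
\qquad \mathbb{E}[\theta \mid x] = x .
```

So the improper prior recovers the naive estimator, which the James-Stein estimator nevertheless dominates.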
kgwgk | 1 year ago