We consider the problem of estimating the mean of a $p$-variate
normal distribution with identity covariance matrix when the mean lies in a
ball of radius $m$. It follows from general theory that dominating estimators
of the maximum likelihood estimator always exist when the loss is squared
error. We provide and describe explicit classes of improvements for all
problems $(m, p)$. We show that, for small enough $m$, a wide class of
estimators, including all Bayes estimators with respect to orthogonally
invariant priors, dominates the maximum likelihood estimator. When $m$ is not so
small, we establish general sufficient conditions for dominance over the
maximum likelihood estimator. These include, when $m \le \sqrt{p}$, the Bayes
estimator with respect to a uniform prior on the boundary of the parameter
space. We also study the resulting Bayes estimators for orthogonally invariant
priors and obtain conditions of dominance involving the choice of the prior.
Finally, these Bayesian dominance results are further discussed and illustrated
with examples, which include (1) the Bayes estimator for a uniform prior on the
whole parameter space and (2) a new Bayes estimator derived from an exponential
family of priors.
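The setting described above can be summarized in symbols. This is an illustrative sketch based only on the abstract; the symbols $X$, $\theta$, and $\delta$ are assumed notation, not taken from the paper itself:

```latex
% Sketch of the bounded normal mean problem (notation assumed, not from the entry):
% observe X from a p-variate normal with identity covariance,
% with the mean constrained to a ball of radius m.
\[
  X \sim N_p(\theta, I_p), \qquad \|\theta\| \le m,
\]
% An estimator \delta is judged by squared-error loss:
\[
  L(\theta, \delta) = \|\delta(X) - \theta\|^2 .
\]
% \delta dominates the MLE \hat\theta_{\mathrm{MLE}} if, for all \|\theta\| \le m,
\[
  E_\theta \|\delta(X) - \theta\|^2 \;\le\; E_\theta \|\hat\theta_{\mathrm{MLE}}(X) - \theta\|^2,
\]
% with strict inequality for at least one \theta in the ball.
```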
@article{1013699994,
author = {Marchand, \'Eric and Perron, Fran\c cois},
title = {Improving on the MLE of a bounded normal mean},
journal = {Ann. Statist.},
volume = {29},
number = {2},
year = {2001},
pages = {1078--1093},
language = {en},
url = {http://dml.mathdoc.fr/item/1013699994}
}