We consider estimation, subject to quadratic loss, of the vector of coefficients of a multiple linear regression model in which the error vector is assumed to have zero mean and covariance matrix $\sigma^2 I$ but is not assumed to follow a specific parametric form, e.g., Normal. The vector of coefficients is taken to be randomly distributed according to some unknown prior. Restricted minimax solutions are exhibited relative to equivalence classes on the space of all prior probability distributions, each class grouping the priors that share the same specified moments. In the context of the classical Empirical Bayes formulation, we determine restricted asymptotically optimal estimators, i.e., decision functions whose Bayes risks converge to the risk of the restricted minimax decision at each component stage.
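As an illustrative sketch of the setup (the symbols $y$, $X$, $\beta$, $\mu$, $\Sigma$ below are assumed for exposition and are not taken from the paper), consider a linear model whose prior is specified only through its first two moments; the linear Bayes rule, whose Bayes risk depends on the prior only through those moments, is the usual candidate for a minimax rule restricted to such a moment class:

% notation below is illustrative, not quoted from Wind (1973)
$$ y = X\beta + e, \qquad E[e] = 0, \qquad \operatorname{Cov}(e) = \sigma^2 I, $$
$$ E[\beta] = \mu, \qquad \operatorname{Cov}(\beta) = \Sigma, \qquad L(\hat{\beta}, \beta) = (\hat{\beta} - \beta)'(\hat{\beta} - \beta), $$
$$ \hat{\beta}(y) = \mu + \Sigma X' \bigl( X \Sigma X' + \sigma^2 I \bigr)^{-1} (y - X\mu). $$

Roughly, in the Empirical Bayes setting the unknown quantities $\mu$, $\Sigma$, and $\sigma^2$ would be replaced at each component stage by estimates built from the data of the preceding stages.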
@article{1193342385,
author = {Wind, Serge L.},
title = {An Empirical Bayes Approach to Multiple Linear Regression},
journal = {Ann. Statist.},
volume = {1},
number = {2},
year = {1973},
pages = {93-103},
language = {en},
url = {http://dml.mathdoc.fr/item/1193342385}
}
Wind, Serge L. An Empirical Bayes Approach to Multiple Linear Regression. Ann. Statist., Vol. 1 (1973), no. 2, pp. 93-103. http://gdmltest.u-ga.fr/item/1193342385/