Convergence of Estimates Under Dimensionality Restrictions
LeCam, L.
Ann. Statist., Vol. 1 (1973), no. 2, pp. 38-53
Consider independent, identically distributed observations whose distribution depends on a parameter $\theta$. Measure the distance between two parameter points $\theta_1, \theta_2$ by the Hellinger distance $h(\theta_1, \theta_2)$. Suppose that for $n$ observations there is a good but not perfect test of $\theta_0$ against $\theta_n$. Then $n^{\frac{1}{2}}h(\theta_0, \theta_n)$ stays bounded away from zero and infinity. The usual parametric examples, regular or irregular, also have the property that there are estimates $\hat{\theta}_n$ such that $n^{\frac{1}{2}}h(\hat{\theta}_n, \theta_0)$ stays bounded in probability, so that rates of separation for tests and estimates are essentially the same. The present paper shows that this need not be true in general, but that it is correct under certain metric dimensionality assumptions on the parameter set. It is then shown that these assumptions imply that the Bayes estimates or maximum probability estimates converge at the required rate.
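For reference, the Hellinger distance used above is standardly defined as follows; this is a sketch under the common normalization carrying the factor $\frac{1}{2}$, which is an assumption here, since conventions differ and the paper's own normalization may omit the constant:

% Hellinger distance between the single-observation distributions
% P_{\theta_1} and P_{\theta_2}; the factor 1/2 is one common convention.
\[
  h^2(\theta_1, \theta_2)
    = \frac{1}{2} \int \left( \sqrt{dP_{\theta_1}} - \sqrt{dP_{\theta_2}} \right)^2,
  \qquad 0 \le h(\theta_1, \theta_2) \le 1 .
\]

Under this normalization the Hellinger affinity $\rho = 1 - h^2$ is multiplicative over product measures, so for $n$ independent observations $\rho(P_{\theta_0}^n, P_{\theta_n}^n) = (1 - h^2(\theta_0, \theta_n))^n$; a good but not perfect test requires this quantity to stay bounded away from $0$ and $1$, which forces $nh^2(\theta_0, \theta_n)$ to stay bounded away from zero and infinity, the separation rate stated in the abstract.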
Published: 1973-01-14
Classification: Bayes estimates, maximum probability estimates, rate of convergence
@article{1193342380,
     author = {LeCam, L.},
     title = {Convergence of Estimates Under Dimensionality Restrictions},
     journal = {Ann. Statist.},
     volume = {1},
     number = {2},
     year = {1973},
     pages = {38--53},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1193342380}
}
LeCam, L. Convergence of Estimates Under Dimensionality Restrictions. Ann. Statist., Vol. 1 (1973), no. 2, pp. 38-53. http://gdmltest.u-ga.fr/item/1193342380/