The Efficiency of Sequential Estimates and Wald's Equation for Sequential Processes
Wolfowitz, J.
Ann. Math. Statist., Volume 18 (1947) no. 4, pp. 215-230 / Harvested from Project Euclid
Let $n$ successive independent observations be made on the same chance variable whose distribution function $f(x, \theta)$ depends on a single parameter $\theta$. The number $n$ is a chance variable which depends upon the outcomes of successive observations; it is precisely defined in the text below. Let $\theta^\ast(x_1, \cdots, x_n)$ be an estimate of $\theta$ whose bias is $b(\theta)$. Subject to certain regularity conditions stated below, it is proved that $\sigma^2(\theta^\ast) \geq \big(1 + \frac{db}{d\theta}\big)^2\big\lbrack EnE\big(\frac{\partial\log f}{\partial\theta}\big)^2\big\rbrack^{-1}.$ When $f(x, \theta)$ is the binomial distribution and $\theta^\ast$ is unbiased, the lower bound given here specializes to one first announced by Girshick [3], obtained, no doubt, under different conditions of regularity. When the chance variable $n$ is a constant, the lower bound given above is the same as that obtained in [2], page 480, under different conditions of regularity.

Let the parameter $\theta$ consist of $l$ components $\theta_1, \cdots, \theta_l$ for which there are given the respective unbiased estimates $\theta^\ast_1(x_1, \cdots, x_n), \cdots, \theta^\ast_l(x_1, \cdots, x_n)$. Let $\|\lambda_{ij}\|$ be the non-singular covariance matrix of the latter, and $\|\lambda^{ij}\|$ its inverse. The concentration ellipsoid in the space of $(k_1, \cdots, k_l)$ is defined as $\sum_{i,j} \lambda^{ij}(k_i - \theta_i)(k_j - \theta_j) = l + 2.$ (This valuable concept is due to Cramér.) If a unit mass be uniformly distributed over the concentration ellipsoid, the matrix of its products of inertia will coincide with the covariance matrix $\|\lambda_{ij}\|$.
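The scalar inequality above can be checked by simulation in Girshick's binomial setting. The following sketch is illustrative and not from the paper: the inverse-binomial stopping rule (sample Bernoulli trials until $r$ successes), the classical unbiased estimate $(r-1)/(n-1)$, and all constants are assumptions chosen so that $n$ is genuinely a chance variable. With zero bias the bound reduces to $\lbrack EnE(\partial\log f/\partial\theta)^2\rbrack^{-1} = p^2(1-p)/r$, since $En = r/p$ and $E(\partial\log f/\partial p)^2 = 1/(p(1-p))$.

```python
import random

# Sequential (inverse binomial) sampling: observe Bernoulli(p) trials until
# r successes occur, so the sample size n is a chance variable.
# p* = (r - 1)/(n - 1) is the classical unbiased estimate of p under this
# stopping rule (an illustrative choice, not taken from the paper).
random.seed(0)
p, r, reps = 0.3, 20, 60_000

estimates = []
for _ in range(reps):
    successes, n = 0, 0
    while successes < r:
        n += 1
        if random.random() < p:
            successes += 1
    estimates.append((r - 1) / (n - 1))

mean = sum(estimates) / reps
var = sum((e - mean) ** 2 for e in estimates) / (reps - 1)

# Wolfowitz's lower bound with zero bias: 1 / (E n * E(d log f / d p)^2).
# Here E n = r / p and E(d log f / d p)^2 = 1 / (p (1 - p)),
# so the bound is p^2 (1 - p) / r.
bound = p ** 2 * (1 - p) / r
print(f"mean = {mean:.4f}, variance = {var:.6f}, bound = {bound:.6f}")
```

Empirically the sample mean sits near $p$ and the sample variance sits at or slightly above the bound, as the theorem requires.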
In [4] Cramér proves that, no matter what the unbiased estimates $\theta^\ast_1, \cdots, \theta^\ast_l$ (provided that certain regularity conditions are fulfilled), when $n$ is constant their concentration ellipsoid always contains within itself the ellipsoid $\sum_{i,j} \mu_{ij}(k_i - \theta_i)(k_j - \theta_j) = l + 2$ where $\mu_{ij} = nE\big(\frac{\partial\log f}{\partial\theta_i}\frac{\partial\log f}{\partial\theta_j}\big).$ Consider now the sequential procedure of this paper. Let $\theta^\ast_1, \cdots, \theta^\ast_l$ be, as before, unbiased estimates of $\theta_1, \cdots, \theta_l$, respectively, recalling, however, that the number $n$ of observations is a chance variable. It is proved that the concentration ellipsoid of $\theta^\ast_1, \cdots, \theta^\ast_l$ always contains within itself the ellipsoid $\sum_{i,j} \mu'_{ij}(k_i - \theta_i)(k_j - \theta_j) = l + 2$ where $\mu'_{ij} = EnE\big(\frac{\partial\log f}{\partial\theta_i}\frac{\partial\log f}{\partial\theta_j}\big).$ When $n$ is a constant this becomes Cramér's result (under different conditions of regularity). Section 7 presents a number of results related to the equation $EZ_n = EnEX$, which is due to Wald [6] and is fundamental for sequential analysis.
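Wald's equation $EZ_n = EnEX$ is itself easy to check by simulation. The sketch below is illustrative, not from the paper: the exponential model, the threshold stopping rule, and all constants are assumptions. Observations $X_i$ are summed until the running total $Z_n$ first exceeds a threshold, so $n$ depends on the outcomes; Wald's equation asserts that $EZ_n$ nonetheless equals $En \cdot EX$.

```python
import random

# Monte Carlo check of Wald's equation E Z_n = E n * E X.
# Stopping rule (an illustrative assumption): observe i.i.d. X_i ~ Exp(1)
# and stop at the first n for which Z_n = X_1 + ... + X_n exceeds c.
random.seed(1)
c, reps = 5.0, 100_000

sum_n, sum_z = 0, 0.0
for _ in range(reps):
    z, n = 0.0, 0
    while z <= c:
        z += random.expovariate(1.0)  # E X = 1
        n += 1
    sum_n += n
    sum_z += z

mean_n, mean_z = sum_n / reps, sum_z / reps
# Wald: E Z_n should agree with E n * E X = E n * 1.
print(f"E n = {mean_n:.3f}, E Z_n = {mean_z:.3f}")
```

For this particular model both expectations can also be computed in closed form (the memoryless property gives $En = 1 + c$ and $EZ_n = c + 1$ when $EX = 1$), which the simulated averages reproduce.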
Published: 1947-06-14
@article{1177730439,
     author = {Wolfowitz, J.},
     title = {The Efficiency of Sequential Estimates and Wald's Equation for Sequential Processes},
     journal = {Ann. Math. Statist.},
     volume = {18},
     number = {4},
     year = {1947},
     pages = {215-230},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1177730439}
}
Wolfowitz, J. The Efficiency of Sequential Estimates and Wald's Equation for Sequential Processes. Ann. Math. Statist., Volume 18 (1947) no. 4, pp. 215-230. http://gdmltest.u-ga.fr/item/1177730439/