The Minimum Distance Method
Wolfowitz, J.
Ann. Math. Statist., Tome 28 (1957) no. 4, pp. 75-88
The present paper gives the formal statements and proofs of the results illustrated in [1]. In a series of papers ([2], [3], [4]) the present author has been developing the minimum distance method for obtaining strongly consistent estimators (i.e., estimators which converge with probability one). The method of the present paper is much superior, in simplicity and generality of application, to the methods used in the papers [2] and [4] cited above. Roughly speaking, the present paper can be summarized by saying that, in many stochastic structures where the distribution function (d.f.) depends continuously upon the parameters and d.f.'s of the chance variables in the structure, those parameters and d.f.'s which are identified (uniquely determined by the d.f. of the structure) can be strongly consistently estimated by the minimum distance method of the present paper. Since identification is obviously a necessary condition for estimation by any method, it follows that, in many actual statistical problems, identification implies estimability by the method of the present paper.

Thus problems of long standing, like that of Section 5 below, are easily solved. For this problem the whole canonical complex (Section 6 below; see [1]) has never, to the author's knowledge, been estimated by any other method. The directional parameter of the structure of Section 4 seems to be estimated here for the first time. As the identification problem is solved for additional structures, it will become possible to apply the minimum distance method to them. The proofs in the present paper are of the simplest and most elementary sort.

In Section 8 we treat a problem in estimation for nonparametric stochastic difference equations. Here the observed chance variables are not independent, but the minimum distance method is still applicable. The treatment is incomparably simpler than that of [4], where this and several other such problems are treated. The present method can be applied to the other problems as well. Application of the present method is routine in each problem as soon as the identification question is disposed of. In this respect it compares favorably with the method of [4], whose application was far from routine.

As we have emphasized in [1], the present method can be applied with very many definitions of distance (this is also true of the earlier versions of the minimum distance method). The definition used in the present paper has the convenience of making a certain space conditionally compact, and thus eliminates the need for certain circumlocutions. Since no reason is known at present for preferring one definition of distance to another, we have adopted a convenient definition. It is a problem of great interest to decide which definition of distance, if any, yields estimators preferable in some sense. The definition of distance used in this paper was employed in [9].

As the problem is formulated in Section 2 below (see especially equation (2.1)), the "observed" chance variables $\{X_i\}$ are known functions (the right members of (2.1)) of the "unobservable" chance variables $\{Y_i\}$ and of the unknown constants $\{\theta_i\}$. In the problems treated in [3], [9], and [11], it is the distribution of the observed chance variables which is a known function of unobservable chance variables and of unknown constants, and not the observed chance variables themselves. However, the latter problems can easily be put in the same form as the former.
Moreover, in the method described below the values of the observed chance variables are used only to estimate the distribution function of the observed chance variables (by means of the empiric distribution function). Consequently there is no difference whatever in the treatment of the problems by the minimum distance method, no matter how they are formulated. The unobservable chance variables $\{Y_i\}$ correspond to what are called "incidental parameters" in [11]; the unknown constants $\{\theta_i\}$ are there called "structural parameters". In [9] there is a discussion of the fact that in some problems treated in the literature the incidental parameters are considered as constants, and in others as chance variables. In contradistinction to the present paper, [3] (in particular its Section 5) treats the incidental parameters as unknown constants. The fundamental idea of both papers is the same: the estimator is chosen to be such a function of the observed chance variables that the d.f. of the observed chance variables (when the estimator is put in place of the parameters and distributions being estimated) is "closest" to the empiric d.f. of the observed chance variables. The details of application are perhaps easier in the present paper; the problems are different and of interest per se.
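To make the fundamental idea concrete, in notation and code that are ours rather than the paper's: write $F_\theta$ for the d.f. of the observed chance variables when the parameters and d.f.'s being estimated take the value $\theta$, and $F_n$ for the empiric d.f. of $n$ observations. A minimum distance estimator is then any $\hat{\theta}_n$ satisfying $\hat{\theta}_n \in \arg\min_\theta \delta(F_\theta, F_n)$, where $\delta$ is the chosen distance between d.f.'s. The following Python sketch illustrates this in the simplest parametric setting, a normal location-scale family, with the Kolmogorov (sup-norm) distance standing in for the distance actually used in the paper (the one taken from [9], chosen to make a certain space conditionally compact). The family, the distance, and all function names below are assumptions of the example, not constructions from the paper.

# Illustrative sketch only: a minimum distance estimator for a normal
# location-scale family, using the Kolmogorov (sup-norm) distance
# between the model d.f. and the empiric d.f. of the sample.
import numpy as np
from scipy import optimize, stats

def kolmogorov_distance(theta, sample):
    """Sup over t of |F_theta(t) - F_n(t)| for the N(mu, sigma) d.f.

    The supremum is attained at the order statistics, so it suffices
    to compare the model d.f. with F_n just below and at each of them.
    """
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)              # keeps sigma positive
    xs = np.sort(sample)
    n = len(xs)
    model = stats.norm.cdf(xs, loc=mu, scale=sigma)
    below = np.arange(0, n) / n            # F_n just below each order statistic
    at = np.arange(1, n + 1) / n           # F_n at each order statistic
    return max(np.max(np.abs(model - below)), np.max(np.abs(model - at)))

def minimum_distance_estimate(sample):
    """Minimize the distance over the parameter space (Nelder-Mead,
    since the objective is not smooth in theta)."""
    start = np.array([np.mean(sample), np.log(np.std(sample))])
    result = optimize.minimize(kolmogorov_distance, start,
                               args=(sample,), method="Nelder-Mead")
    mu_hat, log_sigma_hat = result.x
    return mu_hat, np.exp(log_sigma_hat)

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=1.5, size=500)
print(minimum_distance_estimate(sample))   # close to (2.0, 1.5)

Any other distance metrizing the relevant convergence of d.f.'s could be substituted for kolmogorov_distance without changing the rest of the sketch, in keeping with the paper's remark that the method can be applied with very many definitions of distance.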
Published: 1957-03-14
@article{1177707038,
     author = {Wolfowitz, J.},
     title = {The Minimum Distance Method},
     journal = {Ann. Math. Statist.},
     volume = {28},
     number = {4},
     year = {1957},
     pages = {75--88},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1177707038}
}
Wolfowitz, J. The Minimum Distance Method. Ann. Math. Statist., Tome 28 (1957) no. 4, pp. 75-88. http://gdmltest.u-ga.fr/item/1177707038/