Optimum Classification Rules for Classification into Two Multivariate Normal Populations
Gupta, S. Das
Ann. Math. Statist., Vol. 36 (1965), No. 6, pp. 1174-1184 / Harvested from Project Euclid
The problem of classifying an observation into one of two multivariate normal populations with the same covariance matrix has been thoroughly discussed by Anderson [1] for the case in which the populations are completely known. Anderson and Bahadur [2] treated the case of different and known covariance matrices and obtained the minimal complete class within the class of linear classification rules. Wald [11], Anderson [1], and Rao [7] suggested heuristic classification rules based on sample estimates of the unknown parameters of the two normal distributions having the same covariance matrix. One of these heuristic rules is the maximum likelihood rule (ML rule), which classifies the observation into the population $\Pi_i$ if the maximum likelihood (the likelihood maximized over the unknown parameters) obtained under the assumption that the observation to be classified comes from $\Pi_i$ exceeds the corresponding maximum likelihood obtained under the assumption that the observation comes from $\Pi_j$ $(i \neq j;\ i, j = 1, 2)$. Sitgreaves [8], [9] and John [5] obtained explicit forms of the distributions of the classification statistics proposed by Anderson and Wald. Many other papers along these lines are collected in the book cited in reference [9]. Ellison [4] derived a class of admissible rules, including the ML rule, for the problem of classification into more than two normal populations with different and known covariance matrices. Cacoullos [3] obtained an invariant Bayes rule and an admissible minimax rule for the problem of selecting, from a finite number of completely specified normal populations, the one "closest" to a given normal population whose covariance matrix is unknown.
It will be shown in this paper that the ML rule is an unbiased admissible minimax rule when the common covariance matrix of the two normal populations is known; and, when the common covariance matrix is unknown, that the corresponding ML rule is unbiased and is an admissible minimax rule in an invariant class. The loss function in each problem is assumed to be a function (satisfying some mild restrictions) of the Mahalanobis distance between the two populations.
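The ML rule described in the abstract can be sketched in code (a minimal illustration under my own assumptions, not taken from the paper; the function name `ml_classify` and the NumPy formulation are hypothetical). With a known common covariance matrix, maximizing the likelihood under the hypothesis "the new observation comes from $\Pi_i$" amounts to re-estimating $\mu_i$ from that population's sample augmented with the new observation; since the normalizing constants agree under both hypotheses, only the resulting quadratic-form sums need to be compared:

```python
import numpy as np

def ml_classify(x, sample1, sample2, cov):
    """Maximum likelihood (ML) classification rule for two normal
    populations with known common covariance matrix `cov`.

    Returns 1 or 2, the population to which `x` is assigned.
    """
    cov_inv = np.linalg.inv(cov)

    def max_log_lik(assign_to):
        # Under the hypothesis that x comes from population `assign_to`,
        # the MLE of that population's mean is the mean of its sample
        # augmented with x; the other mean is its plain sample mean.
        s1 = np.vstack([sample1, x]) if assign_to == 1 else sample1
        s2 = np.vstack([sample2, x]) if assign_to == 2 else sample2
        ll = 0.0
        for s in (s1, s2):
            d = s - s.mean(axis=0)
            # -1/2 * sum_i (s_i - mu_hat)' cov^{-1} (s_i - mu_hat);
            # the normalizing constant cancels between hypotheses.
            ll -= 0.5 * np.einsum('ij,jk,ik->', d, cov_inv, d)
        return ll

    return 1 if max_log_lik(1) > max_log_lik(2) else 2
```

Note that the rule depends on the sample sizes through the re-estimated means, so it is not simply a nearest-sample-mean rule, although for well-separated populations the two typically agree.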
Published: 1965-08-14
@article{1177699990,
     author = {Gupta, S. Das},
     title = {Optimum Classification Rules for Classification into Two Multivariate Normal Populations},
     journal = {Ann. Math. Statist.},
     volume = {36},
     number = {6},
     year = {1965},
     pages = {1174-1184},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1177699990}
}
Gupta, S. Das. Optimum Classification Rules for Classification into Two Multivariate Normal Populations. Ann. Math. Statist., Vol. 36 (1965), No. 6, pp. 1174-1184. http://gdmltest.u-ga.fr/item/1177699990/