Invariance Theory and a Modified Minimax Principle
Wesler, Oscar
Ann. Math. Statist., Volume 30 (1959), no. 4, pp. 1-20 / Harvested from Project Euclid
One of the unpleasant facts about statistical decision problems is that they are generally much too big or too difficult to admit of practical solutions, a fact that is threatening to widen even further the gap between the theory and application of this brave new discipline. Briefly, the situation is this. For each possible decision procedure $\varphi$, the statistician is concerned only with the values $\rho (\omega, \varphi)$ of the risk function as $\omega$ ranges over the set $\Omega$ of all possible states of nature, so that a choice of a decision procedure amounts to a choice of a risk function. The obvious difficulty of comparing functions in the search for a best procedure now arises, constituting a major problem for the statistician. The Bayes and minimax principles, it should be noted, represent but two extremes to which the statistician can go to get around this difficulty rather than to meet it, the one assuming complete knowledge of an a priori probability distribution $\lambda$ of the possible states $\omega$, the other assuming least favorable circumstances about $\omega$, so that in either case one considers only a single number per procedure rather than the entire function--the Bayesian the average risk with respect to $\lambda$, the minimax-man in all timidity the supremum of the risk function--comparisons thus becoming trivial in principle and obliging one to look simply for that procedure which minimizes these numbers. Inasmuch as the situations occurring in practice with regard to prior knowledge about $\omega$ usually lie between the two extremes just described, to this extent at least both principles are open to criticism. In view of the nature of the difficulty in choosing among procedures, the notion of admissible or complete classes of strategies is generally felt to provide the most satisfactory solution. 
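The two reductions described above can be made concrete on a finite toy problem. The sketch below is purely illustrative and not from the paper: the risk matrix, the prior, and the state/procedure counts are assumptions chosen to show how each principle collapses a whole risk function into a single number per procedure, and how the two principles can disagree.

```python
import numpy as np

# Hypothetical discrete decision problem: rho[omega, phi] is the risk
# of procedure phi in state of nature omega (rows = states, cols = procedures).
rho = np.array([
    [0.2, 0.5, 0.4],
    [0.9, 0.3, 0.5],
    [0.1, 0.8, 0.5],
])

# Bayes principle: average risk with respect to an a priori distribution
# lambda over the states -- one number per procedure.
prior = np.array([0.5, 0.3, 0.2])
bayes_risk = prior @ rho
bayes_choice = int(np.argmin(bayes_risk))

# Minimax principle: supremum of the risk function over all states --
# again one number per procedure, assuming least favorable circumstances.
minimax_risk = rho.max(axis=0)
minimax_choice = int(np.argmin(minimax_risk))
```

With these illustrative numbers the Bayesian prefers the first procedure while the minimax criterion prefers the third, which is exactly the kind of divergence between the two extremes that the abstract alludes to.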
Whereas it may be difficult to say what to do in a statistical decision problem, it is generally easier to say what not to do, so that the statistician separates out from consideration all inadmissible strategies and presents the practical man with what is at best a minimal essentially complete class of procedures. The choice of one from among these admissible procedures is then left to the best judgment, intuition, and past experience of the practical man. If the class is a small one, we have then achieved everything one can ask for, for the actual choice will then easily be made. The difficulty, however, is that the classes are usually much too big to be of real help. The trouble, in a word, lies in the fact that it is the space $\Omega$ itself of possible states that is generally too big: one simply cannot look at and assess all the values of $\omega$ for each decision function $\varphi$. Now when problems turn out to be too big for practical purposes, it is natural to look for ways of cutting them down to size by methods of simplification or approximation in which very little of the original problem is lost. It is precisely such a cutting down or slicing up of the problem that we propose to treat in this paper, in the hope that it may help bring the theory and practice of statistical decision functions somewhat closer together.
The Modified Minimax Principle. The minimax principle, which looks only at the single value $\sup_\omega \rho(\omega, \varphi)$, suffers from the defect of being an over-simplification. Yet it suggests, by means of a simple modification, a natural way of approximating to the problem.
This is to cut $\Omega$ up, to partition it into sets or "slices" $\Omega_s$, $s$ running over an index set $S$, and then look at $\sup_{\omega \in \Omega_s} \rho(\omega, \varphi) = \alpha(s, \varphi)$ for each $s$ in $S$, so that corresponding to the partition $\Omega = \cup_{s \in S} \Omega_s$ we look only at the values $\alpha(s, \varphi)$ for $s \in S$ instead of at $\rho(\omega, \varphi)$ for all $\omega$. The range of $\rho_\varphi$ is thus replaced by the smaller range of $\alpha_\varphi$, making comparisons of procedures that much easier. It is this slicing up of $\Omega$ (or its replacement by the smaller class $S$) and consequent simplification of the risk functions in the above way that we call the modified minimax principle. (By way of analogy, one might think of upper Darboux sums approximating Riemann integrals.) The reduced game can then be treated as any other game: one can play Bayes or minimax in it, or attempt to delineate its admissible strategies. If the "slicing principle" used is such that the supremum of $\rho_\varphi$ over each slice is not much different from the values within the slice, or has some other reasonable property, then very little is lost. The question of what are reasonable or natural slicing principles is clearly of primary importance here, and we shall present what we believe are several of them. The theory of invariance provides us with the most powerful of these slicing principles and will play a central role in our considerations. The natural slicing of $\Omega$ into its orbits under the group leads us to what appears to be the best possible generalization of the Hunt-Stein theorem, and to its most natural setting: namely to the theorem that under certain regularity conditions which are often met in practice the invariant procedures form a complete class in the sense of the sliced up risk functions.
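The construction of the reduced game can be sketched numerically. In this minimal illustration (the partition, risk values, and slice labels are assumptions, not taken from the paper), each risk function $\rho_\varphi$ over six states is replaced by the two-valued function $\alpha(\cdot, \varphi)$ of the slice index, after which the reduced game can be played minimax as the abstract describes.

```python
import numpy as np

# Hypothetical risk matrix rho[omega, phi]: six states, two procedures.
rho = np.array([
    [0.20, 0.60],
    [0.30, 0.50],
    [0.25, 0.55],   # states 0-2 form slice "A"
    [0.90, 0.40],
    [0.85, 0.45],
    [0.80, 0.50],   # states 3-5 form slice "B"
])

# A partition of Omega into slices Omega_s, s in S = {"A", "B"}.
slices = {"A": [0, 1, 2], "B": [3, 4, 5]}

# Modified minimax principle: replace rho(., phi) by
# alpha(s, phi) = sup over Omega_s of rho(omega, phi).
alpha = {s: rho[idx].max(axis=0) for s, idx in slices.items()}

# Playing minimax in the reduced game: minimize the sup of alpha over s.
sliced_sup = np.vstack(list(alpha.values())).max(axis=0)
modified_minimax_choice = int(np.argmin(sliced_sup))
```

Note that in the reduced game each procedure is summarized by only two numbers, $\alpha(A, \varphi)$ and $\alpha(B, \varphi)$, so admissibility comparisons (does $\alpha(s, \varphi) \le \alpha(s, \varphi')$ for all $s$?) become far more tractable than comparing the full risk functions pointwise; here neither procedure dominates the other, so both remain admissible in the reduced game.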
The theory of composite hypotheses is also discussed in this light, and an example of theoretical interest is given illustrating these concepts, in which a difficult problem undergoes a striking simplification. Finally, the use of previous experience as a slicing principle is discussed, and related to a purely game-theoretic model which we have constructed for the modified minimax principle and which we have called a mixed game. Though these slicing principles appear to be among the most important, the search for others continues. A method of approximation and simplification having been established, it remains to be seen whether these principles and available numerical methods can be combined to make an effective instrument in practice.
Published: 1959-03-14
@article{1177706355,
     author = {Wesler, Oscar},
     title = {Invariance Theory and a Modified Minimax Principle},
     journal = {Ann. Math. Statist.},
     volume = {30},
     number = {4},
     year = {1959},
     pages = {1-20},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1177706355}
}
Wesler, Oscar. Invariance Theory and a Modified Minimax Principle. Ann. Math. Statist., Volume 30 (1959), no. 4, pp. 1-20. http://gdmltest.u-ga.fr/item/1177706355/