The Sequential Compound Decision Problem with $m \times n$ Finite Loss Matrix
Van Ryzin, J.
Ann. Math. Statist., Volume 37 (1966), no. 6, pp. 954-975. Harvested from Project Euclid
Consideration of a sequence of statistical decision problems having identical generic structure constitutes a sequential compound decision problem. The risk of a sequential compound decision problem is defined as the average risk of the component problems. In the case where the component decisions are between two fully specified distributions $P_1$ and $P_2$, $P_1 \neq P_2$, Samuel (Theorem 2 of [9]) gives a sequential decision function whose risk is bounded above by the risk of a best "simple" procedure (one based on knowing the proportion of component problems in which $P_2$ is the governing distribution) plus a sequence of positive numbers converging to zero, uniformly in the space of parameter-valued sequences, as the number of problems increases. Related results are abstracted by Hannan in [2] for the sequential compound decision problem in which the parameter space of the component problem is finite. The decision procedures in both instances rely on the technique of "artificial randomization," which was introduced and effectively used by Hannan in [1] for sequential games in which player I's space is finite. In the game situation such randomization is necessary. However, in the compound decision problem such "artificial randomization" is not necessary, as is shown in this paper. Specifically, we consider the case where each component problem consists of making one of $n$ decisions based on an observation from one of $m$ distributions. Theorems 4.1, 4.2, and 4.3 give upper bounds for the difference in risks (the regret function) between certain sequential compound decision procedures and a best "simple" procedure which is Bayes against the empirical distribution on the component problem parameter space. None of the sequential procedures presented depends on "artificial randomization." The upper bounds in these three theorems are all of order $N^{-\frac{1}{2}}$ and are uniform in the parameter-valued sequences. All procedures depend at stage $k$ on substituting estimates of the $(k-1)$st (or $k$th) stage empirical distribution $p_{k-1}$ (or $p_k$) on the component parameter space into a Bayes solution of the component problem with respect to $p_{k-1}$ (or $p_k$). Theorem 4.1 (except in the case where the estimates are degenerate) and Theorem 4.3, when specialized to the compound testing case between $P_1$ and $P_2$ (Theorems 5.1 and 5.2), yield a threefold improvement of Samuel's results mentioned above: they simultaneously eliminate the "artificial randomization," improve the convergence rate of the upper bound on the regret function to $N^{-\frac{1}{2}}$, and widen the class of estimates. Higher-order uniform bounds on the regret function in the sequential compound testing problem are also given. The bounds in Theorems 5.3 and 5.4 (or Theorems 5.5 and 5.6) are of orders $O((\log N)N^{-1})$ and $o(N^{-\frac{1}{2}})$, respectively, and are attained by imposing suitable continuity assumptions on the induced distribution of a certain function of the likelihood ratio of $P_1$ and $P_2$. Theorem 6.1 extends Theorems 4.1, 4.2, and 4.3 to the related "empirical Bayes" problem. Lower bounds of equivalent or better order are also given for all theorems. The next section introduces notation and preliminaries to be used in this paper and in the following paper [15].
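To make the plug-in idea concrete, the following is a minimal illustrative sketch (not the paper's exact procedure) for the compound testing case: each component problem tests $P_1 = N(0,1)$ against $P_2 = N(1,1)$ under 0-1 loss, and at stage $k$ the component Bayes test is applied with an estimate of the $(k-1)$st-stage empirical proportion of $P_2$-governed components substituted for the prior. The choice of normal distributions, the truncated sample-mean estimate of that proportion, and the function names compound_plugin_test and normal_pdf are assumptions made here for illustration and are not taken from the paper.

import numpy as np

def normal_pdf(x, loc):
    # Density of N(loc, 1) at x.
    return np.exp(-0.5 * (x - loc) ** 2) / np.sqrt(2.0 * np.pi)

def compound_plugin_test(x):
    # Plug-in sequential compound rule: at stage k, play the component Bayes
    # test against an estimate of the (k-1)st-stage empirical proportion of P2.
    decisions = []
    p_hat = 0.5  # arbitrary initial guess before any data are seen (an assumption)
    for k, xk in enumerate(x, start=1):
        # Component Bayes test against the prior (1 - p_hat, p_hat):
        # decide 2 iff p_hat * f2(xk) >= (1 - p_hat) * f1(xk).
        f1, f2 = normal_pdf(xk, 0.0), normal_pdf(xk, 1.0)
        decisions.append(2 if p_hat * f2 >= (1.0 - p_hat) * f1 else 1)
        # Since E[X] = p under the mixture of N(0,1) and N(1,1), the sample mean,
        # truncated to [0, 1], estimates the proportion p_k of P2-governed stages.
        p_hat = float(np.clip(np.mean(x[:k]), 0.0, 1.0))
    return np.array(decisions)

# Example: 1000 component problems, 30 percent governed by P2 = N(1,1).
rng = np.random.default_rng(0)
theta = (rng.random(1000) < 0.3).astype(int) + 1          # 1 where P1 governs, 2 where P2
x = rng.normal(loc=(theta - 1).astype(float), scale=1.0)  # X_k ~ N(0,1) or N(1,1)
print("average 0-1 loss:", np.mean(compound_plugin_test(x) != theta))

The sketch only illustrates the non-randomized plug-in structure; the paper's theorems (e.g., 5.1 and 5.2) concern the specific procedures and estimates defined there, for which the regret is bounded, uniformly in the parameter sequence, by a quantity of order $N^{-\frac{1}{2}}$.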
Published: 1966-08-14
@article{1177699376,
     author = {Van Ryzin, J.},
     title = {The Sequential Compound Decision Problem with $m \times n$ Finite Loss Matrix},
     journal = {Ann. Math. Statist.},
     volume = {37},
     number = {6},
     year = {1966},
     pages = {954-975},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1177699376}
}
Van Ryzin, J. The Sequential Compound Decision Problem with $m \times n$ Finite Loss Matrix. Ann. Math. Statist., Volume 37 (1966), no. 6, pp. 954-975. http://gdmltest.u-ga.fr/item/1177699376/