Simultaneous consideration of $N$ statistical decision problems having identical generic structure constitutes a compound decision problem. The risk of a compound decision problem is defined as the average risk of the component problems. When the component decisions are between two fully specified distributions $P_1$ and $P_2$, $P_1 \neq P_2$, Hannan and Robbins [5] give a decision function whose risk is uniformly close (for $N$ large) to the risk of a best "simple" procedure based on knowing the proportion of component problems in which $P_2$ is the governing distribution. This result was motivated by heuristic arguments and an example (component decisions between $\mathfrak{N}(-1, 1)$ and $\mathfrak{N}(1, 1)$) given by Robbins [8]. In both papers, the decision function for each component problem depended on the data from all $N$ problems.

This paper generalizes and strengthens a result of Hannan and Robbins (Theorem 4, [5]) to the case where each component problem consists of making one of $n$ decisions based on an observation from one of $m$ distributions. Specifically, we find upper bounds for the difference in risks (the regret function) between a certain compound procedure and a best "simple" procedure, namely one which is Bayes against the empirical distribution on the component parameter space. Theorem 2 gives sufficient conditions for a uniform (in parameter sequences) bound on the regret function of order $N^{-\frac{1}{2}}$, while Theorem 3 states sufficient conditions for a uniform bound of order $N^{-1}$. For $m = n = 2$, Theorem 2 furnishes a strengthening of Theorem 4 of [5]. More extensive results for the case $m = n = 2$ are given in a paper by Hannan and Van Ryzin [6]. Note that the procedure considered here makes the $N$ decisions after the data from all $N$ problems are available. The sequential case ($k$th decision after observations $1, 2, \cdots, k$, $k = 1, \cdots, N$) is treated by Hannan in [3] and by Samuel in [10].
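To make the compound idea concrete, the following is a minimal simulation sketch of Robbins' example: each component problem decides between $\mathfrak{N}(-1,1)$ and $\mathfrak{N}(1,1)$. Since $E[\bar{X}] = 2p - 1$ when a fraction $p$ of the components are governed by $\mathfrak{N}(1,1)$, the grand mean yields an estimate of $p$, and the Bayes rule against that estimated proportion is then applied to every component. All function and variable names here are illustrative, not the paper's notation, and this is a hedged sketch of the heuristic, not the exact procedure analyzed in the theorems.

```python
import math
import random

def compound_rule(xs):
    """Robbins-style compound rule: decide between mean -1 and mean +1
    (unit variance) in each component, pooling data across all N problems.

    Estimates the proportion p of +1 components from the grand mean
    (E[X-bar] = 2p - 1), then applies the Bayes rule against that
    estimate: decide +1 iff x > (1/2) * ln((1 - p_hat) / p_hat).
    """
    n = len(xs)
    # Clamp the estimate away from 0 and 1 so the log-odds cutoff is finite.
    p_hat = min(max((sum(xs) / n + 1.0) / 2.0, 1e-6), 1.0 - 1e-6)
    cut = 0.5 * math.log((1.0 - p_hat) / p_hat)
    return [1 if x > cut else -1 for x in xs], p_hat

# Simulate N component problems with true proportion p of +1 components.
random.seed(0)
N, p = 10000, 0.3
thetas = [1 if random.random() < p else -1 for _ in range(N)]
xs = [random.gauss(th, 1.0) for th in thetas]
decisions, p_hat = compound_rule(xs)
# Compound risk = average 0-1 loss over the N components.
risk = sum(d != th for d, th in zip(decisions, thetas)) / N
```

For large $N$ the estimate `p_hat` is close to the true proportion, so the compound risk approaches that of the best simple procedure, which knows $p$; the theorems below bound this gap uniformly at rates $N^{-\frac{1}{2}}$ and $N^{-1}$.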