Simultaneous consideration of $n$ statistical decision problems having identical generic structure constitutes a compound decision problem. The risk of a compound decision problem is defined as the average risk of the component problems. When the component decisions are between two fully specified distributions $P_0$ and $P_1$, $P_0 \neq P_1$, Hannan and Robbins [2] give a decision function whose risk is uniformly close (for $n$ large) to the risk of the best "simple" procedure based on knowing the proportion of component problems in which $P_1$ is the governing distribution. This result was motivated by heuristic arguments and an example (component decisions between $N(-1, 1)$ and $N(1, 1)$) given by Robbins [4]. In both papers, the decision functions for the component problems depend on the data from all $n$ problems.

The present paper considers, as in Hannan and Robbins [2], compound decision problems in which the component decisions are between two distinct, completely specified distributions, and the decision functions considered are those of [2]. The improvement over the results of [2] is that an order of convergence for the bound is obtained in Theorem 1. Higher-order bounds are obtained in Theorems 2 and 3 under certain continuity assumptions on the induced distribution of a suitably chosen function of the likelihood ratio of the two distributions.
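To fix ideas, these notions may be written schematically as follows; the symbols $\boldsymbol{\theta}$, $\mathbf{t}$, $R$, $\phi$ and $p_{\boldsymbol{\theta}}$ are used here only for illustration and need not coincide with the notation adopted below. If $\theta_i \in \{0, 1\}$ indicates whether $P_0$ or $P_1$ governs the $i$th component problem and $\mathbf{t} = (t_1, \ldots, t_n)$ is a compound decision function whose $i$th coordinate may depend on the data from all $n$ problems, then the compound risk is the average
$$R_n(\boldsymbol{\theta}, \mathbf{t}) = \frac{1}{n} \sum_{i=1}^{n} R(\theta_i, t_i), \qquad p_{\boldsymbol{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \theta_i,$$
where $R(\theta_i, t_i)$ is the usual component risk and $p_{\boldsymbol{\theta}}$ is the proportion of components governed by $P_1$. With $\phi(p)$ denoting the risk of the best simple procedure available when this proportion is known, the result of [2] cited above states that $R_n(\boldsymbol{\theta}, \mathbf{t}) - \phi(p_{\boldsymbol{\theta}})$ is bounded above, uniformly in $\boldsymbol{\theta}$, by a quantity tending to zero as $n \to \infty$; the theorems below concern the order of this bound.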