This paper is a continuation of [8], and considers the sequential compound decision problem for the case where the component decisions are of the simple versus simple hypothesis testing type, and thus can be stated in terms of testing whether $\theta = 0$ or $\theta = 1$. The loss for the compound decision is taken to be the average of the losses in the component decisions, and the risk for the compound decision is defined correspondingly. Let $R(\eta)$ denote the Bayes envelope function of the component problem. In [8] two sequences of compound decision rules $\{T^\ast_n\}$ and $\{\hat{T}_n\}$ are exhibited such that, for $n$ sufficiently large, the risk incurred by $\hat{T}_n$ never exceeds $R(\vartheta_n) + \epsilon$, where $\vartheta_n$ is the average of the true $\theta$-values in the first $n$ components, and this holds uniformly in all possible sequences of $\theta$'s; for $T^\ast_n$ a corresponding statement is valid provided $R(\eta)$ is differentiable for all $0 \leqq \eta \leqq 1$. Here we prove that for any sequence of $\theta$-values the difference between the loss incurred by $\hat{T}_n$ and $R(\vartheta_n)$ converges to zero in probability, and that, under the differentiability assumption, a corresponding statement holds with probability one for $T^\ast_n$. Numerical data are provided to indicate the rate of convergence.
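For concreteness, the quantities compared above may be sketched as follows; the component loss $L$, the observation $X$ with distribution $P_\theta$, and the component rules $t_i$ are notation assumed here for illustration and are not taken from [8]. With $\eta$ denoting a prior probability that $\theta = 1$, the Bayes envelope of the component problem is the minimum attainable Bayes risk,
\[
  R(\eta) \;=\; \inf_{t}\,\bigl[(1-\eta)\,E_{0}\,L(0, t(X)) \;+\; \eta\,E_{1}\,L(1, t(X))\bigr],
  \qquad 0 \leqq \eta \leqq 1,
\]
and for a compound rule $T_n = (t_1, \dots, t_n)$ acting on the parameter sequence $(\theta_1, \dots, \theta_n)$ the compound risk is the average of the component risks,
\[
  \vartheta_n \;=\; \frac{1}{n} \sum_{i=1}^{n} \theta_i,
  \qquad
  R_n\bigl((\theta_1, \dots, \theta_n), T_n\bigr) \;=\; \frac{1}{n} \sum_{i=1}^{n} E\,L(\theta_i, t_i),
\]
the expectations being taken under the true $\theta$-sequence. The results stated above compare the compound risk (and, for the convergence statements, the realized average loss) of $\hat{T}_n$ and $T^\ast_n$ with $R(\vartheta_n)$.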