Let $\{(X_i, \theta_i)\}$ be a sequence of independent random vectors, where $X_i$ has the uniform density $U(0, \theta_i)$ with $0 < \theta_i < m (< \infty)$ and the unobservable $\theta_i$ are i.i.d. with common distribution $G$ in some class $\mathscr{G}$ of prior distributions. In the $(n + 1)$st problem we estimate $\theta_{n + 1}$ by $t_n(X_1, \cdots, X_n, X_{n + 1}) \doteq t_n(\mathbf{X})$, incurring the risk $R_n \doteq \mathbf{E}(t_n(\mathbf{X}) - \theta_{n + 1})^2$, where $\mathbf{E}$ denotes expectation with respect to all the random variables $\{(X_i, \theta_i)\}_{i = 1}^{n + 1}$. Let $R$ be the infimum Bayes risk with respect to $G$. In this paper the author exhibits empirical Bayes estimators for which $R_n - R = O(n^{-1/2})$, and shows that there is a sequence of empirical Bayes estimators for which $R_n - R$ is bounded below by a quantity of the same order $n^{-1/2}$.
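For orientation, a standard computation (not spelled out in the review) gives the form of the Bayes rule whose risk $R$ is being approached: under squared-error loss the Bayes estimator against a prior $G$ is the posterior mean, and for the uniform likelihood $f(x \mid \theta) = \theta^{-1} \mathbf{1}(0 < x < \theta)$ it reduces to
$$
t_G(x) = \mathbf{E}(\theta \mid X = x)
= \frac{\int_x^m \theta \cdot \theta^{-1} \, dG(\theta)}{\int_x^m \theta^{-1} \, dG(\theta)}
= \frac{\int_x^m dG(\theta)}{\int_x^m \theta^{-1} \, dG(\theta)}, \qquad 0 < x < m.
$$
Empirical Bayes procedures replace the unknown $G$-dependent quantities in this ratio by estimates built from the past observations $X_1, \cdots, X_n$, and $R_n - R$ measures the excess risk incurred by that substitution.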