A $k$-sided die is thrown $n$ times, to estimate the probabilities $\theta_1, \ldots, \theta_k$ of landing on the various sides. The MLE of $\theta$ is the vector of empirical proportions $p = (p_1, \ldots, p_k)$. Consider a set of Bayesians whose priors put uniformly positive mass on all reasonable subsets of the parameter space. Their posterior distributions will be uniformly concentrated near $p$. Sharp bounds are given, using entropy. These bounds apply to all sample sequences: there are no exceptional null sets.
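The abstract's claim can be illustrated numerically: the multinomial likelihood ratio satisfies $L(\theta)/L(p) = \exp\{-n\,\mathrm{KL}(p \,\|\, \theta)\}$, so posterior mass away from $p$ decays exponentially in $n$ times the relative entropy. A minimal sketch (the fair die, the sample size, and the comparison point `theta_far` are illustrative choices, not from the paper):

```python
import math
import random

random.seed(0)

k, n = 6, 10_000
# Throw a fair k-sided die n times; true theta_i = 1/k.
counts = [0] * k
for _ in range(n):
    counts[random.randrange(k)] += 1

# The MLE of theta is the vector of empirical proportions p.
p = [c / n for c in counts]

def kl(p, q):
    # Relative entropy KL(p || q) = sum_i p_i log(p_i / q_i).
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# For any theta, L(theta)/L(p) = exp(-n * KL(p || theta)):
# the log-likelihood ratio is n * sum_i p_i log(theta_i / p_i).
theta_far = [0.5] + [0.5 / (k - 1)] * (k - 1)  # a parameter point away from p
log_ratio = -n * kl(p, theta_far)
print(p)
print(log_ratio)  # large and negative: mass near theta_far is exponentially small
```

For this sample, `log_ratio` is on the order of $-2400$, so any prior putting comparable mass near $p$ and near `theta_far` yields a posterior overwhelmingly concentrated near $p$, consistent with the entropy bounds described above.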