In the discrimination problem the random variable $\theta$, known to take values in $\{1,\cdots, M\}$, is estimated from the random vector $X$. All that is known about the joint distribution of $(X, \theta)$ is what can be inferred from a sample $(X_1, \theta_1),\cdots, (X_n, \theta_n)$ of size $n$ drawn from that distribution. A discrimination rule is any procedure that determines a decision $\hat{\theta}$ for $\theta$ from $X$ and $(X_1, \theta_1),\cdots, (X_n, \theta_n)$. A rule is called $k$-local if the decision $\hat{\theta}$ depends only on $X$ and the pairs $(X_i, \theta_i)$ for which $X_i$ is one of the $k$ closest to $X$ among $X_1,\cdots, X_n$. It is shown that for any $k$-local discrimination rule, the mean-square difference between the probability of error for the rule and its deleted estimate is bounded by $A/n$, where $A$ is an explicitly given small constant that depends only on $M$ and $k$. Thus distribution-free confidence intervals can be placed about probability-of-error estimates for $k$-local discrimination rules.
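The $k$-nearest-neighbor rule is the standard example of a $k$-local rule, since its decision depends only on $X$ and the $k$ sample pairs closest to $X$. The following is a minimal sketch (not from the paper; the function names, the Euclidean metric, and majority voting are illustrative assumptions) of such a rule together with its deleted estimate, i.e. the leave-one-out error frequency in which each $X_i$ is classified using the remaining $n-1$ pairs:

```python
import numpy as np

def knn_decide(x, X, theta, k):
    """Illustrative k-local rule: majority vote among the k sample
    points X_i closest to x (Euclidean distance, ties by index order)."""
    dists = np.linalg.norm(X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = np.bincount(theta[nearest])
    return int(np.argmax(votes))

def deleted_estimate(X, theta, k):
    """Deleted (leave-one-out) estimate of the probability of error:
    classify each X_i by the rule applied to the other n-1 pairs,
    and return the fraction of mistakes."""
    n = len(X)
    errors = 0
    for i in range(n):
        keep = np.arange(n) != i  # delete the i-th pair
        errors += knn_decide(X[i], X[keep], theta[keep], k) != theta[i]
    return errors / n
```

The bound in the abstract says that, whatever the distribution of $(X, \theta)$, the mean-square deviation of `deleted_estimate` from the rule's true error probability decays at rate $A/n$.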