One observes $n$ data points $(\mathbf{t}_i, Y_i),$ where the mean of $Y_i,$ conditional on the regression function $f,$ equals $f(\mathbf{t}_i).$ The prior distribution of the vector $\mathbf{f} = (f(\mathbf{t}_1), \ldots, f(\mathbf{t}_n))^t$ is unknown but lies in a known class $\Omega.$ An estimator $\hat{\mathbf{f}}$ of $\mathbf{f}$ is found that minimizes the maximum risk $E\|\hat{\mathbf{f}} - \mathbf{f}\|^2,$ where the maximum is taken over all priors in $\Omega$ and the minimum over linear estimators of $\mathbf{f}.$ Asymptotic properties of this estimator are studied in the case that $\mathbf{t}_i$ is one-dimensional and $\Omega$ is the class of priors under which $f$ is smooth.
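For concreteness, the criterion can be restated in symbols (a minimal sketch; writing a linear estimator as $\hat{\mathbf{f}} = A\mathbf{Y}$ for an $n \times n$ matrix $A$ and $\pi$ for a prior in $\Omega$ is our notation, not taken from the text):
$$
\hat{\mathbf{f}} = A^{*}\mathbf{Y}, \qquad A^{*} = \operatorname*{arg\,min}_{A} \; \sup_{\pi \in \Omega} \; E_{\pi}\,\|A\mathbf{Y} - \mathbf{f}\|^{2},
$$
where the expectation is taken over both the prior $\pi$ on $\mathbf{f}$ and the sampling distribution of $\mathbf{Y}$ given $\mathbf{f}.$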