Let the random variables $X_{i1}, X_{i2}, \cdots, X_{in}$, $i = 1, 2$, be real valued and independent with density functions $f(x - \theta_i)$, $\theta_i$ real, $i = 1, 2$ (with respect to Lebesgue measure). We take $\int_{-\infty}^\infty xf(x)\, dx = 0$ with no loss of generality. The problem considered here is estimation of the function $\varphi(\theta_1, \theta_2) = \max(\theta_1, \theta_2)$ with a squared error loss function. Questions of minimaxity and admissibility of certain natural estimators are considered. This problem is the estimation analogue of the well-known ranking and selection problem, which has received considerable attention in the past. For a bibliography see Bechhofer, Kiefer, and Sobel (1968). Whereas previous work is concerned with choosing the population with the larger parameter, here we are concerned with estimating the larger parameter.

Consider, for the moment, the case where $n = 1$. A natural estimator of $\varphi(\theta_1, \theta_2)$ is $\varphi(X_{11}, X_{21}) = \max(X_{11}, X_{21})$. This estimator is symmetric in the observations and invariant under translations which take $(\theta_1, \theta_2)$ to $(\theta_1 + a, \theta_2 + a)$, for any real constant $a$. Under suitable conditions, one of which is that $f$ be symmetric (the conditions will be stated precisely for general $n$ below), it is shown that $\varphi(X_{11}, X_{21})$ is minimax. However, $\varphi(X_{11}, X_{21})$ is not in general admissible. Furthermore, when $f$ is not symmetric, $\varphi(X_{11}, X_{21})$ also need not be minimax. Another estimator with the invariance properties stated above is the a posteriori expected value of $\varphi(\theta_1, \theta_2)$, given $(X_{11}, X_{21})$, when the generalized prior distribution of $(\theta_1, \theta_2)$ is taken to be the uniform distribution over the two-dimensional plane. For many estimation problems, such an adaptation of the Pitman estimator is known to be minimax. However, for this problem, under suitable conditions (again including that $f$ be symmetric), this estimator need not be minimax. It is true, though, that this estimator is admissible.

In order to summarize the results for general $n$, it is convenient to define the analogues of the estimators considered above. Let $X_i$ be the a posteriori expected value of $\theta_i$, given $X_{ij}, j = 1, 2, \cdots, n$, when $\theta_i$ has the generalized uniform distribution as a priori distribution. That is,
\begin{equation*}\tag{1.1}X_i = \int \theta_i \prod^n_{j = 1} f(X_{ij} - \theta_i)\, d\theta_i \Big/ \int \prod^n_{j = 1} f(X_{ij} - \theta_i)\, d\theta_i, \qquad i = 1, 2.\end{equation*}
Note that $X_i$ is the usual Pitman estimator of $\theta_i$, and thus if $f$ is the normal density then $X_i = \bar{X}_i$. Let
\begin{equation*}\tag{1.2}\varphi(X_{11}, X_{12}, \cdots, X_{1n}, X_{21}, \cdots, X_{2n}) = \varphi(X_1, X_2) = \max(X_1, X_2).\end{equation*}
Also define $\delta^\ast$ to be the a posteriori expected value of $\varphi(\theta_1, \theta_2)$, given $X_{ij}, i = 1, 2, j = 1, 2, \cdots, n$, when the generalized prior distribution of $(\theta_1, \theta_2)$ is the uniform distribution on the two-dimensional plane.
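To make (1.1) and (1.2) concrete, the following is a minimal numerical sketch in Python, not taken from the paper: the function name `pitman_estimate`, the grid parameters, and the choice of a standard normal $f$ are illustrative assumptions.

```python
import numpy as np

def pitman_estimate(x, f, half_width=10.0, num_points=4001):
    """Numerically evaluate the Pitman estimator (1.1): the posterior
    mean of theta under the uniform (generalized) prior,
        X_i = int theta prod_j f(x_j - theta) dtheta
              / int prod_j f(x_j - theta) dtheta.
    `f` is the vectorized error density; the theta grid is assumed wide
    enough that the likelihood is negligible outside it."""
    center = np.mean(x)
    theta = np.linspace(center - half_width, center + half_width, num_points)
    # Likelihood prod_j f(x_j - theta) on the grid, computed in log space
    # to avoid underflow of the product for larger n.
    with np.errstate(divide="ignore"):
        log_lik = np.sum(np.log(f(x[:, None] - theta[None, :])), axis=0)
    lik = np.exp(log_lik - log_lik.max())
    # The uniform grid spacing cancels in the ratio of Riemann sums.
    return np.sum(theta * lik) / np.sum(lik)

# Example: for standard normal f, (1.1) reduces to the sample mean,
# and phi(X_1, X_2) in (1.2) is the larger of the two Pitman estimates.
rng = np.random.default_rng(0)
f = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
x1 = rng.normal(loc=1.5, size=5)
x2 = rng.normal(loc=0.5, size=5)
X1, X2 = pitman_estimate(x1, f), pitman_estimate(x2, f)
print(X1, x1.mean())   # these two should agree closely
print(max(X1, X2))     # the estimator phi(X_1, X_2)
```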
That is,
\begin{align*}\tag{1.3}\delta^\ast(X_{11}, X_{12}, \cdots, X_{2n}) &= \iint \varphi(\theta_1, \theta_2) \prod^n_{j = 1} f(X_{1j} - \theta_1) \prod^n_{j = 1} f(X_{2j} - \theta_2)\, d\theta_1\, d\theta_2 \\ &\quad\cdot \Big\lbrack \iint \prod^n_{j = 1} f(X_{1j} - \theta_1) \prod^n_{j = 1} f(X_{2j} - \theta_2)\, d\theta_1\, d\theta_2 \Big\rbrack^{-1}.\end{align*}

In order to state the main results explicitly it is convenient to introduce some notation which will be presented more formally in the next section. Recall that $X_i$ is the usual Pitman estimator of $\theta_i$. Let $Y_i = (Y_{i1}, Y_{i2}, \cdots, Y_{i, n - 1})$, where $Y_{ij} = X_{i, j + 1} - X_{i1}$. Let $p(x, y)$ be the conditional density of $X_i$ given $Y_i$ when $\theta_i = 0$. For the problem of estimating $\varphi(\theta_1, \theta_2)$, the main results are as follows:

(a) If $p(x, y) = p(-x, y)$ and $E\{E\lbrack(X_1^2 + X_2^2) \mid Y_1, Y_2\rbrack\} < \infty$, then $\varphi(X_1, X_2)$ is minimax. The proof of this result is based on an idea of Farrell (1964).

(b) If $p(x, y) \neq p(-x, y)$, then the problem is complicated, and in fact an example is given showing that $\varphi(\cdot)$ is not minimax.

(c) The estimator $\varphi(\cdot)$ need not be admissible. When $f$ is normal, it is easily shown, using a theorem and remark of Sacks (1963), that $\varphi(\cdot)$ is inadmissible (Section 4).

(d) If $E\{E^2\lbrack(X_1^2 + X_2^2) \cdot |\log (X_1^2 + X_2^2)|^\beta \mid Y_1, Y_2\rbrack\} < \infty$ for some $\beta > 0$, then $\delta^\ast(\cdot)$ is admissible. The proof of this fact depends on results of Stein (1959b) and James and Stein (1961).

(e) The estimator $\delta^\ast$ need not be minimax, and in Section 3 it is shown that frequently $\delta^\ast$ is not minimax.

Despite the inadmissibility of $\varphi(X_1, X_2)$, this estimator has great intuitive appeal, is easy to use, and we have not succeeded in finding an estimator whose risk improves on the risk of $\varphi$. Also, it is minimax under the restrictions noted above. Furthermore, if the problem were to decide simultaneously which population has the larger mean and estimate the larger mean, then a reasonable formulation for which $\varphi(X_1, X_2)$ is an admissible estimator of $\varphi(\theta_1, \theta_2)$ could easily be found. See, for example, Cohen (1965). The admissible $\delta^\ast$, on the other hand, is more difficult to compute and has a larger bias than $\varphi(X_1, X_2)$; a numerical sketch of the normal case is given below.

In a future paper we shall discuss topics such as the general question of unbiased estimation of $\varphi(\theta_1, \theta_2)$, the maximum likelihood estimate and its properties, and the generation of invariant Bayes estimates, of which $\delta^\ast$ is a particular example. As we develop the main results, generalizations will be indicated. In the next section we introduce the notation for a more general model than the one given earlier and develop some preliminaries. In Section 3 we give the minimax results, while in Section 4 we give the admissibility results.
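As an illustration of the bias remark above, here is a minimal sketch of the normal case in Python, under assumptions not taken from the paper: when $f$ is $N(0, \sigma^2)$, the posterior of each $\theta_i$ under the uniform generalized prior is $N(\bar{X}_i, \sigma^2/n)$, independently, so $\delta^\ast$ in (1.3) reduces to the classical closed form for the expected maximum of two independent normals. The function name `delta_star_normal` and the Monte Carlo comparison are illustrative.

```python
import numpy as np
from scipy.stats import norm

def delta_star_normal(xbar1, xbar2, n, sigma=1.0):
    """delta* of (1.3) when f is N(0, sigma^2): the posterior mean of
    max(theta_1, theta_2) via the expected-maximum formula for two
    independent normals with means xbar_i and variances sigma^2/n."""
    s = sigma * np.sqrt(2.0 / n)      # posterior sd of theta_1 - theta_2
    a = (xbar1 - xbar2) / s
    return xbar1 * norm.cdf(a) + xbar2 * norm.cdf(-a) + s * norm.pdf(a)

# Monte Carlo comparison of the biases of phi = max(X1bar, X2bar) and
# delta* at theta_1 = theta_2 = 0: since delta* >= phi pointwise, its
# bias for max(theta_1, theta_2) = 0 is the larger of the two.
rng = np.random.default_rng(1)
n, reps = 5, 200_000
xbar1 = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1)
xbar2 = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1)
phi = np.maximum(xbar1, xbar2)
dstar = delta_star_normal(xbar1, xbar2, n)
print("bias of phi:   ", phi.mean())
print("bias of delta*:", dstar.mean())
```

Outside the normal case no closed form is available in general, and $\delta^\ast$ requires the double numerical integration in (1.3), which is the sense in which it is more difficult to compute than $\varphi(X_1, X_2)$.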