For comparative experiments with two or more treatments, rank methods possess, in their insensitivity to gross errors and extreme observations, a distinct advantage over the classical normal theory procedures (see [10]), besides providing exact significance levels when the form of the underlying distribution is unknown. The first of the rank tests developed was the Wilcoxon two-sample test, subsequently generalized to the $K$-sample problem by Kruskal and Wallis (see [12]). Both these tests have been shown to possess asymptotic (Pitman) efficiency equal to $3/\pi$ (against normal shift) relative to the classical $t$- and $\mathscr{F}$-tests respectively (see Andrews [1]). However, in many comparative experiments it is desirable, in the interest of increased precision, to stratify the population or to divide the experimental subjects into homogeneous (randomized) blocks. For such experimental designs, the first attempt at providing a rank test was made as far back as 1937 by Friedman [5] (for the one-observation-per-cell case), who proposed a test based on independent rankings of the observations within each block. This procedure, which we shall refer to in the sequel as the separate-ranking procedure, was subsequently extended to more general designs by Durbin [4] and Benard and van Elteren [2]. Van Elteren and Noether [22] computed the asymptotic efficiency of the separate-ranking procedure and showed (for the one-observation-per-cell case) that, relative to the normal theory $\mathscr{F}$-statistic, its efficiency (against normal shift) is $3K/\pi(K + 1)$, which takes the value $2/\pi$ for $K = 2$ and increases to $3/\pi$ as $K \rightarrow \infty$.
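The efficiency expression above is easy to tabulate. The following sketch (an illustrative computation, not part of the original analysis; the function name is our own) evaluates $3K/\pi(K + 1)$ for several values of $K$, confirming that it equals $2/\pi \approx 0.637$ at $K = 2$ and approaches the Wilcoxon/Kruskal-Wallis value $3/\pi \approx 0.955$ as $K$ grows:

```python
import math

def separate_ranking_efficiency(K: int) -> float:
    """Asymptotic (Pitman) efficiency 3K / (pi * (K + 1)) of the
    separate-ranking (Friedman-type) procedure relative to the
    normal theory F-statistic, against normal shift alternatives
    (one observation per cell)."""
    return 3 * K / (math.pi * (K + 1))

# Efficiency rises from 2/pi at K = 2 toward 3/pi as K -> infinity.
for K in (2, 3, 5, 10, 100):
    print(K, round(separate_ranking_efficiency(K), 4))
```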
In 1962, however, Hodges and Lehmann [9] pointed out that the rather low efficiency of the separate-ranking procedure was due, presumably, to the absence of interblock comparisons, and proposed a conditional test based on a combined ranking of all the observations after "alignment" (defined below) within each block (see also Mehra [17]). Subsequently, Lehmann, in a series of papers [13], [14], [15], laid the foundations of an entirely new and remarkable approach to nonparametric inference parallel to the classical normal theory (parametric) analysis of variance. However, the question of the asymptotic efficiency of the test proposed in [9] was left essentially unanswered. It is the purpose of the present paper to study the asymptotic efficiency of the conditional test proposed in [9]. In Section 2, the asymptotic version of this test is discussed. In Section 3, limit distributions under translation alternatives are obtained. Section 4 contains a discussion of the asymptotic efficiency, and Section 5 consists of certain concluding remarks.