Suppose $X_1, X_2, \cdots, X_n$ are known to be independently and identically distributed, each with the density function $f(x),$ with $\int^1_0f(x) dx = 1.$ Let $Y_1 \leqq Y_2 \leqq \cdots \leqq Y_n$ be the ordered values of $X_1, X_2, \cdots, X_n,$ and define $W_1 = Y_1, W_2 = Y_2 - Y_1, \cdots, W_n = Y_n - Y_{n -1},$ and $W_{n + 1} = 1 - Y_n,$ so that $W_1 + \cdots + W_{n + 1} = 1.$ Finally, define $Z_1, \cdots, Z_{n + 1}$ as the ordered values of $W_1, \cdots, W_{n + 1},$ so that $0 \leqq Z_1 \leqq Z_2 \leqq \cdots \leqq Z_{n + 1},$ with $Z_1 + \cdots + Z_{n+1} = 1.$ We are going to test the hypothesis that $f(x) = 1$ for $0 < x < 1,$ and we are going to consider only tests based on $Z_1, Z_2, \cdots, Z_n.$ The intuitive justification for this is that, roughly speaking, deviations from the hypothesis on any part of the unit interval are treated alike. Several authors have discussed tests based on $Z_1, \cdots, Z_n.$ (See references [1], [2], [3].) It is shown that, if $u$ is a number greater than unity, the test of the form "reject the hypothesis if $Z^u_1 + \cdots + Z^u_{n+1} > K$" is consistent against a very wide class of alternatives. When $u = 2,$ the resulting test has some desirable properties with respect to alternatives with linear density functions.
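The construction above is straightforward to carry out numerically: sort the sample, form the spacings $W_1, \cdots, W_{n+1},$ order them to obtain $Z_1, \cdots, Z_{n+1},$ and compare $Z^u_1 + \cdots + Z^u_{n+1}$ with a critical value $K.$ The following Python sketch is illustrative only; the function names `ordered_spacings` and `spacing_statistic` are invented for this example, and the critical value $K$ (not computed here) would have to be obtained from the null distribution of the statistic.

```python
import numpy as np

def ordered_spacings(x):
    """Return Z_1 <= ... <= Z_{n+1}, the ordered spacings of a sample on [0, 1].

    The spacings are W_1 = Y_1, W_i = Y_i - Y_{i-1} (i = 2, ..., n),
    and W_{n+1} = 1 - Y_n, where Y_1 <= ... <= Y_n are the order
    statistics of the sample.
    """
    y = np.sort(np.asarray(x, dtype=float))
    w = np.diff(np.concatenate(([0.0], y, [1.0])))  # W_1, ..., W_{n+1}
    return np.sort(w)                               # Z_1, ..., Z_{n+1}

def spacing_statistic(x, u=2.0):
    """Compute Z_1**u + ... + Z_{n+1}**u for a sample x on [0, 1]."""
    z = ordered_spacings(x)
    return np.sum(z ** u)

# Illustrative use (hypothetical critical value K): the hypothesis of a
# uniform density on (0, 1) is rejected when the statistic exceeds K.
rng = np.random.default_rng(0)
sample = rng.uniform(size=20)
print(spacing_statistic(sample, u=2.0))
```

Since the statistic is symmetric in the spacings, the ordering step does not affect its value; it is retained only to mirror the definition of $Z_1, \cdots, Z_{n+1}$ in the text.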