Assume the standard linear model $X_{n \times 1} = A_{n \times p} \theta_{p \times 1} + \varepsilon_{n \times 1},$ where $\varepsilon$ has an $n$-variate normal distribution with zero mean vector and identity covariance matrix. The least squares estimator for the coefficient $\theta$ is $\hat{\theta} \equiv (A'A)^{-1}A'X$. It is well known that $\hat{\theta}$ is dominated by James-Stein type estimators under the sum of squared error loss $|\theta - \hat{\theta}|^2$ when $p \geq 3$. In this article we discuss the possibility of improving upon $\hat{\theta}$ simultaneously under the "universal" class of losses: $\{L(|\theta - \hat{\theta}|): L(\cdot) \text{ any nondecreasing function}\}.$ An estimator that can be so improved is called universally inadmissible ($U$-inadmissible); otherwise it is called $U$-admissible. We prove that $\hat{\theta}$ is $U$-admissible for any $p$ when $A'A = I$. Furthermore, if $A'A \neq I$, then $\hat{\theta}$ is $U$-inadmissible if $p$ is "large enough." In a special case, $p \geq 4$ is large enough. The results are surprising. Implications are discussed.
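For context on the squared error claim, the following is a standard sketch (not drawn from this article) of the James-Stein improvement in the orthogonal case $A'A = I$, where $\hat{\theta} \sim N_p(\theta, I_p)$; the shrinkage form and risk identity below are the classical ones:
$$\hat{\theta}_{JS} = \left(1 - \frac{p-2}{|\hat{\theta}|^2}\right)\hat{\theta}, \qquad E_\theta\,|\hat{\theta}_{JS} - \theta|^2 = p - (p-2)^2\, E_\theta\!\left[\frac{1}{|\hat{\theta}|^2}\right] < p = E_\theta\,|\hat{\theta} - \theta|^2 \quad (p \geq 3).$$
The improvement therefore holds at every $\theta$ once $p \geq 3$, which is the domination referred to above; the question studied here is whether any such improvement can hold simultaneously over all nondecreasing losses $L(|\theta - \hat{\theta}|)$.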