This paper deals with linear regressions
\begin{equation*}\tag{1.1}
y_k = x_{k1}\beta_1 + \cdots + x_{kq}\beta_q + \epsilon_k, \quad k = 1, 2, \cdots
\end{equation*}
with given constants $x_{km}$ and with error random variables $\epsilon_k$ that are (a) uncorrelated or (b) independent. Let $E\epsilon_k = 0$ and $0 < E\epsilon_k^2 < \infty$ for all $k$. The individual error distribution functions (d.f.'s) are not assumed to be known, nor need they be identical for all $k$. They are assumed, however, to be elements of a certain set $F$ of d.f.'s. Consider the family of regressions associated with the family of all the error sequences possible under these restrictions. Then conditions on the set $F$ and on the $x_{km}$ are obtained such that the least squares estimators (LSE) of the parameters $\beta_1, \cdots, \beta_q$ are consistent in Case (a) (Theorem 1) or asymptotically normal in Case (b) (Theorem 2) for every regression of the respective families.

The motivation for these theorems lies in the fact that, under the given assumptions, statements based only on the available knowledge must always concern the regression family as a whole. Note, moreover, that the conditions of the theorems do not require any knowledge about the particular error sequence occurring in (1.1). Most of the conditions are necessary as well as sufficient, with the consequence that they cannot be improved upon under the limited information assumed to be available about the model. Since the conditions are very mild, the results apply to a large number of actual estimation problems.

We denote by $\mathfrak{F}(F)$ the set of all sequences $\{\epsilon_k\}$ that occur in the regressions of a family as characterized above. Thus $\mathfrak{F}(F)$ comprises all sequences of uncorrelated (Case (a)) or independent (Case (b)) random variables whose d.f.'s belong to $F$ but are not necessarily the same from term to term of the sequence. For each $G \in F$ the relations $\int x\,dG = 0$ and $0 < \int x^2\,dG < \infty$ hold. In this paper, $\mathfrak{F}(F)$ may be looked upon as a parameter space; a parameter point is then a sequence of $\mathfrak{F}(F)$. Correspondingly, we say that a statement holds on $\mathfrak{F}(F)$ (briefly, on $F$) if it holds for all $\{\epsilon_k\} \in \mathfrak{F}(F)$. The statements of Theorems 1 and 2 are of this kind.
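For orientation, the LSE referred to throughout may be written in the usual matrix form; the symbols $X_n$, $y_n$, and $\hat{\beta}_n$ are introduced here only for illustration and do not appear in the original text. With $X_n = (x_{km})$, $1 \le k \le n$, $1 \le m \le q$, and $y_n = (y_1, \cdots, y_n)'$,
\begin{equation*}
\hat{\beta}_n = (X_n'X_n)^{-1}X_n'y_n,
\end{equation*}
provided $X_n'X_n$ is nonsingular. The conditions of Theorems 1 and 2 concern the set $F$ and the behavior of $X_n'X_n$; for instance, a classical sufficient condition for consistency when the error variances are bounded above is that the smallest eigenvalue of $X_n'X_n$ tend to infinity.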
The proof of Theorem 1, as well as the proof of sufficiency in Theorem 2, is elementary and straightforward. Theorem 2 is a special case of a central limit theorem (holding uniformly on $\mathfrak{F}(F)$) for families of random sequences [3]. Some similarity between the roles of the parameter spaces $\mathfrak{F}(F)$ in our theorems and of the parameter spaces that occur, e.g., in the Gauss-Markov and related theorems may be seen in the fact that these theorems, too, remain true only as long as their conclusions hold for every parameter point in the respective spaces. As is well known, the statements of the Gauss-Markov and related theorems hold for every parameter vector $\beta_1, \cdots, \beta_q$ in a $q$-dimensional vector space (see, e.g., Scheffé 1959, pp. 13-14).

A result in the theory of linear regressions that bears some resemblance to the theorems of this paper has been obtained by Grenander and Rosenblatt (1957, p. 244). Let the error sequence $\{\epsilon_k\}$ in (1.1) be a weakly stationary random sequence with piecewise continuous spectral density, and let the regression vectors admit a joint spectral representation. Under these assumptions Grenander and Rosenblatt give necessary and sufficient conditions on the regression spectrum and on the family of admissible spectral densities for the LSE to be asymptotically efficient for every density of the family. In Sections 3 and 6 we discuss some examples relevant to Theorems 1 and 2.
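As a purely illustrative supplement (not part of the original text), the following sketch simulates model (1.1) with independent errors whose d.f.'s change from term to term within a small set of zero-mean distributions playing the role of $F$, and computes the LSE for increasing sample sizes; all names in the code are ours, and the particular choice of distributions and regressors is arbitrary.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def draw_errors(n):
    # Independent errors; the d.f. may differ from term to term.
    # Illustrative stand-in for F: three zero-mean, unit-variance d.f.'s.
    draws = np.column_stack([
        rng.normal(0.0, 1.0, n),
        rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n),
        rng.laplace(0.0, 1.0 / np.sqrt(2.0), n),
    ])
    choice = rng.integers(0, 3, n)      # which d.f. governs epsilon_k
    return draws[np.arange(n), choice]

def lse(X, y):
    # Least squares estimator (X'X)^{-1} X'y.
    return np.linalg.solve(X.T @ X, X.T @ y)

beta = np.array([2.0, -1.0])            # true parameters beta_1, beta_2
for n in (100, 1000, 10000):
    # Given constants x_{km}: an intercept and a bounded trend term.
    X = np.column_stack([np.ones(n), np.arange(1, n + 1) / n])
    y = X @ beta + draw_errors(n)
    print(n, lse(X, y))                 # estimates approach beta
\end{verbatim}

Here the smallest eigenvalue of $X_n'X_n$ grows linearly in $n$, so the LSE is consistent whichever member of the illustrative set governs each error term.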