It has been suggested ([5], [6]) to use the expected value, $E(l)$, of the length, $l$, of a confidence interval for the variable to be predicted as a measure of the precision of prediction in a regression analysis. The measure is relevant only if the predictor variables are random. Criticism of this particular choice of a measure of precision usually centres on the following questions:

a. Why is the measure based on this and no other system of confidence intervals, and what is known about the optimality of this system? (The system referred to will be described in Section 2.)

b. Why is it based on the physical length of the intervals and not on Neyman's "shortness"?

c. If it is agreed that it is to be based on $l$, what justifies the choice of $E(l)$?

In what follows a few points are raised which are, of course, not sufficient to prove that $E(l)$ is the only possible choice for a measure of precision, but which indicate that the intuitive choice is not altogether unreasonable. Not much can be said about a. It turns out, in Section 3, that the confidence limits discussed here are unbiased, but nothing about their optimality in any sense is known to the author. To throw some light on b, Neyman's shortness of the system of confidence intervals used is calculated in Section 3. A parameter enters the problem which makes it impossible to use Neyman's shortness as an overall measure of precision. With regard to c, one may argue that $l$ is a random variable, and if a single measure of precision is needed, a single characteristic of its distribution must be used. Obvious possibilities are the mean and the median. The distribution of $l$ is obtained in Section 4, and it becomes apparent that the use of the median would involve heavy numerical calculations.
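To make the quantities concrete, consider the familiar normal-theory setting of simple linear regression with a random predictor. There the usual prediction interval for a new observation at a random point $x_0$ has length $l = 2\,t_{1-\alpha/2,\,n-2}\,s\,\sqrt{1 + 1/n + (x_0-\bar x)^2/\sum_i (x_i-\bar x)^2}$, so that $l$ is random through $s$, the observed $x_i$, and $x_0$. The sketch below is a minimal Monte Carlo illustration of the two candidate characteristics, $E(l)$ and the median of $l$, under assumed normal distributions for the predictor and the errors; it uses this standard interval for definiteness and is not presented as the system of intervals described in Section 2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def interval_length(n=20, alpha=0.05, beta0=1.0, beta1=2.0, sigma=1.0):
    """Length of the usual normal-theory prediction interval for a new
    observation at a random point x0, with random predictor values x_i.
    All distributional choices here are assumptions made for the example."""
    x = rng.normal(size=n)                      # random predictors (assumed N(0,1))
    y = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)
    xbar = x.mean()
    sxx = ((x - xbar) ** 2).sum()
    b1 = ((x - xbar) * (y - y.mean())).sum() / sxx
    b0 = y.mean() - b1 * xbar
    resid = y - (b0 + b1 * x)
    s = np.sqrt((resid ** 2).sum() / (n - 2))   # residual standard error
    x0 = rng.normal()                           # random point at which to predict
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return 2 * t * s * np.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)

lengths = np.array([interval_length() for _ in range(20_000)])
print("estimated E(l):       ", lengths.mean())
print("estimated median of l:", np.median(lengths))
```

By simulation the two characteristics are, of course, equally easy to approximate; the remark about heavy numerical calculations concerns the exact median of the distribution of $l$ derived in Section 4, not a Monte Carlo estimate of it.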