Confidence sets for a linear function $\mu + \lambda\sigma^2$ of the mean $\mu$ and variance $\sigma^2$ of a normal distribution are defined by inverting the uniformly most powerful unbiased level-$\alpha$ tests of the hypotheses $H_0(\lambda, m): \mu + \lambda\sigma^2 = m$ against the two-sided alternatives $H_1(\lambda, m): \mu + \lambda\sigma^2 \neq m$, $-\infty < m < \infty$, for fixed $\alpha$ and $\lambda$. These confidence sets are shown to be intervals whenever the number of degrees of freedom available for estimating $\sigma^2$ is at least $2$.
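The construction may be summarized by the standard test-inversion identity; the sample statistics on which the set depends and the symbol $C_{\alpha,\lambda}$ are notation introduced here for illustration, not fixed by the abstract:
$$
C_{\alpha,\lambda} \;=\; \bigl\{\, m \in \mathbb{R} : \text{the UMPU level-}\alpha \text{ test of } H_0(\lambda, m) \text{ does not reject} \,\bigr\},
$$
so that $P_{\mu,\sigma^2}\{\mu + \lambda\sigma^2 \in C_{\alpha,\lambda}\} \geq 1 - \alpha$ for all $(\mu, \sigma^2)$. The result stated above is that $C_{\alpha,\lambda}$ is an interval whenever $\sigma^2$ is estimated with at least two degrees of freedom.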