This paper is concerned with Gaussian regression with random design, where the observations are independent and identically distributed. It is known from work by Le Cam that the rate of convergence of optimal estimators is closely connected to the metric structure of the parameter space with respect to the Hellinger distance. In particular, this metric structure essentially determines the risk when the loss function is a power of the Hellinger distance. For random design regression, one typically uses as loss function the squared L2-distance between the estimator and the parameter. If the parameter space is bounded with respect to the L∞-norm, the two distances are equivalent. Without this assumption, there may be a large distortion between them, resulting in unusual rates of convergence for the squared L2-risk, as noticed by Baraud. We explain this phenomenon and then show that using the Hellinger distance instead of the L2-distance allows us to recover the usual rates and to carry out model selection in great generality. An extension to the L2-risk is given under a boundedness assumption similar to the one used by Wegkamp and by Yang.
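
To make the connection between the two distances concrete, here is a standard computation for this model, written as a sketch with illustrative notation (f, g for regression functions, σ for the noise level, μ for the design distribution; none of this notation is fixed by the abstract itself). With observations (X_i, Y_i), Y_i = f(X_i) + ε_i, ε_i ~ N(0, σ²) and X_i drawn from μ, the squared Hellinger distance between the joint distributions indexed by f and g is
\[
  h^2(P_f, P_g)
  = \int \left[\, 1 - \exp\!\left( -\frac{\bigl(f(x) - g(x)\bigr)^2}{8\sigma^2} \right) \right] \mathrm{d}\mu(x)
  \;\le\; \frac{\|f - g\|_{L^2(\mu)}^2}{8\sigma^2},
\]
using 1 − e^{−u} ≤ u. If moreover ‖f − g‖_∞ ≤ L, the concavity of u ↦ 1 − e^{−u} on [0, L²/(8σ²)] gives the reverse bound
\[
  h^2(P_f, P_g) \;\ge\; \frac{1 - e^{-L^2/(8\sigma^2)}}{L^2}\, \|f - g\|_{L^2(\mu)}^2 .
\]
Thus the Hellinger and L2 distances are equivalent (up to constants depending on L and σ) on L∞-bounded parameter sets, while only the upper bound survives without the boundedness assumption, which is the source of the distortion and of the unusual L2-risk rates mentioned above.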