Asymptotic Optimality of the Fast Randomized Versions of GCV and $C_L$ in Ridge Regression and Regularization
Girard, Didier A.
Ann. Statist., Volume 19 (1991) no. 1, pp. 1950-1963 / Harvested from Project Euclid
Ridge regression is a well-known technique for estimating the coefficients of a linear model. The method of regularization is a similar approach commonly used to solve underdetermined linear equations with discrete noisy data. When applying such a technique, the choice of the smoothing (or regularization) parameter $h$ is crucial. Generalized cross-validation (GCV) and Mallows' $C_L$ are two popular methods for estimating a good value of $h$ from the data. Their asymptotic properties, such as consistency and asymptotic optimality, have been studied extensively [Craven and Wahba (1979); Golub, Heath and Wahba (1979); Speckman (1985)]. Very interesting convergence results for the actual (random) parameter given by GCV and $C_L$ have been shown by Li (1985, 1986). Recently, Girard (1987, 1989) proposed fast randomized versions of GCV and $C_L$. The purpose of this paper is to show that the above convergence results also hold for these new methods.
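To make the setting concrete, the following is a minimal sketch (not the paper's own code) of ordinary GCV for ridge regression, together with a randomized variant in the spirit of Girard's proposal, in which the trace term $\mathrm{tr}(I - A(h))$ of the influence matrix is replaced by the Monte Carlo estimate $w^T(I - A(h))w$ for a standard Gaussian vector $w$. All variable names and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ridge-regression data (illustrative, not from the paper)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + 0.5 * rng.standard_normal(n)

def influence(h):
    # Influence ("hat") matrix A(h) = X (X'X + n h I)^{-1} X'
    return X @ np.linalg.solve(X.T @ X + n * h * np.eye(p), X.T)

def gcv(h):
    # GCV(h) = (1/n)||(I - A)y||^2 / ((1/n) tr(I - A))^2
    A = influence(h)
    r = y - A @ y
    return (r @ r / n) / (np.trace(np.eye(n) - A) / n) ** 2

def randomized_gcv(h, w):
    # Fast randomized GCV: replace tr(I - A) by the one-sample
    # Monte Carlo estimate w'(I - A)w, avoiding the full trace
    A = influence(h)
    r = y - A @ y
    t = w @ (w - A @ w)
    return (r @ r / n) / (t / n) ** 2

# Minimize both criteria over a grid of h values
grid = np.logspace(-4, 1, 30)
w = rng.standard_normal(n)
h_gcv = grid[np.argmin([gcv(h) for h in grid])]
h_rand = grid[np.argmin([randomized_gcv(h, w) for h in grid])]
```

The point of the randomized version is computational: in large problems $w^T(I - A)w$ can be obtained from a single extra linear solve, whereas forming $\mathrm{tr}(A)$ exactly is far more expensive; the paper shows the resulting (random) parameter choice retains the asymptotic optimality of exact GCV and $C_L$.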
Published: 1991-12-14
Classification: GCV, $C_L$, ridge regression, regularization, smoothing splines, Monte Carlo techniques, randomized versions, asymptotic optimality, 62G05, 65U05, 65D10, 65R20, 92A07
@article{1176348380,
     author = {Girard, Didier A.},
     title = {Asymptotic Optimality of the Fast Randomized Versions of GCV and $C_L$ in Ridge Regression and Regularization},
     journal = {Ann. Statist.},
     volume = {19},
     number = {1},
     year = {1991},
     pages = {1950-1963},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1176348380}
}