L2 boosting in kernel regression
Park, B.U. ; Lee, Y.K. ; Ha, S.
Bernoulli, Volume 15 (2009) no. 1, pp. 599–613 / Harvested from Project Euclid
In this paper, we investigate the theoretical and empirical properties of L2 boosting with kernel regression estimates as weak learners. We show that each step of L2 boosting reduces the order of the bias of the estimate by two orders of magnitude in the bandwidth, while leaving the order of the variance unchanged. We illustrate the theoretical findings with simulated examples. We also demonstrate that L2 boosting is superior to the use of higher-order kernels, a well-known method of reducing the bias of kernel estimates.
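The boosting scheme the abstract describes can be sketched in a few lines: fit a weak learner (here a Nadaraya–Watson smoother), then repeatedly smooth the residuals and add the correction back; a single step is the classical "twicing" estimator mentioned in the keywords. This is a minimal illustrative sketch, not the paper's exact setup: the Gaussian kernel, the bandwidth, and the function names are assumptions made for the example.

```python
import numpy as np

def nw_smoother(x_eval, x_train, y, h):
    # Nadaraya-Watson kernel regression estimate at the points x_eval,
    # using a Gaussian kernel with bandwidth h (an illustrative choice;
    # the paper's results are stated for general kernels).
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def l2_boost(x_train, y, h, n_steps):
    # L2 boosting with the smoother as weak learner: start from the
    # plain kernel fit, then add back a smoothed version of the
    # residuals at each step.  n_steps = 1 gives the "twicing" estimator.
    fit = nw_smoother(x_train, x_train, y, h)
    for _ in range(n_steps):
        residual = y - fit
        fit = fit + nw_smoother(x_train, x_train, residual, h)
    return fit
```

On smooth noiseless data the boosted fit tracks the target more closely than the plain kernel fit, reflecting the bias reduction: the Nadaraya–Watson smoothing matrix has eigenvalues in (0, 1], so each residual-correction step shrinks the remaining systematic error.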
Published: 2009-08-15
Classification:  bias reduction,  boosting,  kernel regression,  Nadaraya–Watson smoother,  twicing
@article{1251463273,
     author = {Park, B.U. and Lee, Y.K. and Ha, S.},
     title = {L<sub>2</sub> boosting in kernel regression},
     journal = {Bernoulli},
     volume = {15},
     number = {1},
     year = {2009},
     pages = {599--613},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1251463273}
}
Park, B.U.; Lee, Y.K.; Ha, S. L2 boosting in kernel regression. Bernoulli, Volume 15 (2009) no. 1, pp. 599–613. http://gdmltest.u-ga.fr/item/1251463273/