Fast learning rates in statistical inference through aggregation
Audibert, Jean-Yves
Ann. Statist., Vol. 37 (2009) no. 1, pp. 1591-1646 / Harvested from Project Euclid
We develop minimax optimal risk bounds for the general learning task consisting in predicting as well as the best function in a reference set $\mathcal{G}$ up to the smallest possible additive term, called the convergence rate. When the reference set is finite and when $n$ denotes the size of the training data, we provide minimax convergence rates of the form $C(\frac{\log|\mathcal{G}|}{n})^{v}$ with tight evaluation of the positive constant $C$ and with exact exponent $0 < v \le 1$. These rates apply in particular to the $L_q$-regression setting, for which an exhaustive analysis of the convergence rates is given as $q$ ranges over $[1, +\infty)$.
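In display form, the bound described in the abstract can be sketched as follows; this is a schematic restatement, where the notation $\hat{g}_n$ for an estimator built from the $n$ training examples and $R$ for the risk under the chosen loss are assumptions, not taken from the record:

$$
\mathbb{E}\, R(\hat{g}_n) \;-\; \min_{g \in \mathcal{G}} R(g) \;\le\; C \left( \frac{\log |\mathcal{G}|}{n} \right)^{v}, \qquad 0 < v \le 1.
$$

The left-hand side is the excess risk relative to the best function in the reference set $\mathcal{G}$; the right-hand side is the convergence rate whose constant $C$ and exponent $v$ the paper evaluates tightly.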
Published: 2009-08-15
Classification: Statistical learning, fast rates of convergence, aggregation, L_q-regression, lower bounds in VC-classes, excess risk, convex loss, minimax lower bounds; MSC: 62G08, 62H05, 68T10
@article{1245332827,
     author = {Audibert, Jean-Yves},
     title = {Fast learning rates in statistical inference through aggregation},
     journal = {Ann. Statist.},
     volume = {37},
     number = {1},
     year = {2009},
     pages = {1591--1646},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1245332827}
}
Audibert, Jean-Yves. Fast learning rates in statistical inference through aggregation. Ann. Statist., Vol. 37 (2009) no. 1, pp. 1591-1646. http://gdmltest.u-ga.fr/item/1245332827/