Learning by mirror averaging
Juditsky, A.; Rigollet, P.; Tsybakov, A. B.
Ann. Statist., Vol. 36 (2008) no. 5, pp. 2183-2206 / Harvested from Project Euclid
Given a finite collection of estimators or classifiers, we study the problem of model selection type aggregation, that is, we construct a new estimator or classifier, called aggregate, which is nearly as good as the best among them with respect to a given risk criterion. We define our aggregate by a simple recursive procedure which solves an auxiliary stochastic linear programming problem related to the original nonlinear one and constitutes a special case of the mirror averaging algorithm. We show that the aggregate satisfies sharp oracle inequalities under some general assumptions. The results are applied to several problems including regression, classification and density estimation.
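The recursive procedure behind the aggregate admits a compact description: at each step the cumulative losses of the M candidate estimators are turned into exponential weights, and the final aggregate uses the average of these weight vectors over the steps rather than the last one. The Python/NumPy sketch below is only an illustration of this averaged exponential-weighting scheme under an assumed squared loss and a free temperature parameter beta; the function name mirror_averaging_weights and the toy data are hypothetical and not taken from the paper, where the choice of the temperature is tied to the loss and moment assumptions.

import numpy as np

def mirror_averaging_weights(losses, beta):
    """Convex aggregation weights via averaged exponential weighting (sketch).

    losses : (n, M) array, losses[t, j] = loss of estimator j on observation t
    beta   : temperature parameter > 0 (treated here as a free parameter)
    Returns M convex weights: the average over t = 0..n of the exponential
    weight vectors built from the cumulative losses of the first t observations.
    """
    n, M = losses.shape
    cum = np.vstack([np.zeros(M), np.cumsum(losses, axis=0)])  # (n+1, M) cumulative losses
    logits = -cum / beta
    logits -= logits.max(axis=1, keepdims=True)   # numerical stabilization
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)             # exponential weights at each step
    return w.mean(axis=0)                         # averaging step of the procedure

# Toy usage: aggregate M candidate regression predictions under squared loss.
rng = np.random.default_rng(0)
n, M = 200, 5
y = rng.normal(size=n)                    # observations
preds = rng.normal(size=(n, M))           # predictions of the M candidate estimators
losses = (preds - y[:, None]) ** 2        # squared loss of each candidate
theta = mirror_averaging_weights(losses, beta=4.0)
print(theta, theta.sum())                 # convex weights summing to 1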
Published: 2008-10-15
Keywords: learning, aggregation, oracle inequalities, mirror averaging, model selection, stochastic optimization. MSC: 62G08, 62C20, 62G05, 62G20
@article{1223908089,
     author = {Juditsky, A. and Rigollet, P. and Tsybakov, A. B.},
     title = {Learning by mirror averaging},
     journal = {Ann. Statist.},
     volume = {36},
     number = {5},
     year = {2008},
     pages = {2183-2206},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1223908089}
}
Juditsky, A.; Rigollet, P.; Tsybakov, A. B. Learning by mirror averaging. Ann. Statist., Vol. 36 (2008) no. 5, pp. 2183-2206. http://gdmltest.u-ga.fr/item/1223908089/