Sharper lower bounds on the performance of the empirical risk minimization algorithm
Lecué, Guillaume ; Mendelson, Shahar
Bernoulli, Volume 16 (2010) no. 1, pp. 605-613 / Harvested from Project Euclid
We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under some geometric assumptions relating the target function T and the learning class F, the excess risk of the empirical risk minimization algorithm is lower bounded by \[\frac{\mathbb{E}\sup_{q\in Q}G_{q}}{\sqrt{n}}\,\delta,\] where (G_q)_{q∈Q} is a canonical Gaussian process associated with Q (a well-chosen subset of F) and δ is a parameter governing the oscillations of the empirical excess risk function over a small ball in F.
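For concreteness, the claim can be read as the displayed inequality below. This is a hedged restatement rather than the paper's exact theorem: the notation \hat{f}_n for the empirical risk minimizer over F, R for the risk functional, and the absolute constant c > 0 are assumptions made here for readability; the precise constant and probability estimate are given in the article.

% Hedged restatement of the abstract's lower bound; \hat{f}_n, R and c
% are notation assumed here, not quoted from the paper.
\[
  R(\hat{f}_{n}) - \inf_{f\in F} R(f)
  \;\geq\; c\,\frac{\mathbb{E}\sup_{q\in Q} G_{q}}{\sqrt{n}}\,\delta .
\]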
Published: 2010-08-15
Classification: empirical risk minimization, learning theory, lower bound, multidimensional central limit theorem, uniform central limit theorem
@article{1281099877,
     author = {Lecu\'e, Guillaume and Mendelson, Shahar},
     title = {Sharper lower bounds on the performance of the empirical risk minimization algorithm},
     journal = {Bernoulli},
     volume = {16},
     number = {1},
     year = {2010},
     pages = {605--613},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1281099877}
}
Lecué, Guillaume; Mendelson, Shahar. Sharper lower bounds on the performance of the empirical risk minimization algorithm. Bernoulli, Volume 16 (2010) no. 1, pp. 605-613. http://gdmltest.u-ga.fr/item/1281099877/