Rates of convergence in active learning
Hanneke, Steve
Ann. Statist., Tome 39 (2011) no. 1, p. 333-361 / Harvested from Project Euclid
We study the rates of convergence in generalization error achievable by active learning under various types of label noise. Additionally, we study the general problem of model selection for active learning with a nested hierarchy of hypothesis classes and propose an algorithm whose error rate provably converges to the best achievable error among classifiers in the hierarchy at a rate adaptive to both the complexity of the optimal classifier and the noise conditions. In particular, we state sufficient conditions for these rates to be dramatically faster than those achievable by passive learning.
Published: 2011-02-15
Keywords: Active learning, sequential design, selective sampling, statistical learning theory, oracle inequalities, model selection, classification
MSC Classification: 62L05, 68Q32, 62H30, 68T05, 68T10, 68Q10, 68Q25, 68W40, 62G99
@article{1291388378,
     author = {Hanneke, Steve},
     title = {Rates of convergence in active learning},
     journal = {Ann. Statist.},
     volume = {39},
     number = {1},
     year = {2011},
     pages = {333--361},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1291388378}
}