Recent work has shown that combining multiple versions of unstable
classifiers such as trees or neural nets results in reduced test set error. One
of the more effective techniques is bagging. Here, modified training sets are
formed by resampling from the original training set, classifiers are
constructed using these training sets, and the classifiers are then combined by
voting.
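
As a concrete illustration, here is a minimal sketch of that resampling-and-voting
scheme, assuming scikit-learn's DecisionTreeClassifier as the unstable base
classifier and class labels coded 0..K-1; the function name and parameters are
illustrative, not Breiman's.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagging_predict(X_train, y_train, X_test, n_classifiers=50, seed=0):
        # Bagging: fit each tree on a bootstrap resample of the training set,
        # then combine the trees by unweighted plurality vote.
        rng = np.random.default_rng(seed)
        n, k = len(y_train), len(np.unique(y_train))
        votes = np.zeros((len(X_test), k))
        for _ in range(n_classifiers):
            idx = rng.integers(0, n, size=n)  # bootstrap: n cases drawn with replacement
            tree = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
            votes[np.arange(len(X_test)), tree.predict(X_test)] += 1  # one vote per tree
        return votes.argmax(axis=1)  # class receiving the most votes

Each modified training set is a bootstrap resample, so the individual trees
differ; the vote averages away much of that instability.
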
Freund and Schapire propose an
algorithm whose basis is to adaptively resample and combine (hence the
acronym “arcing”): the weights in the resampling are increased for those cases
most often misclassified, and the combining is done by weighted voting.
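
The following compressed sketch is in the spirit of that procedure (arc-fs in
the paper's terminology), with binary labels coded -1/+1 and a small
scikit-learn tree as the base classifier; the names, the tree depth, and the
restart rule for weighted error above 1/2 are illustrative assumptions, not the
paper's exact presentation.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def arc_fs_predict(X_train, y_train, X_test, n_rounds=50, seed=0):
        # Adaptive resampling and combining: multiply the weight of each
        # misclassified case by beta = (1 - eps) / eps, renormalize, and
        # combine the classifiers by a vote weighted by log(beta).
        rng = np.random.default_rng(seed)
        n = len(y_train)
        p = np.full(n, 1.0 / n)               # resampling probabilities
        score = np.zeros(len(X_test))         # weighted-vote accumulator
        for _ in range(n_rounds):
            idx = rng.choice(n, size=n, p=p)  # resample by current weights
            tree = DecisionTreeClassifier(max_depth=3).fit(X_train[idx], y_train[idx])
            miss = tree.predict(X_train) != y_train
            eps = p[miss].sum()               # weighted training-set error
            if eps <= 0.0 or eps >= 0.5:      # degenerate round: reset to uniform
                p = np.full(n, 1.0 / n)
                continue
            beta = (1.0 - eps) / eps
            p[miss] *= beta                   # upweight the misclassified cases
            p /= p.sum()
            score += np.log(beta) * tree.predict(X_test)
        return np.sign(score)                 # sign of the weighted vote

Cases the ensemble keeps getting wrong accumulate weight and are resampled more
often, which is exactly the adaptive behavior the acronym describes.
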
Arcing is more successful than bagging in test set error
reduction. We explore two arcing algorithms, compare them to each other and to
bagging, and try to understand how arcing works. We introduce the definitions
of bias and variance for a classifier as components of the test set error.
Unstable classifiers can have low bias on a large range of data sets. Their
problem is high variance. Combining multiple versions either through bagging or
arcing reduces variance significantly.
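
For orientation, the decomposition can be set up along these lines (a hedged
LaTeX sketch of the aggregated-classifier construction; the paper's own
definitions are authoritative). Writing $C^*$ for the Bayes classifier and
$C(x, T)$ for the classifier built from training set $T$, the aggregated
classifier is

    C_A(x) = \arg\max_j \, P_T\bigl(C(x, T) = j\bigr),

and $C$ is called unbiased at $x$ when $C_A(x) = C^*(x)$. Over the unbiased set
$U$ and its complement, the biased set $B$,

    \mathrm{Bias}(C) = P\bigl(C^*(X) = Y,\, X \in B\bigr) - E_T\, P\bigl(C(X, T) = Y,\, X \in B\bigr),
    \mathrm{Var}(C)  = P\bigl(C^*(X) = Y,\, X \in U\bigr) - E_T\, P\bigl(C(X, T) = Y,\, X \in U\bigr),

so that the test set error decomposes as
$PE(C) = PE(C^*) + \mathrm{Bias}(C) + \mathrm{Var}(C)$.
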
@article{1024691079,
author = {Breiman, Leo},
title = {Arcing classifier (with discussion and a rejoinder by the
author)},
journal = {Ann. Statist.},
volume = {26},
number = {3},
year = {1998},
pages = {801--849},
language = {en},
url = {http://dml.mathdoc.fr/item/1024691079}
}
Breiman, Leo. Arcing classifier (with discussion and a rejoinder by the
author). Ann. Statist., Vol. 26 (1998), no. 3, pp. 801-849. http://gdmltest.u-ga.fr/item/1024691079/