Boosting is one of the most important recent developments in
classification methodology. Boosting works by sequentially applying a
classification algorithm to reweighted versions of the training data and then
taking a weighted majority vote of the sequence of classifiers thus produced.
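To make this recipe concrete, the following is a minimal sketch, in Python, of the
reweight-and-vote scheme in the style of discrete AdaBoost; the two-class -1/+1
coding, the number of rounds, and the use of scikit-learn depth-one trees (stumps)
as the base classifier are illustrative assumptions rather than details fixed by the
text.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def boost_fit(X, y, base_learner=lambda: DecisionTreeClassifier(max_depth=1),
                  rounds=50):
        # Sequentially fit base classifiers to reweighted training data
        # (discrete-AdaBoost style); y is assumed to be coded as -1 / +1.
        y = np.asarray(y)
        w = np.full(len(y), 1.0 / len(y))    # start from uniform observation weights
        models, votes = [], []
        for _ in range(rounds):
            clf = base_learner()
            clf.fit(X, y, sample_weight=w)   # fit to the current reweighted data
            pred = clf.predict(X)
            err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)  # weighted error
            alpha = 0.5 * np.log((1 - err) / err)   # this round's vote weight
            w *= np.exp(-alpha * y * pred)          # upweight misclassified points
            w /= w.sum()
            models.append(clf)
            votes.append(alpha)
        return models, votes

    def boost_predict(models, votes, X):
        # Weighted majority vote over the sequence of fitted classifiers.
        return np.sign(sum(a * m.predict(X) for a, m in zip(votes, models)))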
For many classification algorithms, this simple strategy results in dramatic
improvements in performance. We show that this seemingly mysterious phenomenon
can be understood in terms of well-known statistical principles, namely
additive modeling and maximum likelihood. For the two-class problem, boosting
can be viewed as an approximation to additive modeling on the logistic scale
using maximum Bernoulli likelihood as a criterion. We develop more direct
approximations and show that they give results nearly identical to those of boosting.
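In symbols, with notation assumed here purely for illustration, the two-class view
just described amounts to the representation
\[
F(x) = \sum_{m=1}^{M} f_m(x),
\qquad
p(x) = \Pr(y = 1 \mid x) = \frac{e^{F(x)}}{e^{F(x)} + e^{-F(x)}},
\]
with \(F\) chosen to maximize the expected Bernoulli log-likelihood
\(E[\,y^{*}\log p(x) + (1 - y^{*})\log\{1 - p(x)\}\,]\), where \(y^{*} = (y+1)/2\).
Boosting instead minimizes the exponential criterion \(E[e^{-yF(x)}]\); both
criteria share the population solution \(F(x) = \tfrac{1}{2}\log\{p(x)/(1-p(x))\}\),
which is the sense in which boosting approximates the likelihood-based additive fit.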
Direct multiclass generalizations based on multinomial likelihood are derived
that exhibit performance comparable to other recently proposed multiclass
generalizations of boosting in most situations, and far superior in some.
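The corresponding multiclass representation, again with notation assumed for
illustration, couples one additive function per class through a symmetric
multiple-logistic transform,
\[
p_j(x) = \frac{e^{F_j(x)}}{\sum_{k=1}^{J} e^{F_k(x)}},
\qquad
\sum_{k=1}^{J} F_k(x) = 0,
\qquad j = 1, \ldots, J,
\]
with the \(F_j\) fit by maximizing the expected multinomial log-likelihood.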
We suggest a minor modification to boosting that can reduce computation, often by
factors of 10 to 50. Finally, we apply these insights to produce an alternative
formulation of boosting decision trees. This approach, based on best-first
truncated tree induction, often leads to better performance, and can provide
interpretable descriptions of the aggregate decision rule. It is also much
faster computationally, making it more suitable for large-scale data mining
applications.
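As a small illustration of the base-learner change only (not of the full
reformulation), a truncated best-first tree can be dropped into the earlier boosting
sketch in place of the stump; the leaf limit of 8 and the use of scikit-learn, whose
trees grow best-first once max_leaf_nodes is set, are assumptions of this example.

    from sklearn.tree import DecisionTreeClassifier

    # Truncated best-first tree: splits are added in order of impurity reduction
    # and growth stops at n_leaves terminal nodes, rather than growing a large
    # tree and pruning it back.
    n_leaves = 8   # illustrative truncation point
    models, votes = boost_fit(            # X, y: training data as in the sketch above
        X, y,
        base_learner=lambda: DecisionTreeClassifier(max_leaf_nodes=n_leaves))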