Boosting the margin: A new explanation for the effectiveness of voting methods

Robert E. Schapire, Yoav Freund, Peter Bartlett, Wee Sun Lee

Research output: Contribution to journal › Article › peer-review

1723 Scopus citations

Abstract

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
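The margin definition in the abstract is easy to make concrete. Below is a minimal sketch (not the authors' code) that computes per-example margins for a weighted voting classifier: the total vote weight on the correct label minus the largest vote weight on any incorrect label, with weights normalized to sum to one so margins fall in [-1, 1]. The function name `vote_margins` and the toy data are illustrative assumptions.

```python
import numpy as np

def vote_margins(votes, weights, y_true, n_classes):
    """Per-example margins of a weighted voting classifier.

    votes:   (n_hypotheses, n_examples) array of predicted labels
    weights: (n_hypotheses,) nonnegative voting weights
    y_true:  (n_examples,) true labels
    Returns margins in [-1, 1]; positive iff the vote is correct.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the total vote mass is 1
    n_examples = votes.shape[1]
    idx = np.arange(n_examples)

    # tally[k, i] = total weight voting for class k on example i
    tally = np.zeros((n_classes, n_examples))
    for h_votes, wh in zip(votes, w):
        tally[h_votes, idx] += wh

    correct = tally[y_true, idx].copy()   # weight on the true label
    tally[y_true, idx] = -np.inf          # mask the true label
    best_wrong = tally.max(axis=0)        # strongest incorrect label
    return correct - best_wrong

# Toy usage: three hypotheses voting on four binary-labeled examples.
votes = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [1, 1, 1, 0]])
weights = [0.5, 0.3, 0.2]
y_true = np.array([0, 1, 1, 0])
print(vote_margins(votes, weights, y_true, n_classes=2))
# -> [0.6, 1.0, 0.4, 1.0]
```

A positive margin means the weighted vote classifies the example correctly; the paper's bound relates test error to how much of this margin distribution sits above a threshold.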

Original language: English (US)
Pages (from-to): 1651-1686
Number of pages: 36
Journal: Annals of Statistics
Volume: 26
Issue number: 5
State: Published - Oct 1998
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

Keywords

  • Bagging
  • Boosting
  • Decision trees
  • Ensemble methods
  • Error-correcting output coding
  • Markov chain Monte Carlo
  • Neural networks
