The rate of convergence of AdaBoost

Indraneel Mukherjee, Cynthia Rudin, Robert E. Schapire

Research output: Contribution to journal › Conference article › peer-review


Abstract

The AdaBoost algorithm of Freund and Schapire (1997) was designed to combine many "weak" hypotheses that perform slightly better than a random guess into a "strong" hypothesis that has very low error. We study the rate at which AdaBoost iteratively converges to the minimum of the "exponential loss." Our proofs do not require a weak-learning assumption, nor do they require that minimizers of the exponential loss are finite. Specifically, our first result shows that the exponential loss of AdaBoost's computed parameter vector will be at most ε more than that of any parameter vector of ℓ1-norm bounded by B in a number of rounds that is bounded by a polynomial in B and 1/ε. We also provide lower-bound examples showing that a polynomial dependence on these parameters is necessary. Our second result is that within C/ε iterations, AdaBoost achieves a value of the exponential loss that is at most ε more than the best possible value, where C depends on the dataset. We show that this dependence of the rate on ε is optimal up to constant factors, i.e., at least Ω(1/ε) rounds are necessary to achieve within ε of the optimal exponential loss.
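To make the coordinate-descent view of AdaBoost analyzed in the paper concrete, here is a minimal Python sketch (not the authors' code): it runs AdaBoost with one-dimensional threshold stumps as weak hypotheses and records the exponential loss (1/m) Σᵢ exp(−yᵢ F(xᵢ)) after each round, so the convergence behavior can be observed. The function name, stump class, and synthetic data below are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of AdaBoost as coordinate descent on the exponential loss,
# using threshold "stumps" over each feature as the weak-hypothesis class.
import numpy as np

def adaboost_exp_loss(X, y, n_rounds=100):
    """Run AdaBoost with threshold stumps; return the exponential loss
    (1/m) * sum_i exp(-y_i * F(x_i)) recorded after each round."""
    m, d = X.shape
    F = np.zeros(m)                 # combined margin F(x_i) so far
    losses = []
    for _ in range(n_rounds):
        w = np.exp(-y * F)
        w /= w.sum()                # AdaBoost's distribution over examples
        # Exhaustively pick the stump (feature, threshold, sign)
        # with the lowest weighted error on the current distribution.
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for s in (+1.0, -1.0):
                    h = s * np.where(X[:, j] <= thr, 1.0, -1.0)
                    err = w[h != y].sum()
                    if best is None or err < best[0]:
                        best = (err, h)
        err, h = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        # Exact line search along this coordinate for the exponential loss.
        alpha = 0.5 * np.log((1 - err) / err)
        F += alpha * h
        losses.append(np.mean(np.exp(-y * F)))
    return losses

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)
    losses = adaboost_exp_loss(X, y, n_rounds=50)
    print(losses[::10])   # the exponential loss decreases over rounds
```

Note that no weak-learning assumption is imposed here: the stump search simply returns the best available weak hypothesis each round, matching the setting in which the paper's rate bounds apply.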

Original language: English (US)
Pages (from-to): 537-557
Number of pages: 21
Journal: Journal of Machine Learning Research
Volume: 19
State: Published - 2011
Externally published: Yes
Event: 24th International Conference on Learning Theory, COLT 2011 - Budapest, Hungary
Duration: Jul 9, 2011 to Jul 11, 2011

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability

Keywords

  • AdaBoost
  • Convergence rate
  • Coordinate descent
  • Optimization
