Convergence and consistency of recursive boosting

Aurélie C. Lozano, Sanjeev R. Kulkarni

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We study the convergence and consistency of Boosting algorithms for classification. The standard method, as the sample size increases, say from m to m + 1, is to re-initialize the Boosting algorithm with an arbitrary prediction rule. In contrast to this "batch" approach, we propose a boosting procedure that is recursive in the sense that, for sample size m + 1, the algorithm is restarted with the composite classifier obtained for sample size m at a specific point, the linking point. We adopt the regularization technique of early stopping, which consists of stopping the procedure based on the 1-norm of the composite classifier. We prove that such recursive boosting methods achieve consistency provided certain criteria on the stopping and linking points are met. We show that these conditions can be satisfied for widely used loss functions.
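The abstract's core ideas, early stopping governed by the 1-norm of the composite classifier and warm-starting from a truncated prefix of that classifier (the linking point), can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a hypothetical AdaBoost-style procedure with decision-stump base learners, where all function names (`boost`, `linking_point`, `stump_predict`) and the exponential-loss reweighting are illustrative assumptions.

```python
import math

def stump_predict(stump, x):
    """Decision stump: +/-sign depending on one feature vs. a threshold."""
    feat, thresh, sign = stump
    return sign if x[feat] > thresh else -sign

def best_stump(X, y, w):
    """Exhaustively pick the stump minimizing weighted 0-1 error."""
    best, best_err = None, float("inf")
    for f in range(len(X[0])):
        for t in sorted({xi[f] for xi in X}):
            for s in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (s if xi[f] > t else -s) != yi)
                if err < best_err:
                    best, best_err = (f, t, s), err
    return best, best_err

def boost(X, y, composite, l1_budget):
    """AdaBoost-style boosting, warm-started from `composite` (a list of
    (alpha, stump) pairs) and stopped early once the 1-norm of the
    coefficients reaches `l1_budget` -- the early-stopping rule."""
    composite = list(composite)
    while sum(abs(a) for a, _ in composite) < l1_budget:
        # Exponential-loss weights induced by the current composite classifier.
        margins = [yi * sum(a * stump_predict(h, xi) for a, h in composite)
                   for xi, yi in zip(X, y)]
        w = [math.exp(-m) for m in margins]
        z = sum(w)
        w = [wi / z for wi in w]
        stump, err = best_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)  # clip for numerical safety
        if err >= 0.5:
            break  # no base learner beats chance; stop
        alpha = 0.5 * math.log((1 - err) / err)
        composite.append((alpha, stump))
    return composite

def linking_point(composite, l1_cap):
    """Truncate the composite classifier to the longest prefix whose
    coefficient 1-norm stays within `l1_cap` (a stand-in for the
    linking point at which the recursive procedure is restarted)."""
    prefix, norm = [], 0.0
    for a, h in composite:
        if norm + abs(a) > l1_cap:
            break
        prefix.append((a, h))
        norm += abs(a)
    return prefix
```

In this sketch the "recursive" step for sample size m + 1 would be `boost(X_new, y_new, linking_point(old_composite, cap), budget)`: instead of re-initializing from an arbitrary rule, boosting resumes from the truncated composite classifier.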

Original language: English (US)
Title of host publication: Proceedings - 2006 IEEE International Symposium on Information Theory, ISIT 2006
Number of pages: 5
State: Published - 2006
Event: 2006 IEEE International Symposium on Information Theory, ISIT 2006 - Seattle, WA, United States
Duration: Jul 9 2006 - Jul 14 2006

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
ISSN (Print): 2157-8101


Other: 2006 IEEE International Symposium on Information Theory, ISIT 2006
Country/Territory: United States
City: Seattle, WA

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Information Systems
  • Modeling and Simulation
  • Applied Mathematics

