Speed and sparsity of regularized boosting

Yongxin Taylor Xi, Zhen James Xiang, Peter J. Ramadge, Robert E. Schapire

Research output: Contribution to journal › Article › peer-review

13 Scopus citations


Boosting algorithms with ℓ1-regularization are of interest because ℓ1-regularization leads to sparser composite classifiers. Moreover, Rosset et al. have shown that for separable data, standard ℓp-regularized loss minimization results in a margin-maximizing classifier in the limit as regularization is relaxed. For the case p = 1, we extend these results by obtaining explicit convergence bounds on the regularization required to yield a margin within prescribed accuracy of the maximum achievable margin. We derive similar rates of convergence for the ε-AdaBoost algorithm, in the process providing a new proof that ε-AdaBoost is margin maximizing as ε converges to 0. Because both of these known algorithms are computationally expensive, we introduce a new hybrid algorithm, AdaBoost+L1, that combines the virtues of AdaBoost with the sparsity of ℓ1-regularization in a computationally efficient fashion. We prove that the algorithm is margin maximizing and empirically examine its performance on five datasets.
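To make the ε-AdaBoost variant referenced above concrete, here is a minimal sketch of the generic algorithm: each round selects the weak learner with the smallest weighted error but adds it with a fixed small coefficient ε rather than the usual adaptive AdaBoost step. This is an illustration only, not the paper's implementation; the decision-stump weak learners, the toy dataset, and the step size are all assumptions made for the demo.

```python
# Sketch of epsilon-AdaBoost on a toy 1-D dataset with decision stumps.
# NOT the paper's code: stump search, data, and eps are illustrative choices.
import numpy as np

def eps_adaboost(X, y, eps=0.05, rounds=200):
    """Each round: pick the stump with lowest weighted error, then add it
    with a fixed small coefficient eps (instead of the adaptive step)."""
    n = len(y)
    w = np.ones(n) / n                  # example weights, initially uniform
    ensemble = []                       # list of (threshold, sign, coeff)
    for _ in range(rounds):
        best, best_err = None, np.inf
        # Exhaustive search over stumps h(x) = s * sign(x - thr)
        for thr in np.unique(X):
            for s in (-1.0, 1.0):
                pred = s * np.sign(X - thr + 1e-12)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (thr, s)
        thr, s = best
        pred = s * np.sign(X - thr + 1e-12)
        ensemble.append((thr, s, eps))
        # Small multiplicative reweighting with the fixed step eps
        w *= np.exp(-eps * y * pred)
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of all selected stumps."""
    f = np.zeros(len(X))
    for thr, s, a in ensemble:
        f += a * s * np.sign(X - thr + 1e-12)
    return np.sign(f)

# Toy separable data: negatives below zero, positives above
X = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
ens = eps_adaboost(X, y)
print((predict(ens, X) == y).all())  # perfect fit on this separable toy set
```

As ε shrinks, the path traced by the composite classifier's coefficients approaches the ℓ1-regularization path, which is the connection the abstract's convergence results make precise.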

Original language: English (US)
Pages (from-to): 615-622
Number of pages: 8
Journal: Journal of Machine Learning Research
State: Published - Dec 1 2009

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Software
  • Statistics and Probability
  • Artificial Intelligence


