Abstract
Boosting algorithms with ℓ1-regularization are of interest because ℓ1 regularization leads to sparser composite classifiers. Moreover, Rosset et al. have shown that for separable data, standard ℓp-regularized loss minimization results in a margin-maximizing classifier in the limit as regularization is relaxed. For the case p = 1, we extend these results by obtaining explicit convergence bounds on the regularization required to yield a margin within a prescribed accuracy of the maximum achievable margin. We derive similar rates of convergence for the ε-AdaBoost algorithm, in the process providing a new proof that ε-AdaBoost is margin maximizing as ε converges to 0. Because both of these known algorithms are computationally expensive, we introduce a new hybrid algorithm, AdaBoost+L1, that combines the virtues of AdaBoost with the sparsity of ℓ1-regularization in a computationally efficient fashion. We prove that the algorithm is margin maximizing and empirically examine its performance on five datasets.
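The ε-AdaBoost algorithm referenced above is the standard small-step variant of AdaBoost: at each round it selects the weak hypothesis with the largest weighted edge and increases its coefficient by a fixed small step ε instead of AdaBoost's line-search step, and as ε → 0 the coefficient path approaches the ℓ1-regularized path. The sketch below is a minimal illustration of that standard variant under these assumptions; the function name, the `weak_learners` interface, and the parameters `eps` and `n_rounds` are chosen for illustration only, and this is not the paper's pseudocode for AdaBoost+L1.

```python
import numpy as np

def epsilon_adaboost(X, y, weak_learners, eps=0.01, n_rounds=1000):
    """Minimal sketch of epsilon-AdaBoost (illustrative, not the paper's algorithm).

    X             : (n_samples, n_features) array
    y             : labels in {-1, +1}, shape (n_samples,)
    weak_learners : list of callables h(X) -> predictions in {-1, +1}
    eps           : fixed small step size added to one coefficient per round
    """
    n = len(y)
    w = np.full(n, 1.0 / n)                          # example weights
    alpha = np.zeros(len(weak_learners))             # coefficient per weak learner
    preds = np.array([h(X) for h in weak_learners])  # cache predictions, shape (m, n)

    for _ in range(n_rounds):
        edges = preds @ (w * y)                      # weighted correlation with labels
        t = int(np.argmax(np.abs(edges)))            # weak learner with largest edge
        step = eps * np.sign(edges[t])               # fixed-size step toward that learner
        alpha[t] += step
        w *= np.exp(-step * y * preds[t])            # exponential-loss weight update
        w /= w.sum()                                 # renormalize

    return alpha  # combined classifier: sign(sum_t alpha[t] * h_t(x))
```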
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 615-622 |
| Number of pages | 8 |
| Journal | Journal of Machine Learning Research |
| Volume | 5 |
| State | Published - 2009 |
| Event | 12th International Conference on Artificial Intelligence and Statistics, AISTATS 2009 - Clearwater, FL, United States |
| Duration | Apr 16 2009 → Apr 18 2009 |
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Software
- Statistics and Probability
- Artificial Intelligence