### Abstract

Boosting algorithms with ℓ₁-regularization are of interest because ℓ₁-regularization leads to sparser composite classifiers. Moreover, Rosset et al. have shown that for separable data, standard ℓₚ-regularized loss minimization results in a margin-maximizing classifier in the limit as regularization is relaxed. For the case p = 1, we extend these results by obtaining explicit convergence bounds on the regularization required to yield a margin within prescribed accuracy of the maximum achievable margin. We derive similar rates of convergence for the ε-AdaBoost algorithm, in the process providing a new proof that ε-AdaBoost is margin maximizing as ε converges to 0. Because both of these known algorithms are computationally expensive, we introduce a new hybrid algorithm, AdaBoost+L₁, that combines the virtues of AdaBoost with the sparsity of ℓ₁-regularization in a computationally efficient fashion. We prove that the algorithm is margin maximizing and empirically examine its performance on five datasets.
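The abstract refers to ε-AdaBoost, the small-step variant of AdaBoost in which each round adds the best weak learner with a fixed small coefficient ε rather than the full line-search step. The paper itself gives no pseudocode here; the following is a minimal illustrative sketch of that standard formulation, with hypothetical names (`eps_adaboost`, `stumps`) and a toy 1-D dataset chosen for illustration only.

```python
import numpy as np

def eps_adaboost(X, y, stumps, n_rounds=200, eps=0.05):
    """Sketch of epsilon-AdaBoost: each round, pick the weak learner with
    the largest (absolute) edge under the current exponential-loss weights
    and increment its coefficient by a fixed small step eps, instead of
    AdaBoost's full line-search step.

    `stumps` is a list of callables h(X) -> predictions in {-1, +1};
    y is a vector of labels in {-1, +1}."""
    preds = np.array([h(X) for h in stumps])   # (num_stumps, n_samples)
    coefs = np.zeros(len(stumps))
    F = np.zeros(len(y))                       # combined score of the ensemble
    for _ in range(n_rounds):
        w = np.exp(-y * F)                     # exponential-loss weights
        w /= w.sum()
        edges = preds @ (w * y)                # weighted edge of each stump
        j = int(np.argmax(np.abs(edges)))
        step = eps * np.sign(edges[j])         # tiny step toward best stump
        coefs[j] += step
        F += step * preds[j]
    return coefs

# Toy 1-D data, separable by a threshold stump at 0.
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([-1, -1, 1, 1])
stumps = [lambda X, t=t: np.where(X > t, 1, -1) for t in (-1.5, 0.0, 1.5)]
coefs = eps_adaboost(X, y, stumps)
F_final = coefs @ np.array([h(X) for h in stumps])
```

As ε → 0 (with correspondingly more rounds), the paper shows this incremental path is margin maximizing; with a fixed small ε the sketch above simply drives the combined score `F_final` toward a correct separation of the toy data.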

Original language | English (US)
---|---
Pages (from-to) | 615-622
Number of pages | 8
Journal | Journal of Machine Learning Research
Volume | 5
State | Published - Dec 1 2009

### All Science Journal Classification (ASJC) codes

- Control and Systems Engineering
- Software
- Statistics and Probability
- Artificial Intelligence


## Cite this

*Journal of Machine Learning Research*, *5*, 615-622.