### Abstract

We extend the theory of boosting for regression problems to the online learning setting. Generalizing from the batch setting, we model a weak learning algorithm as an online learning algorithm for linear loss functions that competes with a base class of regression functions, and a strong learning algorithm as an online learning algorithm for smooth convex loss functions that competes with a larger class of regression functions. Our main result is an online gradient boosting algorithm that converts a weak online learning algorithm into a strong one, where the larger class of functions is the linear span of the base class. We also give a simpler boosting algorithm that converts a weak online learning algorithm into a strong one for which the larger class is the convex hull of the base class, and we prove its optimality.
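The abstract compresses the algorithmic idea considerably. The following is a minimal sketch of the kind of gradient-boosting reduction it describes, assuming a hypothetical weak-learner interface with `predict`/`update` methods, a fixed step size `eta`, and a caller-supplied `grad_fn` giving the derivative of the smooth convex loss; the paper's actual algorithms include scaling and other details omitted here.

```python
# Sketch only: the weak-learner interface, `eta`, and `grad_fn` are
# illustrative assumptions, not the paper's exact formulation.

class OnlineGradientBooster:
    def __init__(self, weak_learners, eta=0.1):
        self.learners = weak_learners  # N copies of the weak online learner
        self.eta = eta                 # step size toward each weak prediction
        self._partials = []            # partial predictions from the last round

    def predict(self, x):
        # Stage i refines the running partial prediction with weak learner i:
        #   y_i = (1 - eta) * y_{i-1} + eta * h_i(x)
        y = 0.0
        self._partials = []
        for learner in self.learners:
            self._partials.append(y)   # remember y_{i-1} for the update step
            y = (1.0 - self.eta) * y + self.eta * learner.predict(x)
        return y

    def update(self, x, grad_fn):
        # Once the convex loss is revealed, weak learner i is fed the
        # *linear* loss h -> grad_fn(y_{i-1}) * h, i.e. the gradient of the
        # smooth convex loss at the partial prediction it was asked to improve.
        for learner, y_prev in zip(self.learners, self._partials):
            learner.update(x, grad_fn(y_prev))
```

In each round one would call `predict(x_t)`, observe the convex loss, and pass its derivative to `update`; each weak learner therefore only ever faces a linear loss, which is exactly the weak-learning model the abstract defines.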

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 2458-2466 |
| Number of pages | 9 |
| Journal | Advances in Neural Information Processing Systems |
| Volume | 2015-January |
| State | Published - Jan 1 2015 |
| Event | 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada; Duration: Dec 7 2015 → Dec 12 2015 |

### All Science Journal Classification (ASJC) codes

- Computer Networks and Communications
- Information Systems
- Signal Processing


## Cite this

Beygelzimer, A., Hazan, E., Kale, S., & Luo, H. (2015). Online gradient boosting. *Advances in Neural Information Processing Systems*, *2015-January*, 2458-2466.