Online gradient boosting

Alina Beygelzimer, Elad E. Hazan, Satyen Kale, Haipeng Luo

Research output: Contribution to journal › Conference article

21 Scopus citations

Abstract

We extend the theory of boosting for regression problems to the online learning setting. Generalizing from the batch setting for boosting, the notion of a weak learning algorithm is modeled as an online learning algorithm with linear loss functions that competes with a base class of regression functions, while a strong learning algorithm is an online learning algorithm with smooth convex loss functions that competes with a larger class of regression functions. Our main result is an online gradient boosting algorithm that converts a weak online learning algorithm into a strong one where the larger class of functions is the linear span of the base class. We also give a simpler boosting algorithm that converts a weak online learning algorithm into a strong one where the larger class of functions is the convex hull of the base class, and prove its optimality.
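To make the weak-to-strong conversion concrete, here is a minimal, hypothetical sketch in Python. It is not the authors' algorithm: it assumes a squared loss, a toy online-gradient-descent linear regressor as the weak learner, and Frank-Wolfe-style step sizes eta_i = 2/(i+1); the class names (OnlineLinearRegressor, OnlineBoostedRegressor) and all parameters are illustrative. What it tries to capture is the abstraction from the abstract: each copy of the weak learner only ever sees a linear loss (a gradient at its partial prediction), while the combined predictor is judged under a smooth convex loss and its predictions lie in the convex hull of the base predictions.

```python
import numpy as np


class OnlineLinearRegressor:
    """Toy 'weak' online learner: online gradient descent on a linear model.

    It competes with the base class of linear regressors and is trained only
    with linear loss functions, i.e. it receives a scalar gradient as feedback.
    """

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, grad):
        # Linear loss l(y_hat) = grad * y_hat, so its gradient w.r.t. w is grad * x.
        self.w -= self.lr * grad * x


class OnlineBoostedRegressor:
    """Hypothetical sketch of boosting toward the convex hull of the base class.

    N copies of the weak learner are combined with step sizes eta_i = 2/(i+1);
    copy i is fed the gradient of the loss at the running partial prediction.
    Illustrative simplification only, not the paper's exact algorithm.
    """

    def __init__(self, dim, n_learners=10, lr=0.1):
        self.learners = [OnlineLinearRegressor(dim, lr) for _ in range(n_learners)]
        self.etas = [2.0 / (i + 2) for i in range(n_learners)]

    def predict_and_update(self, x, target):
        # Cascade of partial predictions using each weak learner's current prediction.
        preds = [learner.predict(x) for learner in self.learners]
        partials = [0.0]
        for eta, p in zip(self.etas, preds):
            partials.append((1 - eta) * partials[-1] + eta * p)
        # Weak learner i receives the linear loss whose slope is the gradient of
        # the squared loss at the partial prediction it was asked to improve.
        for i, learner in enumerate(self.learners):
            grad = partials[i] - target  # d/dy of 0.5 * (y - target)^2
            learner.update(x, grad)
        return partials[-1]


# Usage: stream (x_t, y_t) pairs; the returned prediction uses the learners'
# state from before this round's gradient updates.
rng = np.random.default_rng(0)
model = OnlineBoostedRegressor(dim=5, n_learners=8)
for t in range(1000):
    x = rng.normal(size=5)
    y = float(np.sin(x[0])) + 0.1 * float(rng.normal())
    y_hat = model.predict_and_update(x, y)
```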

Original language: English (US)
Pages (from-to): 2458-2466
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2015-January
State: Published - Jan 1 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: Dec 7 2015 - Dec 12 2015

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing


Cite this

Beygelzimer, A., Hazan, E. E., Kale, S., & Luo, H. (2015). Online gradient boosting. Advances in Neural Information Processing Systems, 2015-January, 2458-2466.