Adaptive online gradient descent

Peter L. Bartlett, Elad Hazan, Alexander Rakhlin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

91 Scopus citations

Abstract

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
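The interpolation described in the abstract can be sketched as a step-size schedule that adapts to the curvature observed so far: with no curvature the step sizes shrink like 1/√t (the linear-function regime), and with accumulated curvature they shrink like one over the running sum of the curvature parameters (the strongly convex regime). The snippet below is only a minimal illustration of that schedule, not a faithful implementation of the paper's Adaptive Online Gradient Descent algorithm; the names adaptive_ogd, grad_fns, and curvatures are hypothetical interfaces introduced for this example.

```python
import numpy as np

def adaptive_ogd(grad_fns, curvatures, x0, radius=1.0):
    """Online gradient descent with a curvature-adaptive step size (sketch).

    grad_fns[t]   -- hypothetical gradient oracle for the round-t loss
    curvatures[t] -- assumed lower bound H_t >= 0 on its strong convexity
    """
    x = np.asarray(x0, dtype=float)
    cum_curvature = 0.0
    iterates = []
    for t, (grad, H) in enumerate(zip(grad_fns, curvatures), start=1):
        iterates.append(x.copy())
        g = grad(x)
        cum_curvature += H
        # With accumulated curvature, use eta_t ~ 1 / (sum of H_s), the
        # schedule behind logarithmic regret for strongly convex losses;
        # otherwise fall back to the O(1/sqrt(t)) schedule used in the
        # analysis of online gradient descent for linear losses.
        eta = 1.0 / cum_curvature if cum_curvature > 0 else radius / np.sqrt(t)
        x = x - eta * g
        # Project back onto the Euclidean ball of the given radius.
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)
    return iterates


# Hypothetical usage: round-t loss f_t(x) = (H_t / 2) * ||x - z_t||^2 + g_t . x.
rng = np.random.default_rng(0)
T, d = 100, 2
targets = rng.uniform(-1, 1, size=(T, d))
lin = rng.uniform(-1, 1, size=(T, d))
Hs = [0.0] * (T // 2) + [1.0] * (T - T // 2)       # flat rounds, then curved rounds
grads = [lambda x, z=z, g=g, H=H: H * (x - z) + g  # gradient of f_t at x
         for z, g, H in zip(targets, lin, Hs)]
points = adaptive_ogd(grads, Hs, x0=np.zeros(d))
```

With all-zero curvatures the schedule reduces to 1/√t step sizes, matching the √T-regret regime for linear functions, while uniformly positive curvature gives roughly 1/(Ht) steps, the regime associated with logarithmic regret for strongly convex functions.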

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 20 - Proceedings of the 2007 Conference
Publisher: Neural Information Processing Systems
ISBN (Print): 160560352X, 9781605603520
State: Published - 2008
Externally published: Yes
Event: 21st Annual Conference on Neural Information Processing Systems, NIPS 2007 - Vancouver, BC, Canada
Duration: Dec 3, 2007 - Dec 6, 2007

Publication series

Name: Advances in Neural Information Processing Systems 20 - Proceedings of the 2007 Conference

Other

Other: 21st Annual Conference on Neural Information Processing Systems, NIPS 2007
Country/Territory: Canada
City: Vancouver, BC
Period: 12/3/07 - 12/6/07

All Science Journal Classification (ASJC) codes

  • Information Systems

