TY - GEN
T1 - Adaptive online gradient descent
AU - Bartlett, Peter L.
AU - Hazan, Elad
AU - Rakhlin, Alexander
PY - 2008
Y1 - 2008
AB - We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
UR - http://www.scopus.com/inward/record.url?scp=85162021730&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85162021730&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85162021730
SN - 160560352X
SN - 9781605603520
T3 - Advances in Neural Information Processing Systems 20 - Proceedings of the 2007 Conference
BT - Advances in Neural Information Processing Systems 20 - Proceedings of the 2007 Conference
PB - Neural Information Processing Systems
T2 - 21st Annual Conference on Neural Information Processing Systems, NIPS 2007
Y2 - 3 December 2007 through 6 December 2007
ER -