Beyond the regret minimization barrier: An optimal algorithm for stochastic strongly-convex optimization

Elad Hazan, Satyen Kale

Research output: Contribution to journal › Conference article › peer-review

72 Scopus citations

Abstract

We give a novel algorithm for stochastic strongly-convex optimization in the gradient oracle model which returns an O(1/T)-approximate solution after T gradient updates. This rate of convergence is optimal in the gradient oracle model, and it improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This lower bound holds even in the full-information setting, which reveals more information to the algorithm than just gradients. Together, these results show that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization, giving the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.
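For intuition, here is a minimal sketch of an epoch-based stochastic gradient scheme of the kind the abstract alludes to: run SGD in epochs of doubling length with a halving step size, restarting each epoch from the average iterate. This is an illustration in the spirit of the result, not the paper's exact pseudocode; the quadratic test objective, the noise model, and all parameter choices below are assumptions made for the demo.

import numpy as np

def noisy_gradient(x, x_star, lam, rng, noise=0.1):
    # Stochastic gradient oracle for the assumed test objective
    # f(x) = (lam/2) * ||x - x_star||^2: true gradient plus Gaussian noise.
    return lam * (x - x_star) + noise * rng.standard_normal(x.shape)

def epoch_sgd(x0, x_star, lam, T, rng):
    # Epoch-based SGD sketch: within each epoch, take fixed-step-size
    # gradient steps; between epochs, restart from the average iterate,
    # double the epoch length, and halve the step size. Uses T gradient
    # calls in total.
    x = x0.copy()
    epoch_len, eta, used = 1, 1.0 / lam, 0
    while used < T:
        iterates = []
        for _ in range(min(epoch_len, T - used)):
            x = x - eta * noisy_gradient(x, x_star, lam, rng)
            iterates.append(x)
            used += 1
        x = np.mean(iterates, axis=0)  # restart next epoch from the average
        epoch_len *= 2
        eta /= 2.0
    return x

rng = np.random.default_rng(0)
x_star = np.ones(5)
x_hat = epoch_sgd(np.zeros(5), x_star, lam=1.0, T=10_000, rng=rng)
print("suboptimality:", 0.5 * np.sum((x_hat - x_star) ** 2))

The averaging-and-restarting structure is what lets such schemes shed the log(T) factor that plain online-to-batch conversion incurs; a single averaged SGD run with step size O(1/(lam*t)) only guarantees O(log(T)/T) suboptimality for strongly-convex objectives.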

Original language: English (US)
Pages (from-to): 421-436
Number of pages: 16
Journal: Journal of Machine Learning Research
Volume: 19
State: Published - 2011
Externally published: Yes
Event: 24th International Conference on Learning Theory, COLT 2011 - Budapest, Hungary
Duration: Jul 9, 2011 – Jul 11, 2011

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence

Keywords

  • Regret Minimization
  • Stochastic Optimization
