Lower Bounds for Higher-Order Convex Optimization

Naman Agarwal, Elad Hazan

Research output: Contribution to journal › Conference article › peer-review

11 Scopus citations


State-of-the-art methods in mathematical optimization employ higher-order derivative information. We explore the limitations of higher-order optimization and prove that, even for convex optimization, a polynomial dependence on the approximation guarantee and on the higher-order smoothness parameters is necessary. This refutes the hope that higher-order smoothness and higher-order derivatives can lead to dimension-free polynomial-time algorithms for convex optimization. As a special case, we show Nesterov’s accelerated cubic regularization method and related higher-order methods to be nearly tight.
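To make the object of study concrete, the following is a minimal sketch of the kind of method the lower bounds address: one outer loop of cubic-regularized Newton in the Nesterov–Polyak style. The inner cubic model is minimized here by plain gradient descent, a simple stand-in for an exact subproblem solver; the step function, test matrix, and all parameter values are illustrative choices, not from the paper.

```python
import numpy as np

def cubic_reg_step(grad, hess, M, iters=500, lr=0.01):
    """Approximately minimize the cubic model
        m(s) = g^T s + 0.5 s^T H s + (M/6) ||s||^3
    by gradient descent on s (a toy stand-in for an exact solver)."""
    s = np.zeros_like(grad)
    for _ in range(iters):
        # grad of the cubic term (M/6)||s||^3 is (M/2)||s|| s
        model_grad = grad + hess @ s + 0.5 * M * np.linalg.norm(s) * s
        s -= lr * model_grad
    return s

# Toy smooth convex objective f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])

x = np.zeros(2)
for _ in range(10):
    g = A @ x - b          # gradient of f at x
    s = cubic_reg_step(g, A, M=1.0)
    x = x + s
# x approaches the minimizer A^{-1} b
```

The cubic term damps the Newton step when the quadratic model is unreliable; the lower bounds in the paper show that the iteration complexity of such schemes cannot avoid a polynomial dependence on the target accuracy.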

Original language: English (US)
Pages (from-to): 774-792
Number of pages: 19
Journal: Proceedings of Machine Learning Research
State: Published - 2018
Externally published: Yes
Event: 31st Annual Conference on Learning Theory, COLT 2018 - Stockholm, Sweden
Duration: Jul 6 2018 - Jul 9 2018

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability


Keywords

  • Convex Optimization
  • Higher-Order Optimization
  • Lower Bounds
  • Newton’s method
  • Second-Order Optimization


