Lower Bounds for Higher-Order Convex Optimization

Naman Agarwal, Elad Hazan

Research output: Contribution to journal › Conference article › peer-review

Abstract

State-of-the-art methods in mathematical optimization employ higher-order derivative information. We explore the limitations of higher-order optimization and prove that even for convex optimization, a polynomial dependence on the approximation guarantee and on the higher-order smoothness parameters is necessary. This refutes the hope that higher-order smoothness and higher-order derivatives can lead to dimension-free polynomial-time algorithms for convex optimization. As a special case, we show that Nesterov’s accelerated cubic regularization method and higher-order methods are nearly tight.
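
For orientation on the second-order case referenced above, the step taken by cubic regularization minimizes a second-order Taylor model of the objective with a cubic penalty. The display below is a standard sketch of that subproblem, with M assumed to upper-bound the Lipschitz constant of the Hessian; it is background for the method being lower-bounded, not a construction from this paper.

\[
x_{k+1} \in \arg\min_{y \in \mathbb{R}^{n}} \; f(x_k) + \nabla f(x_k)^{\top}(y - x_k)
 + \tfrac{1}{2}\,(y - x_k)^{\top} \nabla^{2} f(x_k)\,(y - x_k)
 + \tfrac{M}{6}\,\lVert y - x_k \rVert^{3}.
\]

Higher-order (p-th order) methods of the kind considered here replace the quadratic model with a degree-p Taylor expansion and a regularization term of order p + 1.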

Original language: English (US)
Pages (from-to): 774-792
Number of pages: 19
Journal: Proceedings of Machine Learning Research
Volume: 75
State: Published - 2018
Externally published: Yes
Event: 31st Annual Conference on Learning Theory, COLT 2018 - Stockholm, Sweden
Duration: Jul 6, 2018 – Jul 9, 2018

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

Keywords

  • Convex Optimization
  • Higher-Order Optimization
  • Lower Bounds
  • Newton’s method
  • Second-Order Optimization
