Open Problem: Black-Box Reductions & Adaptive Gradient Methods for Nonconvex Optimization

Xinyi Chen, Elad Hazan

Research output: Contribution to journal › Conference article › peer-review

Abstract

We describe an open problem: reduce offline nonconvex stochastic optimization to regret minimization in online convex optimization. The conjectured reduction aims to make progress on explaining the success of adaptive gradient methods for deep learning. A prize of $500 is offered to the winner.

Original language: English (US)
Pages (from-to): 5317-5324
Number of pages: 8
Journal: Proceedings of Machine Learning Research
Volume: 247
State: Published - 2024
Event: 37th Annual Conference on Learning Theory, COLT 2024 - Edmonton, Canada
Duration: Jun 30 2024 - Jul 3 2024

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
