Abstract
We describe an open problem: reducing offline nonconvex stochastic optimization to regret minimization in online convex optimization. The conjectured reduction aims to help explain the success of adaptive gradient methods for deep learning. A prize of $500 is offered to the winner.
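The abstract only states the problem, but the flavor of reduction it refers to can be made concrete. Below is a minimal, hypothetical Python sketch in which an online convex optimization learner (here, AdaGrad-style online gradient descent over a small ball) proposes update increments for a nonconvex stochastic objective, so that low regret for the learner corresponds to small average gradient norms for the offline iterates. The toy objective, step sizes, and function names are illustrative assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x):
    """Stochastic gradient of a toy nonconvex objective
    f(x) = sum(x**2 / 2 + cos(x)), plus Gaussian noise.
    (Hypothetical objective, chosen only for illustration.)"""
    return x - np.sin(x) + 0.1 * rng.standard_normal(x.shape)

def online_to_nonconvex(x0, T=2000, D=0.01):
    """Sketch of an online-to-nonconvex style reduction: an OCO
    learner picks increments delta_t from a ball of radius D, the
    offline iterate moves by delta_t, and the learner suffers the
    linear loss <g_t, delta>. The learner here is online gradient
    descent with an AdaGrad-style adaptive step size."""
    x = x0.copy()
    delta = np.zeros_like(x)
    sq_sum = 1e-12  # accumulator for the adaptive step size
    for _ in range(T):
        x = x + delta                  # offline (nonconvex) update
        g = stoch_grad(x)              # linear loss vector fed to the learner
        sq_sum += np.dot(g, g)
        delta = delta - (D / np.sqrt(sq_sum)) * g  # adaptive OGD step
        norm = np.linalg.norm(delta)   # project back onto the D-ball
        if norm > D:
            delta *= D / norm
    return x

x_final = online_to_nonconvex(np.full(5, 3.0))
print(np.linalg.norm(stoch_grad(x_final)))
```

The open problem asks, roughly, whether a black-box reduction of this kind can be established in general, so that regret bounds for adaptive OCO methods yield guarantees for offline nonconvex stochastic optimization.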
Original language | English (US)
---|---
Pages (from-to) | 5317-5324
Number of pages | 8
Journal | Proceedings of Machine Learning Research
Volume | 247
State | Published - 2024
Event | 37th Annual Conference on Learning Theory, COLT 2024, Edmonton, Canada; Jun 30 2024 → Jul 3 2024
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability