Abstract
State-of-the-art models are now trained with billions of parameters, reaching hardware limits in terms of memory consumption. This has created a recent demand for memory-efficient optimizers. To this end, we investigate the limits and performance tradeoffs of memory-efficient adaptively preconditioned gradient methods. We propose extreme tensoring for high-dimensional stochastic optimization, showing that an optimizer needs very little memory to benefit from adaptive preconditioning. Our technique applies to arbitrary models (not necessarily with tensor-shaped parameters), and is accompanied by regret and convergence guarantees, which shed light on the tradeoffs between preconditioner quality and expressivity. On a large-scale NLP model, we reduce the optimizer memory overhead by three orders of magnitude, without degrading performance.
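The abstract describes the method only at a high level. Below is a minimal, hypothetical sketch of the factored-accumulator idea behind extreme tensoring: a flat parameter vector is reshaped into an order-k tensor, and one small second-moment accumulator is kept per tensor mode, so the optimizer stores k short vectors instead of one full-length statistic. The function name `extreme_tensoring_step`, the accumulation rule, and the exponent choice are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def extreme_tensoring_step(w, grad, accums, shape, lr=0.1, eps=1e-8):
    """One step of a factored adaptive preconditioner (illustrative sketch).

    Assumption: the flat parameter `w` is viewed as an order-k tensor of
    shape `shape`, and the diagonal preconditioner is approximated by the
    outer product of k per-mode accumulators. The paper's exact exponents
    and accumulation rule may differ.
    """
    g = grad.reshape(shape)
    k = len(shape)
    # Accumulate squared gradients, marginalized over all other modes,
    # into one small vector per mode.
    for i in range(k):
        other_axes = tuple(j for j in range(k) if j != i)
        accums[i] += (g ** 2).sum(axis=other_axes)
    # The outer product of per-mode statistics stands in for the full
    # diagonal second-moment vector; the 1/(2k) root makes the product
    # scale roughly like an inverse square root overall.
    precond = np.ones(shape)
    for i in range(k):
        factor = (accums[i] + eps) ** (1.0 / (2 * k))
        precond *= factor.reshape([-1 if j == i else 1 for j in range(k)])
    w_new = w - lr * (g / precond).reshape(w.shape)
    return w_new, accums

# Usage: a 4096-dimensional parameter stored as an 8x8x8x8 tensor needs
# only 4 * 8 = 32 accumulator entries instead of 4096.
shape = (8, 8, 8, 8)
w = np.random.randn(int(np.prod(shape)))
accums = [np.zeros(n) for n in shape]
grad = np.random.randn(w.size)
w, accums = extreme_tensoring_step(w, grad, accums, shape)
```

The memory saving grows with the order of the tensorization: splitting a dimension-d parameter into k modes of size roughly d^(1/k) reduces optimizer state from O(d) to O(k d^(1/k)), which is the "three orders of magnitude" regime the abstract refers to for large models.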
| Original language | English (US) |
|---|---|
| State | Published - 2020 |
| Event | 8th International Conference on Learning Representations, ICLR 2020 - Addis Ababa, Ethiopia |
| Duration | Apr 30 2020 → … |
Conference
| Conference | 8th International Conference on Learning Representations, ICLR 2020 |
|---|---|
| Country/Territory | Ethiopia |
| City | Addis Ababa |
| Period | 4/30/20 → … |
All Science Journal Classification (ASJC) codes
- Education
- Linguistics and Language
- Language and Linguistics
- Computer Science Applications