Abstract
Conventional wisdom in deep learning states that increasing depth improves expressiveness but complicates optimization. This paper suggests that, sometimes, increasing depth can speed up optimization. The effect of depth on optimization is decoupled from expressiveness by focusing on settings where additional layers amount to overparameterization – linear neural networks, a well-studied model. Theoretical analysis, as well as experiments, show that here depth acts as a preconditioner which may accelerate convergence. Even on simple convex problems such as linear regression with ℓp loss, p > 2, gradient descent can benefit from transitioning to a non-convex overparameterized objective, more than it would from some common acceleration schemes. We also prove that it is mathematically impossible to obtain the acceleration effect of overparameterization via gradients of any regularizer.
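The following is a minimal sketch of the phenomenon the abstract describes, under assumed settings that are not taken from the paper: synthetic data, ℓ4 regression (p = 4), a depth-2 reparameterization w = W1ᵀw2, and illustrative hyperparameters that may need tuning. It only contrasts plain gradient descent on the convex objective with gradient descent on the overparameterized one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 200, 10, 4
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true

def loss_and_grad(w):
    """l_p regression loss (here p = 4) and its gradient w.r.t. the end-to-end vector w."""
    r = X @ w - y
    loss = np.mean(np.abs(r) ** p) / p
    grad = X.T @ (np.sign(r) * np.abs(r) ** (p - 1)) / n
    return loss, grad

lr, steps = 1e-3, 5000  # illustrative choices, not tuned to match the paper

# Baseline: gradient descent directly on the convex objective in w.
w = np.zeros(d)
for _ in range(steps):
    _, g = loss_and_grad(w)
    w -= lr * g

# Overparameterized: w_eff = W1.T @ w2 (a depth-2 linear network); gradients by chain rule.
W1 = np.eye(d) * 0.1   # small initialization, an assumption of this sketch
w2 = np.full(d, 0.1)
for _ in range(steps):
    w_eff = W1.T @ w2
    _, e = loss_and_grad(w_eff)   # end-to-end gradient dL/dw_eff
    gW1 = np.outer(w2, e)         # dL/dW1[i, j] = w2[i] * e[j]
    gw2 = W1 @ e                  # dL/dw2[i]   = sum_j W1[i, j] * e[j]
    W1 -= lr * gW1
    w2 -= lr * gw2

print("direct GD loss:         ", loss_and_grad(w)[0])
print("overparameterized loss: ", loss_and_grad(W1.T @ w2)[0])
```

Both runs minimize the same underlying loss; only the parameterization differs, which is the sense in which any speedup here is "implicit acceleration" rather than added expressiveness.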
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 244-253 |
| Number of pages | 10 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 80 |
| State | Published - 2018 |
| Externally published | Yes |
| Event | 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden (Jul 10–15, 2018) |
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence