Abstract
In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime, where the number of observations is smaller than the number of parameters in the model. We show that with quadratic activations, the optimization landscape of training such shallow neural networks has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for arbitrary training data consisting of input/output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.
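To make the planted, realizable setting concrete, the following is a minimal sketch (not the paper's code or experimental setup): Gaussian inputs, labels produced by a shallow network with quadratic activations and planted weights, and gradient descent from a small random initialization. The dimensions `n`, `d`, `k`, the step size, and the iteration count are illustrative assumptions.

```python
# Minimal sketch of the planted Gaussian model with quadratic activations.
# All sizes and hyperparameters below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 50, 60                        # n observations, d inputs, k hidden units
                                             # over-parameterized: k * d = 3000 > n = 200
X = rng.standard_normal((n, d))              # i.i.d. Gaussian inputs
W_star = rng.standard_normal((k, d)) / np.sqrt(d)   # planted weight coefficients
y = np.sum((X @ W_star.T) ** 2, axis=1)      # labels: quadratic activations, unit output layer

W = 0.1 * rng.standard_normal((k, d))        # small random initialization
lr = 1e-3
for it in range(2000):
    Z = X @ W.T                              # hidden pre-activations, shape (n, k)
    residual = np.sum(Z ** 2, axis=1) - y    # per-sample prediction error
    # gradient of (1/(2n)) * sum(residual^2) with respect to W
    grad = (2.0 / n) * (Z * residual[:, None]).T @ X
    W -= lr * grad
    if it % 500 == 0:
        print(f"iter {it:4d}  loss {0.5 * np.mean(residual ** 2):.3e}")
```

In this sketch the training loss typically decreases toward zero, consistent with the benign landscape the paper establishes for quadratic activations in the over-parameterized regime; the specific step size and convergence speed here are not claims from the paper.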
Original language | English (US)
---|---
Article number | 8409482
Pages (from-to) | 742-769
Number of pages | 28
Journal | IEEE Transactions on Information Theory
Volume | 65
Issue number | 2
DOIs |
State | Published - Feb 2019
Externally published | Yes
All Science Journal Classification (ASJC) codes
- Information Systems
- Computer Science Applications
- Library and Information Sciences
Keywords
- Nonconvex optimization
- over-parametrized neural networks
- random matrix theory