Theoretical insights into the optimization landscape of over-parameterized shallow neural networks

Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee

Research output: Contribution to journal › Article

18 Scopus citations

Abstract

In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime, where the number of observations is smaller than the number of parameters in the model. We show that with quadratic activations, the optimization landscape of training such shallow neural networks has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for arbitrary training data of input/output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model in which the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.
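As an illustration of the setting the abstract describes (not the paper's analysis or its exact algorithm), the following sketch runs plain gradient descent on a one-hidden-layer network with quadratic activations in the over-parameterized regime, with Gaussian inputs and labels produced by planted weights; all dimensions, the step size, and the iteration count are arbitrary choices for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 20, 8, 25                       # n observations < k*d parameters: over-parameterized

X = rng.standard_normal((n, d))           # i.i.d. Gaussian inputs
W_star = rng.standard_normal((3, d)) / np.sqrt(d)   # planted weights (realizable model)
y = ((X @ W_star.T) ** 2).sum(axis=1)     # labels via quadratic activation

W = 0.2 * rng.standard_normal((k, d))     # random small initialization
lr, losses = 0.01, []
for _ in range(3000):
    H = X @ W.T                           # hidden pre-activations, shape (n, k)
    r = (H ** 2).sum(axis=1) - y          # residuals of the quadratic-activation net
    losses.append(0.5 * np.mean(r ** 2))  # mean-squared-error loss
    W -= lr * (2.0 / n) * (H * r[:, None]).T @ X   # gradient step on the hidden weights

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.6f}")
```

On this small instance the training loss decreases steadily toward zero, consistent with the paper's message that local search can find globally optimal models in this regime; the sketch does not reproduce the paper's initialization scheme or its convergence-rate guarantees.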

Original language: English (US)
Article number: 8409482
Pages (from-to): 742-769
Number of pages: 28
Journal: IEEE Transactions on Information Theory
Volume: 65
Issue number: 2
DOIs
State: Published - Feb 2019
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences

Keywords

  • Nonconvex optimization
  • over-parametrized neural networks
  • random matrix theory
