Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima

Simon S. Du, Jason D. Lee, Yuandong Tian, Barnabas Poczos, Aarti Singh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We consider the problem of learning a one-hidden-layer neural network with a non-overlapping convolutional layer and ReLU activation, i.e., $f(\mathbf{Z}, \mathbf{w}, \mathbf{a}) = \sum_j a_j \sigma(\mathbf{w}^\top \mathbf{Z}_j)$, in which both the convolutional weights $\mathbf{w}$ and the output weights $\mathbf{a}$ are parameters to be learned. When the labels are the outputs from a teacher network of the same architecture with fixed weights $(\mathbf{w}^*, \mathbf{a}^*)$, we prove that with Gaussian input $\mathbf{Z}$, there is a spurious local minimizer. Surprisingly, in the presence of the spurious local minimizer, gradient descent with weight normalization from randomly initialized weights can still be proven to recover the true parameters with constant probability, which can be boosted to probability 1 with multiple restarts. We also show that with constant probability, the same procedure could also converge to the spurious local minimum, showing that the local minimum plays a non-trivial role in the dynamics of gradient descent. Furthermore, a quantitative analysis shows that the gradient descent dynamics has two phases: it starts off slow, but converges much faster after several iterations.
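The record itself contains no code; the following is a minimal NumPy sketch of the student/teacher setup the abstract describes. Only the architecture $f(\mathbf{Z}, \mathbf{w}, \mathbf{a}) = \sum_j a_j \sigma(\mathbf{w}^\top \mathbf{Z}_j)$ on Gaussian patches and the use of weight normalization follow the abstract; the dimensions, step size, batch size, and the use of mini-batch gradients in place of the paper's population gradients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

k, p = 8, 16                        # k non-overlapping patches of dimension p (assumed sizes)
steps, lr, batch = 3000, 0.05, 256  # illustrative hyperparameters, not from the paper

def relu(x):
    return np.maximum(x, 0.0)

def predict(Z, w, a):
    # f(Z, w, a) = sum_j a_j * relu(w^T Z_j); Z has shape (batch, k, p)
    return relu(Z @ w) @ a

# teacher network: same architecture with fixed weights (w*, a*)
w_star = rng.standard_normal(p)
w_star /= np.linalg.norm(w_star)
a_star = rng.standard_normal(k)

# randomly initialized student; weight normalization reparametrizes w = v / ||v||
v = rng.standard_normal(p)
a = 0.1 * rng.standard_normal(k)

for t in range(steps):
    Z = rng.standard_normal((batch, k, p))        # fresh Gaussian input patches
    w = v / np.linalg.norm(v)
    h = Z @ w                                     # (batch, k) patch pre-activations
    r = relu(h) @ a - predict(Z, w_star, a_star)  # residuals against teacher labels
    # gradients of the squared loss (1/2) * mean(r^2)
    grad_a = relu(h).T @ r / batch
    grad_w = np.einsum('b,bk,bkp->p', r, (h > 0) * a, Z) / batch
    # chain rule through v -> v/||v||: project out the radial direction
    grad_v = (grad_w - (grad_w @ w) * w) / np.linalg.norm(v)
    v -= lr * grad_v
    a -= lr * grad_a

w = v / np.linalg.norm(v)
print("cos(w, w*):", w @ w_star, "  ||a - a*||:", np.linalg.norm(a - a_star))
```

Per the abstract, a single run from random initialization should recover $(\mathbf{w}^*, \mathbf{a}^*)$ only with constant probability and may instead converge to the spurious local minimum, so in practice one would rerun the sketch with several random seeds and keep the best fit.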

Original language: English (US)
Title of host publication: 35th International Conference on Machine Learning, ICML 2018
Editors: Andreas Krause, Jennifer Dy
Publisher: International Machine Learning Society (IMLS)
Pages: 2142-2159
Number of pages: 18
ISBN (Electronic): 9781510867963
State: Published - 2018
Externally published: Yes
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: Jul 10, 2018 - Jul 15, 2018

Publication series

Name: 35th International Conference on Machine Learning, ICML 2018
Volume: 3

Other

Other: 35th International Conference on Machine Learning, ICML 2018
Country/Territory: Sweden
City: Stockholm
Period: 7/10/18 - 7/15/18

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software
