TY - GEN

T1 - Gradient Descent Learns One-hidden-layer CNN

T2 - 35th International Conference on Machine Learning, ICML 2018

AU - Du, Simon S.

AU - Lee, Jason D.

AU - Tian, Yuandong

AU - Poczos, Barnabas

AU - Singh, Aarti

N1 - Funding Information:
This research was partly funded by NSF grant IIS 1563887, AFRL grant FA8750-17-2-0212, and DARPA D17AP00001. J.D.L. acknowledges support of the ARO under MURI Award W911NF-11-1-0303. This is part of the collaboration between US DOD, UK MOD and UK Engineering and Physical Research Council (EPSRC) under the Multidisciplinary University Research Initiative.
Publisher Copyright:
© 2018 35th International Conference on Machine Learning, ICML 2018. All rights reserved.

PY - 2018

Y1 - 2018

N2 - We consider the problem of learning a one-hidden-layer neural network with non-overlapping convolutional layer and ReLU activation, i.e., f(Z, w, a) = Σ_j a_j σ(w⊤Z_j), in which both the convolutional weights w and the output weights a are parameters to be learned. When the labels are the outputs from a teacher network of the same architecture with fixed weights (w∗, a∗), we prove that with Gaussian input Z, there is a spurious local minimizer. Surprisingly, in the presence of the spurious local minimizer, gradient descent with weight normalization from randomly initialized weights can still be proven to recover the true parameters with constant probability, which can be boosted to probability 1 with multiple restarts. We also show that with constant probability, the same procedure could also converge to the spurious local minimum, showing that the local minimum plays a non-trivial role in the dynamics of gradient descent. Furthermore, a quantitative analysis shows that the gradient descent dynamics has two phases: it starts off slow, but converges much faster after several iterations.

AB - We consider the problem of learning a one-hidden-layer neural network with non-overlapping convolutional layer and ReLU activation, i.e., f(Z, w, a) = Σ_j a_j σ(w⊤Z_j), in which both the convolutional weights w and the output weights a are parameters to be learned. When the labels are the outputs from a teacher network of the same architecture with fixed weights (w∗, a∗), we prove that with Gaussian input Z, there is a spurious local minimizer. Surprisingly, in the presence of the spurious local minimizer, gradient descent with weight normalization from randomly initialized weights can still be proven to recover the true parameters with constant probability, which can be boosted to probability 1 with multiple restarts. We also show that with constant probability, the same procedure could also converge to the spurious local minimum, showing that the local minimum plays a non-trivial role in the dynamics of gradient descent. Furthermore, a quantitative analysis shows that the gradient descent dynamics has two phases: it starts off slow, but converges much faster after several iterations.

UR - http://www.scopus.com/inward/record.url?scp=85057274591&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85057274591&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85057274591

T3 - 35th International Conference on Machine Learning, ICML 2018

SP - 2142

EP - 2159

BT - 35th International Conference on Machine Learning, ICML 2018

A2 - Krause, Andreas

A2 - Dy, Jennifer

PB - International Machine Learning Society (IMLS)

Y2 - 10 July 2018 through 15 July 2018

ER -