TY - JOUR
T1 - A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network
AU - Zhou, Mo
AU - Ge, Rong
AU - Jin, Chi
N1 - Funding Information:
Rong Ge and Mo Zhou are supported in part by NSF-Simons Research Collaborations on the Mathematical and Scientific Foundations of Deep Learning (THEORINET), NSF Award CCF-1704656, CCF-1845171 (CAREER), CCF-1934964 (Tripods), a Sloan Research Fellowship, and a Google Faculty Research Award. Part of the work was done when Rong Ge and Chi Jin were visiting the Institute for Advanced Study for the “Special Year on Optimization, Statistics, and Theoretical Machine Learning” program.
Publisher Copyright:
© 2021 M. Zhou, R. Ge & C. Jin.
PY - 2021
Y1 - 2021
N2 - While over-parameterization is widely believed to be crucial for the success of optimization for neural networks, most existing theories on over-parameterization do not fully explain the reason: they either work in the Neural Tangent Kernel regime, where neurons do not move much, or require an enormous number of neurons. In practice, when the data is generated by a teacher neural network, even mildly over-parameterized neural networks can achieve 0 loss and recover the directions of the teacher neurons. In this paper we develop a local convergence theory for mildly over-parameterized two-layer neural networks. We show that as long as the loss is already lower than a threshold (polynomial in the relevant parameters), all student neurons in an over-parameterized two-layer neural network will converge to one of the teacher neurons, and the loss will go to 0. Our result holds for any number of student neurons as long as it is at least as large as the number of teacher neurons, and our convergence rate is independent of the number of student neurons. A key component of our analysis is a new characterization of the local optimization landscape: we show that the gradient satisfies a special case of the Łojasiewicz property, which is different from the local strong convexity or PL conditions used in previous work.
AB - While over-parameterization is widely believed to be crucial for the success of optimization for neural networks, most existing theories on over-parameterization do not fully explain the reason: they either work in the Neural Tangent Kernel regime, where neurons do not move much, or require an enormous number of neurons. In practice, when the data is generated by a teacher neural network, even mildly over-parameterized neural networks can achieve 0 loss and recover the directions of the teacher neurons. In this paper we develop a local convergence theory for mildly over-parameterized two-layer neural networks. We show that as long as the loss is already lower than a threshold (polynomial in the relevant parameters), all student neurons in an over-parameterized two-layer neural network will converge to one of the teacher neurons, and the loss will go to 0. Our result holds for any number of student neurons as long as it is at least as large as the number of teacher neurons, and our convergence rate is independent of the number of student neurons. A key component of our analysis is a new characterization of the local optimization landscape: we show that the gradient satisfies a special case of the Łojasiewicz property, which is different from the local strong convexity or PL conditions used in previous work.
UR - http://www.scopus.com/inward/record.url?scp=85162688445&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85162688445&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85162688445
SN - 2640-3498
VL - 134
SP - 4577
EP - 4632
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 34th Conference on Learning Theory, COLT 2021
Y2 - 15 August 2021 through 19 August 2021
ER -