TY - GEN
T1 - Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks
AU - Arora, Sanjeev
AU - Du, Simon S.
AU - Hu, Wei
AU - Li, Zhiyuan
AU - Wang, Ruosong
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Recent works have cast some light on the mystery of why deep nets fit any data and generalize despite being very overparametrized. This paper analyzes training and generalization for a simple 2-layer ReLU net with random initialization, and provides the following improvements over recent works: (i) Using a tighter characterization of training speed than recent papers, an explanation for why training a neural net with random labels leads to slower training, as originally observed in [Zhang et al. ICLR'17]. (ii) Generalization bound independent of network size, using a data-dependent complexity measure. Our measure distinguishes clearly between random labels and true labels on MNIST and CIFAR, as shown by experiments. Moreover, recent papers require sample complexity to increase (slowly) with the size, while our sample complexity is completely independent of the network size. (iii) Learnability of a broad class of smooth functions by 2-layer ReLU nets trained via gradient descent. The key idea is to track dynamics of training and generalization via properties of a related kernel.
AB - Recent works have cast some light on the mystery of why deep nets fit any data and generalize despite being very overparametrized. This paper analyzes training and generalization for a simple 2-layer ReLU net with random initialization, and provides the following improvements over recent works: (i) Using a tighter characterization of training speed than recent papers, an explanation for why training a neural net with random labels leads to slower training, as originally observed in [Zhang et al. ICLR'17]. (ii) Generalization bound independent of network size, using a data-dependent complexity measure. Our measure distinguishes clearly between random labels and true labels on MNIST and CIFAR, as shown by experiments. Moreover, recent papers require sample complexity to increase (slowly) with the size, while our sample complexity is completely independent of the network size. (iii) Learnability of a broad class of smooth functions by 2-layer ReLU nets trained via gradient descent. The key idea is to track dynamics of training and generalization via properties of a related kernel.
UR - http://www.scopus.com/inward/record.url?scp=85077993258&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077993258&partnerID=8YFLogxK
M3 - Conference contribution
T3 - 36th International Conference on Machine Learning, ICML 2019
SP - 477
EP - 502
BT - 36th International Conference on Machine Learning, ICML 2019
PB - International Machine Learning Society (IMLS)
T2 - 36th International Conference on Machine Learning, ICML 2019
Y2 - 9 June 2019 through 15 June 2019
ER -