TY - JOUR
T1 - Kernel and Rich Regimes in Overparametrized Models
AU - Woodworth, Blake
AU - Gunasekar, Suriya
AU - Lee, Jason D.
AU - Moroshko, Edward
AU - Savarese, Pedro
AU - Golan, Itay
AU - Soudry, Daniel
AU - Srebro, Nathan
N1 - Funding Information:
This work was supported by NSF Grant 1764032. BW is supported by a Google PhD Research Fellowship. DS was supported by the Israel Science Foundation (grant No. 31/1031). This work was partially done while the authors were visiting the Simons Institute for the Theory of Computing.
Publisher Copyright:
© 2020 B. Woodworth, S. Gunasekar, J.D. Lee, E. Moroshko, P. Savarese, I. Golan, D. Soudry & N. Srebro.
PY - 2020
Y1 - 2020
N2 - A recent line of work studies overparametrized neural networks in the “kernel regime,” i.e., when during training the network behaves as a kernelized linear predictor, and thus, training with gradient descent has the effect of finding the corresponding minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat et al. (2019), we show how the scale of the initialization controls the transition between the “kernel” (aka lazy) and “rich” (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a family of simple depth-D linear networks that exhibit an interesting and meaningful transition between the kernel and rich regimes, and highlight an interesting role for the width of the models. We further demonstrate this transition empirically for matrix factorization and multilayer non-linear networks.
UR - http://www.scopus.com/inward/record.url?scp=85161296509&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85161296509&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85161296509
SN - 2640-3498
VL - 125
SP - 3635
EP - 3673
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 33rd Conference on Learning Theory, COLT 2020
Y2 - 9 July 2020 through 12 July 2020
ER -