Kernel and Rich Regimes in Overparametrized Models

Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro

Research output: Contribution to journal › Conference article › peer-review

89 Scopus citations

Abstract

A recent line of work studies overparametrized neural networks in the “kernel regime,” i.e., when during training the network behaves as a kernelized linear predictor, and thus, training with gradient descent has the effect of finding the corresponding minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat et al. (2019), we show how the scale of the initialization controls the transition between the “kernel” (aka lazy) and “rich” (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a family of simple depth-D linear networks that exhibit an interesting and meaningful transition between the kernel and rich regimes, and highlight an interesting role for the width of the models. We further demonstrate this transition empirically for matrix factorization and multilayer non-linear networks.
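The transition the abstract describes can be illustrated with a small experiment. The sketch below is a minimal NumPy illustration, not code from the paper: the diagonal parametrization beta = u*u - v*v, the helper name train_diag_net, the step-size scaling, and the data are illustrative assumptions loosely modeled on the depth-2 case of the linear networks mentioned above. It runs gradient descent from the initialization u = v = alpha on an underdetermined sparse regression problem; a small alpha should yield a sparse, small-l1-norm interpolant (rich regime), while a large alpha should approach the minimum-l2-norm interpolant (kernel regime).

```python
import numpy as np

def train_diag_net(X, y, alpha, base_lr=1e-3, steps=100_000):
    """Gradient descent on a depth-2 'diagonal' linear model
    beta = u*u - v*v, initialized at u = v = alpha.
    (Hypothetical helper; a simplified stand-in for the paper's
    depth-D linear networks.)"""
    n, d = X.shape
    u = alpha * np.ones(d)
    v = alpha * np.ones(d)
    # Larger initializations need a smaller step size to stay stable.
    lr = base_lr / max(1.0, alpha ** 2)
    for _ in range(steps):
        beta = u * u - v * v
        grad_beta = X.T @ (X @ beta - y) / n   # squared-loss gradient w.r.t. beta
        u -= lr * 2.0 * u * grad_beta          # chain rule: d(beta)/d(u) = 2u
        v += lr * 2.0 * v * grad_beta          # chain rule: d(beta)/d(v) = -2v
    return u * u - v * v

# Underdetermined sparse regression: many zero-loss solutions exist,
# so the initialization scale decides which one gradient descent picks.
rng = np.random.default_rng(0)
n, d = 10, 40
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:3] = 1.0                            # sparse ground truth
y = X @ beta_star

for alpha in (0.01, 1.0, 100.0):
    beta = train_diag_net(X, y, alpha)
    print(f"alpha={alpha:>6}:  ||beta||_1 = {np.abs(beta).sum():5.2f}   "
          f"||beta||_2 = {np.linalg.norm(beta):5.2f}   "
          f"train loss = {np.mean((X @ beta - y) ** 2):.2e}")
```

The expected qualitative outcome is that all three runs reach near-zero training loss, but the l1 norm of the recovered beta shrinks toward that of the sparse solution as alpha decreases, while the large-alpha run spreads weight across all coordinates as a minimum-l2 (kernel-like) solution would.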

Original language: English (US)
Pages (from-to): 3635-3673
Number of pages: 39
Journal: Proceedings of Machine Learning Research
Volume: 125
State: Published - 2020
Event: 33rd Conference on Learning Theory, COLT 2020 - Virtual, Online, Austria
Duration: Jul 9, 2020 - Jul 12, 2020

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
