TY - JOUR

T1 - Neural Networks can Learn Representations with Gradient Descent

AU - Damian, Alex

AU - Lee, Jason D.

AU - Soltanolkotabi, Mahdi

N1 - Funding Information:
AD acknowledges support from a NSF Graduate Research Fellowship. JDL and AD acknowledge support of the ARO under MURI Award W911NF-11-1-0304, the Sloan Research Fellowship, NSF CCF 2002272, NSF IIS 2107304, ONR Young Investigator Award, and NSF-CAREER under award #2144994. MS is supported by the Packard Fellowship in Science and Engineering, a Sloan Fellowship in Mathematics, an NSF-CAREER under award #1846369, DARPA Learning with Less Labels (LwLL) and FastNICS programs, and NSF-CIF awards #1813877 and #2008443.
Publisher Copyright:
© 2022 A. Damian, J.D. Lee & M. Soltanolkotabi.

PY - 2022

Y1 - 2022

N2 - Significant theoretical work has established that in specific regimes, neural networks trained by gradient descent behave like kernel methods. However, in practice, it is known that neural networks strongly outperform their associated kernels. In this work, we explain this gap by demonstrating that there is a large class of functions which cannot be efficiently learned by kernel methods but can be easily learned with gradient descent on a two-layer neural network outside the kernel regime by learning representations that are relevant to the target task. We also demonstrate that these representations allow for efficient transfer learning, which is impossible in the kernel regime. Specifically, we consider the problem of learning polynomials which depend on only a few relevant directions, i.e. of the form f⋆(x) = g(Ux) where U : ℝ^d → ℝ^r with d ≫ r. When the degree of f⋆ is p, it is known that n ≍ d^p samples are necessary to learn f⋆ in the kernel regime. Our primary result is that gradient descent learns a representation of the data which depends only on the directions relevant to f⋆. This results in an improved sample complexity of n ≍ d^2 and enables transfer learning with sample complexity independent of d.

AB - Significant theoretical work has established that in specific regimes, neural networks trained by gradient descent behave like kernel methods. However, in practice, it is known that neural networks strongly outperform their associated kernels. In this work, we explain this gap by demonstrating that there is a large class of functions which cannot be efficiently learned by kernel methods but can be easily learned with gradient descent on a two-layer neural network outside the kernel regime by learning representations that are relevant to the target task. We also demonstrate that these representations allow for efficient transfer learning, which is impossible in the kernel regime. Specifically, we consider the problem of learning polynomials which depend on only a few relevant directions, i.e. of the form f⋆(x) = g(Ux) where U : ℝ^d → ℝ^r with d ≫ r. When the degree of f⋆ is p, it is known that n ≍ d^p samples are necessary to learn f⋆ in the kernel regime. Our primary result is that gradient descent learns a representation of the data which depends only on the directions relevant to f⋆. This results in an improved sample complexity of n ≍ d^2 and enables transfer learning with sample complexity independent of d.

KW - gradient descent

KW - kernel

KW - neural network

KW - representation learning

KW - transfer learning

UR - http://www.scopus.com/inward/record.url?scp=85164674865&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85164674865&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85164674865

SN - 2640-3498

VL - 178

SP - 5413

EP - 5452

JO - Proceedings of Machine Learning Research

JF - Proceedings of Machine Learning Research

T2 - 35th Conference on Learning Theory, COLT 2022

Y2 - 2 July 2022 through 5 July 2022

ER -