Beyond lazy training for over-parameterized tensor decomposition

Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge

Research output: Contribution to journal › Conference article › peer-review


Abstract

Over-parametrization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optima. In this paper we study a closely related tensor decomposition problem: given an l-th order tensor in (ℝ^d)^⊗l of rank r (where r ≪ d), can variants of gradient descent find a rank-m decomposition where m > r? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least m = Ω(d^(l−1)), while a variant of gradient descent can find an approximate tensor when m = O*(r^(2.5l) log d). Our results show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and utilize certain low-rank structure in the data.
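
For concreteness, the following is a minimal numerical sketch of the over-parameterized objective described in the abstract: plain gradient descent on the squared error between a symmetric l-th order tensor of rank r and a sum of m rank-one terms with m > r. The dimensions, step size, initialization scale, and iteration count are illustrative assumptions, and the update rule is vanilla gradient descent rather than the specific variant analyzed in the paper.

```python
# Minimal sketch (not the paper's exact algorithm variant): gradient descent on
# the over-parameterized objective  (1/2) || sum_i u_i^{(⊗l)} - T ||_F^2
# with m > r components. All sizes and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, r, l = 8, 2, 3        # ambient dimension, true rank, tensor order
m = 10                   # over-parameterized number of components (m > r)

def outer_power(v, order):
    """Rank-one tensor v^{(⊗ order)} built by repeated outer products."""
    t = v
    for _ in range(order - 1):
        t = np.multiply.outer(t, v)
    return t

# Ground-truth symmetric tensor T = sum_{j=1}^r a_j^{(⊗ l)} from unit vectors.
A = rng.standard_normal((r, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
T = sum(outer_power(A[j], l) for j in range(r))

# Over-parameterized components U (m x d), small random initialization.
U = 0.1 * rng.standard_normal((m, d))

def residual(U):
    return sum(outer_power(U[i], l) for i in range(m)) - T

lr = 0.05
for step in range(2000):
    R = residual(U)
    # Gradient w.r.t. u_i is  l * R(u_i, ..., u_i, ·): contract R with u_i in l-1 modes.
    G = np.empty_like(U)
    for i in range(m):
        contracted = R
        for _ in range(l - 1):
            contracted = np.tensordot(contracted, U[i], axes=([0], [0]))
        G[i] = l * contracted
    U -= lr * G

print("squared residual after training:", np.sum(residual(U) ** 2))
```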

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: Dec 6, 2020 - Dec 12, 2020

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
