Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models

Alex Damian, Eshaan Nichani, Rong Ge, Jason D. Lee

Research output: Contribution to journal › Conference article › peer-review

Abstract

We focus on the task of learning a single index model σ(w* · x) with respect to the isotropic Gaussian distribution in d dimensions. Prior work has shown that the sample complexity of learning w* is governed by the information exponent k* of the link function σ, which is defined as the index of the first nonzero Hermite coefficient of σ. Ben Arous et al. [1] showed that n ≳ d^(k*−1) samples suffice for learning w* and that this is tight for online SGD. However, the CSQ lower bound for gradient based methods only shows that n ≳ d^(k*/2) samples are necessary. In this work, we close the gap between the upper and lower bounds by showing that online SGD on a smoothed loss learns w* with n ≳ d^(k*/2) samples. We also draw connections to statistical analyses of tensor PCA and to the implicit regularization effects of minibatch SGD on empirical losses.
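To make the quantities in the abstract concrete, the following is a minimal Python sketch, not the authors' code: it estimates the information exponent k* of a link function as the index of its first nonzero Hermite coefficient under the standard Gaussian, and runs one-pass (online) SGD on the sphere where each stochastic gradient of a correlation loss is averaged over random tangent perturbations of w. The perturbation-averaged gradient is an illustrative stand-in for the paper's smoothing operator, and all names and constants (information_exponent, smoothed_online_sgd, lam, m, the learning rate) are assumptions made for this example.

```python
# Hedged illustration only: information exponent via Hermite coefficients,
# and online SGD on a crudely "smoothed" correlation loss.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss


def information_exponent(sigma, max_k=10, tol=1e-8):
    """Index of the first nonzero probabilists' Hermite coefficient of sigma
    under the standard Gaussian measure (the information exponent k*)."""
    nodes, weights = hermegauss(100)          # quadrature for weight e^{-z^2/2}
    weights = weights / np.sqrt(2 * np.pi)    # normalize to the N(0,1) density
    He_prev, He_curr = np.ones_like(nodes), nodes.copy()   # He_0, He_1
    for k in range(1, max_k + 1):
        c_k = np.sum(weights * sigma(nodes) * He_curr) / math.factorial(k)
        if abs(c_k) > tol:
            return k
        # recurrence: He_{k+1}(z) = z * He_k(z) - k * He_{k-1}(z)
        He_prev, He_curr = He_curr, nodes * He_curr - k * He_prev
    return None


def smoothed_online_sgd(sigma, dsigma, w_star, n, lr=0.005, lam=0.5, m=8, seed=0):
    """One pass of spherical online SGD on y = sigma(w_star . x), x ~ N(0, I_d),
    using the correlation loss -y * sigma(w . x).  Each stochastic gradient is
    averaged over m random tangent perturbations of w, a crude smoothing of the
    landscape (not the paper's exact smoothing operator)."""
    rng = np.random.default_rng(seed)
    d = w_star.shape[0]
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(n):
        x = rng.standard_normal(d)            # fresh sample each step (online SGD)
        y = sigma(w_star @ x)
        grad = np.zeros(d)
        for _ in range(m):
            z = rng.standard_normal(d)
            z -= (z @ w) * w                  # perturb orthogonally to w ...
            z /= np.linalg.norm(z)
            wp = w + lam * z
            wp /= np.linalg.norm(wp)          # ... staying on the unit sphere
            grad += -y * dsigma(wp @ x) * x / m
        w -= lr * grad
        w /= np.linalg.norm(w)                # project back to the sphere
    return w


if __name__ == "__main__":
    he2 = lambda z: z ** 2 - 1                # second Hermite polynomial, so k* = 2
    print(information_exponent(he2))          # -> 2
    d = 25
    w_star = np.eye(d)[0]
    w_hat = smoothed_online_sgd(he2, lambda z: 2 * z, w_star, n=20000)
    print(abs(w_hat @ w_star))                # alignment with w*; should grow toward 1 with enough samples
```

The demo uses the link σ(z) = z² − 1, whose first nonzero Hermite coefficient is at index 2, so the printed information exponent is 2; the reported alignment with w* is only indicative, since the step size, smoothing radius, and sample budget are arbitrary choices for this sketch.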

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: Dec 10 2023 – Dec 16 2023

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
