Abstract
We focus on the task of learning a single index model σ(w* · x) with respect to the isotropic Gaussian distribution in d dimensions. Prior work has shown that the sample complexity of learning w* is governed by the information exponent k* of the link function σ, which is defined as the index of the first nonzero Hermite coefficient of σ. Ben Arous et al. [1] showed that n ≳ d^{k*−1} samples suffice for learning w* and that this is tight for online SGD. However, the CSQ lower bound for gradient-based methods only shows that n ≳ d^{k*/2} samples are necessary. In this work, we close the gap between the upper and lower bounds by showing that online SGD on a smoothed loss learns w* with n ≳ d^{k*/2} samples. We also draw connections to statistical analyses of tensor PCA and to the implicit regularization effects of minibatch SGD on empirical losses.
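The information exponent described above can be checked numerically. The sketch below is not from the paper; it simply estimates the Hermite coefficients E[σ(Z) He_k(Z)] for Z ~ N(0,1) via Gauss–Hermite quadrature (probabilists' normalization) and reports the first index k ≥ 1 at which a coefficient is nonzero. The link functions in the usage examples are hypothetical choices for illustration.

```python
# Minimal sketch (illustrative, not the paper's code): estimate the
# information exponent k* of a link function sigma, i.e., the index of its
# first nonzero Hermite coefficient E[sigma(Z) He_k(Z)], Z ~ N(0,1).
import numpy as np
from numpy.polynomial import hermite_e as H  # probabilists' Hermite He_k


def information_exponent(sigma, k_max=10, tol=1e-8, quad_deg=64):
    """Smallest k >= 1 with |E[sigma(Z) He_k(Z)]| > tol, or None if none found."""
    # Gauss-Hermite_e quadrature integrates f(x) exp(-x^2/2) dx;
    # dividing by sqrt(2*pi) turns it into a standard Gaussian expectation.
    x, w = H.hermegauss(quad_deg)
    norm = np.sqrt(2 * np.pi)
    for k in range(1, k_max + 1):
        he_k = H.hermeval(x, [0] * k + [1])        # He_k evaluated at the nodes
        c_k = np.sum(w * sigma(x) * he_k) / norm   # estimate of E[sigma(Z) He_k(Z)]
        if abs(c_k) > tol:
            return k
    return None


# Hypothetical example links, for illustration only:
print(information_exponent(np.tanh))                               # odd link -> k* = 1
print(information_exponent(lambda z: z ** 2))                      # quadratic -> k* = 2
print(information_exponent(lambda z: H.hermeval(z, [0, 0, 0, 1])))  # He_3 link -> k* = 3
```

With this convention, links with larger k* (e.g., higher-degree Hermite polynomials) are exactly the ones for which the abstract's d^{k*/2} versus d^{k*−1} sample-complexity gap is largest.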
| Original language | English (US) |
| --- | --- |
| Journal | Advances in Neural Information Processing Systems |
| Volume | 36 |
| State | Published - 2023 |
| Event | 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States. Duration: Dec 10 2023 → Dec 16 2023 |
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Information Systems
- Signal Processing