Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval and matrix completion

Cong Ma, Kaizheng Wang, Yuejie Chi, Yuxin Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Recent years have seen a flurry of activity in designing provably efficient nonconvex optimization procedures for solving statistical estimation problems. For various problems like phase retrieval or low-rank matrix completion, state-of-the-art nonconvex procedures require proper regularization (e.g. trimming, regularized cost, projection) in order to guarantee fast convergence. When it comes to vanilla procedures such as gradient descent, however, prior theory either recommends highly conservative learning rates to avoid overshooting, or completely lacks performance guarantees. This paper uncovers a striking phenomenon in several nonconvex problems: even in the absence of explicit regularization, gradient descent follows a trajectory staying within a basin that enjoys nice geometry, consisting of points incoherent with the sampling mechanism. This "implicit regularization" feature allows gradient descent to proceed in a far more aggressive fashion without overshooting, which in turn results in substantial computational savings. Focusing on two statistical estimation problems, i.e. solving random quadratic systems of equations and low-rank matrix completion, we establish that gradient descent achieves near-optimal statistical and computational guarantees without explicit regularization. As a byproduct, for noisy matrix completion, we demonstrate that gradient descent enables optimal control of both entrywise and spectral-norm errors.
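To make the setting concrete, the sketch below runs vanilla gradient descent (constant step size, no trimming or explicit regularization) on a real-valued random quadratic system, i.e. phase retrieval with Gaussian sampling vectors, starting from a standard spectral initialization. This is only an illustrative toy in NumPy, not the paper's exact algorithm: the problem sizes `n`, `m`, the step size `eta`, and the iteration count are assumptions chosen for a small demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 1000                        # signal dimension, number of measurements (illustrative)
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)       # ground-truth signal, normalized to unit norm
A = rng.standard_normal((m, n))        # Gaussian sampling vectors a_i (rows of A)
y = (A @ x_star) ** 2                  # quadratic measurements y_i = (a_i^T x)^2

# Spectral initialization: leading eigenvector of (1/m) * sum_i y_i a_i a_i^T,
# rescaled so the initial estimate matches the measured signal energy.
Y = (A.T * y) @ A / m
_, V = np.linalg.eigh(Y)               # eigh returns eigenvalues in ascending order
x = V[:, -1] * np.sqrt(np.mean(y))

# Vanilla gradient descent on f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2
# with a constant step size and no explicit regularization.
eta = 0.1                              # assumed constant learning rate
for _ in range(500):
    Ax = A @ x
    grad = (A.T @ ((Ax ** 2 - y) * Ax)) / m
    x -= eta * grad

# Estimation error up to the inherent global sign ambiguity of phase retrieval.
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
```

In this Gaussian toy setup the iterates converge rapidly to the truth (up to sign), consistent with the linear-convergence behavior the abstract describes for unregularized gradient descent.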

Original language: English (US)
Title of host publication: 35th International Conference on Machine Learning, ICML 2018
Editors: Jennifer Dy, Andreas Krause
Publisher: International Machine Learning Society (IMLS)
Pages: 5264-5331
Number of pages: 68
ISBN (Electronic): 9781510867963
State: Published - Jan 1 2018
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: Jul 10 2018 – Jul 15 2018

Publication series

Name: 35th International Conference on Machine Learning, ICML 2018
Volume: 8

Other

Other: 35th International Conference on Machine Learning, ICML 2018
Country: Sweden
City: Stockholm
Period: 7/10/18 – 7/15/18

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software


  • Cite this

Ma, C., Wang, K., Chi, Y., & Chen, Y. (2018). Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval and matrix completion. In J. Dy & A. Krause (Eds.), 35th International Conference on Machine Learning, ICML 2018 (pp. 5264-5331). (35th International Conference on Machine Learning, ICML 2018; Vol. 8). International Machine Learning Society (IMLS).