Sampling can be faster than optimization

Yi An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, Michael I. Jordan

Research output: Contribution to journal › Article › peer-review

80 Scopus citations

Abstract

Optimization algorithms and Monte Carlo sampling algorithms have provided the computational foundations for the rapid growth in applications of statistical machine learning in recent years. There is, however, limited theoretical understanding of the relationships between these two kinds of methodology, and limited understanding of their relative strengths and weaknesses. Moreover, existing results have been obtained primarily in the setting of convex functions (for optimization) and log-concave functions (for sampling). In this setting, where local properties determine global properties, optimization algorithms are, unsurprisingly, more efficient computationally than sampling algorithms. We instead examine a class of nonconvex objective functions that arise in mixture modeling and multistable systems. In this nonconvex setting, we find that the computational complexity of sampling algorithms scales linearly with the model dimension, while that of optimization algorithms scales exponentially.
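For readers unfamiliar with the "Langevin Monte Carlo" keyword below, the following is a minimal sketch of the unadjusted Langevin algorithm, the basic gradient-based sampler of the kind studied in this line of work. The separable double-well potential, dimension, step size, and iteration count are illustrative assumptions for this sketch, not the paper's experimental setup.

```python
# Minimal sketch of the unadjusted Langevin algorithm (ULA).
# The potential U and all parameters below are illustrative assumptions.
import numpy as np


def grad_U(x):
    """Gradient of a separable multistable (double-well) potential
    U(x) = sum_i (x_i^2 - 1)^2 / 4, chosen only for illustration."""
    return x * (x ** 2 - 1.0)


def langevin_monte_carlo(dim, n_steps, step_size, rng):
    """ULA update: x_{k+1} = x_k - h * grad_U(x_k) + sqrt(2h) * N(0, I),
    which targets the distribution proportional to exp(-U(x))."""
    x = rng.standard_normal(dim)
    samples = np.empty((n_steps, dim))
    for k in range(n_steps):
        noise = rng.standard_normal(dim)
        x = x - step_size * grad_U(x) + np.sqrt(2.0 * step_size) * noise
        samples[k] = x
    return samples


rng = np.random.default_rng(0)
samples = langevin_monte_carlo(dim=10, n_steps=5000, step_size=1e-2, rng=rng)
print(samples.mean(axis=0))  # per-coordinate sample mean under the target
```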

Original language: English (US)
Pages (from-to): 20881-20885
Number of pages: 5
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 116
Issue number: 42
DOIs
State: Published - Oct 15 2019
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • General

Keywords

  • Computational complexity
  • Langevin Monte Carlo
  • Nonconvex optimization
