Finding approximate local minima faster than gradient descent

Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, Tengyu Ma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

133 Scopus citations

Abstract

We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning.
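For readers unfamiliar with the terminology, the distinction the abstract draws rests on the standard convention in this line of work (the abstract itself does not spell out the constants, so the exact form below is an assumption following the usual Nesterov–Polyak-style definition): an approximate local minimum is a point where the gradient is small and the Hessian has no strongly negative eigenvalue, whereas a critical point only requires a small gradient. A minimal sketch in LaTeX, with $\epsilon$ the accuracy parameter and $\rho$ the Lipschitz constant of the Hessian:

\[
\|\nabla f(x)\| \le \epsilon
\quad\text{and}\quad
\nabla^2 f(x) \succeq -\sqrt{\rho\,\epsilon}\, I
\qquad \text{(approximate local minimum)}
\]
\[
\|\nabla f(x)\| \le \epsilon
\qquad \text{(critical point, the guarantee of plain gradient descent)}
\]

The second condition is what rules out saddle points with significant negative curvature, which is why returning such a point is a strictly stronger guarantee than the critical-point guarantee of gradient descent.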

Original language: English (US)
Title of host publication: STOC 2017 - Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing
Editors: Pierre McKenzie, Valerie King, Hamed Hatami
Publisher: Association for Computing Machinery
Pages: 1195-1199
Number of pages: 5
ISBN (Electronic): 9781450345286
DOIs
State: Published - Jun 19 2017
Event: 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017 - Montreal, Canada
Duration: Jun 19 2017 - Jun 23 2017

Publication series

Name: Proceedings of the Annual ACM Symposium on Theory of Computing
Volume: Part F128415
ISSN (Print): 0737-8017

Other

Other: 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017
Country/Territory: Canada
City: Montreal
Period: 6/19/17 - 6/23/17

All Science Journal Classification (ASJC) codes

  • Software

Keywords

  • Cubic regularization
  • Deep learning
  • Non-convex optimization
  • Second-order optimization
