Escaping from saddle points: Online stochastic gradient for tensor decomposition

Rong Ge, Furong Huang, Chi Jin, Yang Yuan

Research output: Contribution to journal › Conference article › peer-review

353 Scopus citations

Abstract

We analyze stochastic gradient descent for optimizing non-convex functions. For many non-convex functions the goal is to find a reasonable local minimum, and the main concern is that gradient updates get trapped in saddle points. In this paper we identify a strict saddle property for non-convex problems that allows for efficient optimization. Using this property we show that, from an arbitrary starting point, stochastic gradient descent converges to a local minimum in a polynomial number of iterations. To the best of our knowledge, this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis can be applied to orthogonal tensor decomposition, which is widely used in learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has the strict saddle property. As a result we obtain the first online algorithm for orthogonal tensor decomposition with a global convergence guarantee.
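As an illustration of the mechanism the abstract describes, the sketch below runs stochastic gradient descent with injected spherical noise on a toy two-dimensional function whose origin is a strict saddle point (one direction of strictly negative curvature). This is a minimal illustrative sketch, not the paper's algorithm or analysis; the toy objective, step size, and noise scale are assumptions chosen only for the demonstration.

import numpy as np

def noisy_sgd(grad, x0, step_size=0.01, noise_scale=0.1, iters=10000, seed=0):
    # Gradient descent with added unit-sphere noise; the noise lets the iterate
    # drift into a negative-curvature direction and leave a strict saddle point.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        noise = rng.normal(size=x.shape)
        noise *= noise_scale / max(np.linalg.norm(noise), 1e-12)
        x = x - step_size * (grad(x) + noise)
    return x

# Toy objective (an assumption for illustration): f(x, y) = (x^2 - 1)^2 + y^2.
# The origin is a strict saddle point; the local minima are at (+/-1, 0).
grad_f = lambda v: np.array([4.0 * v[0] * (v[0] ** 2 - 1.0), 2.0 * v[1]])

# Plain gradient descent started exactly at the origin never moves;
# the noisy iterate escapes and ends up near one of the minima.
print(noisy_sgd(grad_f, x0=[0.0, 0.0]))

Starting the loop from any other point gives the same qualitative behavior: polynomially many noisy gradient steps suffice to reach a neighborhood of a local minimum, which is the guarantee the paper formalizes under the strict saddle property.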

Original language: English (US)
Journal: Journal of Machine Learning Research
Volume: 40
Issue number: 2015
State: Published - 2015
Externally published: Yes
Event: 28th Conference on Learning Theory, COLT 2015 - Paris, France
Duration: Jul 2, 2015 - Jul 6, 2015

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability

Keywords

  • Non-convex optimization
  • Saddle points
  • Stochastic gradient
  • Tensor decomposition
