Abstract
We analyze stochastic gradient descent for optimizing non-convex functions. For many non-convex functions the goal is to find a reasonable local minimum, and the main concern is that gradient updates get trapped in saddle points. In this paper we identify the strict saddle property for non-convex problems that allows for efficient optimization. Using this property we show that, from an arbitrary starting point, stochastic gradient descent converges to a local minimum in a polynomial number of iterations. To the best of our knowledge, this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis applies to orthogonal tensor decomposition, which is widely used in learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has the strict saddle property. As a result we obtain the first online algorithm for orthogonal tensor decomposition with a global convergence guarantee.
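The escape mechanism behind these guarantees can be illustrated with a small sketch. Informally, under the strict saddle property every stationary point is either a local minimum or has a Hessian with a strictly negative eigenvalue, so the stochasticity in the gradient updates supplies a direction of escape. The sketch below is illustrative only and is not the paper's algorithm or analysis: `noisy_sgd`, the toy objective, and the step-size and noise parameters are all hypothetical choices for demonstration.

```python
import numpy as np

def noisy_sgd(grad, x0, eta=0.01, sigma=0.1, iters=5000, seed=0):
    """Gradient descent with injected isotropic Gaussian noise.

    The noise stands in for the stochasticity of SGD: at a strict
    saddle point the gradient vanishes, but the Hessian has a strictly
    negative eigenvalue, so random perturbations push the iterate onto
    a descent direction and it escapes toward a local minimum.
    (Hypothetical sketch; not the paper's analyzed algorithm.)
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - eta * (grad(x) + rng.normal(scale=sigma, size=x.shape))
    return x

# Toy objective f(x, y) = x**2 + (y**2 - 1)**2: a strict saddle at the
# origin (Hessian eigenvalues 2 and -4) and local minima at (0, +/-1).
grad_f = lambda v: np.array([2.0 * v[0], 4.0 * v[1] * (v[1] ** 2 - 1.0)])
print(noisy_sgd(grad_f, x0=[0.0, 0.0]))  # lands near (0, 1) or (0, -1)
```

Started exactly at the saddle, deterministic gradient descent would stay there forever; the injected noise kicks the iterate off the unstable direction, after which the negative curvature amplifies the deviation and the iterate settles near one of the local minima.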
| Original language | English (US) |
| --- | --- |
| Journal | Journal of Machine Learning Research |
| Volume | 40 |
| Issue number | 2015 |
| State | Published - 2015 |
| Externally published | Yes |
| Event | 28th Conference on Learning Theory, COLT 2015, Paris, France. Duration: Jul 2 2015 → Jul 6 2015 |
All Science Journal Classification (ASJC) codes
- Software
- Artificial Intelligence
- Control and Systems Engineering
- Statistics and Probability
Keywords
- Non-convex optimization
- Saddle points
- Stochastic gradient
- Tensor decomposition