TY - GEN
T1 - How to escape saddle points efficiently
AU - Jin, Chi
AU - Ge, Rong
AU - Netrapalli, Praneeth
AU - Kakade, Sham M.
AU - Jordan, Michael I.
N1 - Publisher Copyright:
© Copyright 2017 by the author(s).
PY - 2017
Y1 - 2017
AB - This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number of iterations which depends only poly-logarithmically on dimension (i.e., it is almost "dimension-free"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.
UR - http://www.scopus.com/inward/record.url?scp=85041645943&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041645943&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85041645943
T3 - 34th International Conference on Machine Learning, ICML 2017
SP - 2727
EP - 2752
BT - 34th International Conference on Machine Learning, ICML 2017
PB - International Machine Learning Society (IMLS)
T2 - 34th International Conference on Machine Learning, ICML 2017
Y2 - 6 August 2017 through 11 August 2017
ER -