TY - GEN
T1 - Faster eigenvector computation via shift-and-invert preconditioning
AU - Garber, Dan
AU - Hazan, Elad
AU - Jin, Chi
AU - Kakade, Sham M.
AU - Musco, Cameron
AU - Netrapalli, Praneeth
AU - Sidford, Aaron
N1 - Publisher Copyright:
© 2016 by the author(s).
PY - 2016
Y1 - 2016
N2 - We give faster algorithms and improved sample complexities for the fundamental problem of estimating the top eigenvector. Given an explicit matrix A ∈ ℝ^{n×d}, we show how to compute an ϵ-approximate top eigenvector of AᵀA in time Õ([nnz(A) + d·sr(A)/gap²] · log(1/ϵ)). Here nnz(A) is the number of nonzeros in A, sr(A) is the stable rank, and gap is the relative eigengap. We also consider an online setting in which, given a stream of i.i.d. samples from a distribution D with covariance matrix Σ and a vector x₀ which is an O(gap) approximate top eigenvector for Σ, we show how to refine x₀ to an ϵ approximation using O(v(D)/(gap·ϵ)) samples from D. Here v(D) is a natural notion of variance. Combining our algorithm with previous work to initialize x₀, we obtain improved sample complexities and runtimes under a variety of assumptions on D. We achieve our results via a robust analysis of the classic shift-and-invert preconditioning method. This technique lets us reduce eigenvector computation to approximately solving a series of linear systems with fast stochastic gradient methods.
AB - We give faster algorithms and improved sample complexities for the fundamental problem of estimating the top eigenvector. Given an explicit matrix A ∈ ℝ^{n×d}, we show how to compute an ϵ-approximate top eigenvector of AᵀA in time Õ([nnz(A) + d·sr(A)/gap²] · log(1/ϵ)). Here nnz(A) is the number of nonzeros in A, sr(A) is the stable rank, and gap is the relative eigengap. We also consider an online setting in which, given a stream of i.i.d. samples from a distribution D with covariance matrix Σ and a vector x₀ which is an O(gap) approximate top eigenvector for Σ, we show how to refine x₀ to an ϵ approximation using O(v(D)/(gap·ϵ)) samples from D. Here v(D) is a natural notion of variance. Combining our algorithm with previous work to initialize x₀, we obtain improved sample complexities and runtimes under a variety of assumptions on D. We achieve our results via a robust analysis of the classic shift-and-invert preconditioning method. This technique lets us reduce eigenvector computation to approximately solving a series of linear systems with fast stochastic gradient methods.
UR - http://www.scopus.com/inward/record.url?scp=84998611056&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84998611056&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84998611056
T3 - 33rd International Conference on Machine Learning, ICML 2016
SP - 3886
EP - 3894
BT - 33rd International Conference on Machine Learning, ICML 2016
A2 - Weinberger, Kilian Q.
A2 - Balcan, Maria Florina
PB - International Machine Learning Society (IMLS)
T2 - 33rd International Conference on Machine Learning, ICML 2016
Y2 - 19 June 2016 through 24 June 2016
ER -