TY - GEN

T1 - Uniform sampling for matrix approximation

AU - Cohen, Michael B.

AU - Lee, Yin Tat

AU - Musco, Cameron

AU - Musco, Christopher

AU - Peng, Richard

AU - Sidford, Aaron

N1 - Copyright:
Copyright 2015 Elsevier B.V., All rights reserved.

PY - 2015/1/11

Y1 - 2015/1/11

AB - Random sampling has become a critical tool in solving massive matrix problems. For linear regression, a small, manageable set of data rows can be randomly selected to approximate a tall, skinny data matrix, improving processing time significantly. For theoretical performance guarantees, each row must be sampled with probability proportional to its statistical leverage score. Unfortunately, leverage scores are difficult to compute. A simple alternative is to sample rows uniformly at random. While this often works, uniform sampling will eliminate critical row information for many natural instances. We take a fresh look at uniform sampling by examining what information it does preserve. Specifically, we show that uniform sampling yields a matrix that, in some sense, well approximates a large fraction of the original. While this weak form of approximation is not enough for solving linear regression directly, it is enough to compute a better approximation. This observation leads to simple iterative row sampling algorithms for matrix approximation that run in input-sparsity time and preserve row structure and sparsity at all intermediate steps. In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.

KW - Leverage scores

KW - Matrix sampling

KW - Randomized numerical linear algebra

KW - Regression

UR - http://www.scopus.com/inward/record.url?scp=84922209704&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84922209704&partnerID=8YFLogxK

U2 - 10.1145/2688073.2688113

DO - 10.1145/2688073.2688113

M3 - Conference contribution

AN - SCOPUS:84922209704

T3 - ITCS 2015 - Proceedings of the 6th Innovations in Theoretical Computer Science

SP - 181

EP - 190

BT - ITCS 2015 - Proceedings of the 6th Innovations in Theoretical Computer Science

PB - Association for Computing Machinery, Inc

T2 - 6th Conference on Innovations in Theoretical Computer Science, ITCS 2015

Y2 - 11 January 2015 through 13 January 2015

ER -