Adaptive randomized dimension reduction on massive data

Gregory Darnell, Stoyan Georgiev, Sayan Mukherjee, Barbara E. Engelhardt

Research output: Contribution to journal › Article › peer-review


Abstract

The scalability of statistical estimators is of increasing importance in modern applications. One approach to implementing scalable algorithms is to compress data into a low-dimensional latent space using dimension reduction methods. In this paper, we develop an approach for dimension reduction that exploits the assumption of low-rank structure in high-dimensional data to gain both computational and statistical advantages. We adapt recent randomized low-rank approximation algorithms to provide an efficient solution to principal component analysis (PCA), and we use this efficient solver to improve estimation in large-scale linear mixed models (LMMs) for association mapping in statistical genomics. A key observation in this paper is that randomization serves a dual role, improving both computational and statistical performance by implicitly regularizing the covariance matrix estimate of the random effect in an LMM. These statistical and computational advantages are highlighted in our experiments on simulated data and large-scale genomic studies.
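
As a rough illustration of the randomized low-rank machinery the abstract refers to, the sketch below shows a generic randomized PCA built on a Halko-Martinsson-Tropp-style range finder. It is a minimal example under assumed defaults (the function name randomized_pca, the oversampling and power-iteration parameters, and the column centering are illustrative choices), not the paper's adaptive algorithm or its LMM application.

import numpy as np

def randomized_pca(X, k, n_oversample=10, n_power_iter=2, seed=0):
    """Approximate top-k PCA of X (n samples x p features) using a
    randomized range finder; a generic sketch, not the paper's
    exact adaptive method."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)              # center each feature
    ell = k + n_oversample               # oversampled sketch size (<= min(n, p))
    Omega = rng.standard_normal((Xc.shape[1], ell))
    Y = Xc @ Omega                       # random projection samples the range of Xc
    for _ in range(n_power_iter):        # power iterations sharpen the spectrum when
        Y = Xc @ (Xc.T @ Y)              # singular values decay slowly (re-orthonormalizing
                                         # here would improve stability; omitted for brevity)
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for the approximate range
    B = Q.T @ Xc                         # small ell x p matrix carrying the top spectrum
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    components = Vt[:k]                  # top-k principal directions
    scores = (Q @ Ub[:, :k]) * s[:k]     # PC scores of the centered data
    return components, scores, s[:k]

The oversampled sketch size k + n_oversample and the few power iterations trade a small amount of extra computation for a sharper approximation of the leading singular subspace, which is the same efficiency-versus-accuracy trade-off the abstract highlights.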

Original language: English (US)
Journal: Journal of Machine Learning Research
Volume: 18
State: Published - Nov 1 2017

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability

Keywords

  • Dimension reduction
  • Generalized eigendecomposition
  • Genomics
  • Krylov subspace methods
  • Linear mixed models
  • Low-rank
  • Random projections
  • Randomized algorithms
  • Supervised
