On model selection consistency of regularized M-estimators

Jason D. Lee, Yuekai Sun, Jonathan E. Taylor

Research output: Contribution to journal › Article › peer-review

34 Scopus citations

Abstract

Regularized M-estimators are used in diverse areas of science and engineering to fit high-dimensional models with some low-dimensional structure. Usually the low-dimensional structure is encoded by the presence of the (unknown) parameters in some low-dimensional model subspace. In such settings, it is desirable for estimates of the model parameters to be model selection consistent: the estimates also fall in the model subspace. We develop a general framework for establishing consistency and model selection consistency of regularized M-estimators and show how it applies to some special cases of interest in statistical learning. Our analysis identifies two key properties of regularized M-estimators, referred to as geometric decomposability and irrepresentability, that ensure the estimators are consistent and model selection consistent.
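The lasso is the canonical example of a regularized M-estimator covered by this framework: the low-dimensional structure is sparsity, and model selection consistency means the estimated support matches the true support. As a hedged illustration (not the paper's method), the sketch below fits a lasso by proximal gradient descent (ISTA) on synthetic data whose true coefficient vector lies in a sparse model subspace; the regularization parameter `lam = 0.1` and problem sizes are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse regression: only the first 3 of 50 coefficients are nonzero,
# so the "model subspace" is the set of vectors supported on {0, 1, 2}.
n, p = 200, 50
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -3.0, 1.5]
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """ISTA for the lasso: minimize (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    # Step size 1/L, where L bounds the Lipschitz constant of the smooth part.
    L = np.linalg.norm(X, 2) ** 2 / n
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

beta_hat = lasso_ista(X, y, lam=0.1)
support = np.flatnonzero(np.abs(beta_hat) > 1e-8)
print(support)  # when the irrepresentability-type condition holds, this recovers {0, 1, 2}
```

When the design matrix satisfies an irrepresentability-type condition, as the abstract discusses, the estimated support coincides with the true one; otherwise the lasso may select spurious variables even as the estimate remains consistent in norm.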

Original language: English (US)
Pages (from-to): 608-642
Number of pages: 35
Journal: Electronic Journal of Statistics
Volume: 9
DOIs
State: Published - 2015
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

Keywords

  • Generalized lasso
  • Geometrically decomposable penalties
  • Group lasso
  • Lasso
  • Nuclear norm minimization
  • Regularized M-estimator
