Abstract
Regularized M-estimators are used in diverse areas of science and engineering to fit high-dimensional models with some low-dimensional structure. Usually, the low-dimensional structure is encoded by the presence of the (unknown) parameters in some low-dimensional model subspace. In such settings, it is desirable for estimates of the model parameters to be model selection consistent: that is, the estimates themselves fall in the model subspace. We develop a general framework for establishing consistency and model selection consistency of regularized M-estimators and show how it applies to some special cases of interest in statistical learning. Our analysis identifies two key properties of regularized M-estimators, referred to as geometric decomposability and irrepresentability, that ensure the estimators are consistent and model selection consistent.
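The setup the abstract describes can be made concrete with a short sketch in general notation. This is a minimal illustration rather than notation quoted from the paper: the loss $\ell$, penalty $\rho$, regularization parameter $\lambda_n$, and model subspace $\mathcal{M}$ are assumed symbols.

```latex
% A regularized M-estimator: minimize an empirical loss plus a penalty.
% \ell is the loss on data Z_1, ..., Z_n; \rho is the penalty;
% \lambda_n > 0 is the regularization parameter.
\hat{\theta} \in \operatorname*{arg\,min}_{\theta \in \mathbb{R}^p}
  \; \ell(\theta; Z_1, \dots, Z_n) + \lambda_n \, \rho(\theta)

% Example penalties from the keywords below: the lasso uses
% \rho(\theta) = \|\theta\|_1; nuclear norm minimization uses
% \rho(\Theta) = \|\Theta\|_* (the sum of singular values).

% Model selection consistency: the estimate lands in the
% low-dimensional model subspace \mathcal{M} with probability
% tending to one,
\Pr\bigl(\hat{\theta} \in \mathcal{M}\bigr) \longrightarrow 1
  \quad \text{as } n \to \infty.
```

For the lasso, for instance, $\mathcal{M}$ is the set of vectors supported on the true sparsity pattern, so model selection consistency amounts to support recovery.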
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 608-642 |
| Number of pages | 35 |
| Journal | Electronic Journal of Statistics |
| Volume | 9 |
| DOIs | |
| State | Published - 2015 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Statistics and Probability
- Statistics, Probability and Uncertainty
Keywords
- Generalized lasso
- Geometrically decomposable penalties
- Group lasso
- Lasso
- Nuclear norm minimization
- Regularized M-estimator