Abstract
High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality, and they have been widely applied to simultaneously select important variables and estimate their effects in high dimensional statistical inference. In this article, we present a brief account of recent developments in the theory, methods, and implementations of high dimensional variable selection. The field's rapid advances are driven by questions such as: what limits of dimensionality can such methods handle, what is the role of the penalty function, and what are the resulting statistical properties? The properties of nonconcave penalized likelihood and its role in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods.
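The penalized least squares idea mentioned in the abstract can be sketched numerically. The following minimal Python example (an illustration added here, not code from the article) uses the L1 (LASSO) penalty with an orthonormal design, in which case the penalized least squares solution reduces to soft-thresholding the ordinary least squares estimates and thus selects variables (exact zeros) and estimates their effects simultaneously:

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng = np.random.default_rng(0)
n, p = 100, 5
beta = np.array([3.0, -2.0, 0.0, 0.0, 0.0])   # sparse true coefficients

# Orthonormal design columns (X^T X = I_p), so the LASSO solution has
# the closed form beta_hat_j = soft_threshold((X^T y)_j, lam).
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
y = X @ beta + 0.1 * rng.standard_normal(n)

lam = 0.5
beta_hat = soft_threshold(X.T @ y, lam)       # closed-form LASSO fit
support = np.flatnonzero(beta_hat)            # indices of selected variables
print("selected variables:", support)
print("estimated effects:", beta_hat)
```

With a strong signal and small noise, the two truly active variables are retained (with shrunken effect estimates) while the null variables are thresholded exactly to zero; folded-concave penalties such as SCAD modify the thresholding rule to reduce this shrinkage bias on large coefficients.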
Original language | English (US)
---|---
Pages (from-to) | 101-148
Number of pages | 48
Journal | Statistica Sinica
Volume | 20
Issue number | 1
State | Published - Jan 2010
Externally published | Yes
All Science Journal Classification (ASJC) codes
- Statistics and Probability
- Statistics, Probability and Uncertainty
Keywords
- Dimensionality reduction
- Folded-concave penalty
- High dimensionality
- LASSO
- Model selection
- Oracle property
- Penalized least squares
- Penalized likelihood
- SCAD
- Sure independence screening
- Sure screening
- Variable selection