Abstract
We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high-dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost computational performance. Though simple, this modification has a profound impact: APISTA not only enjoys the same theoretical guarantee as PISTA, attaining a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results that were previously unavailable in the literature. Thorough numerical results are provided to back up our theory.
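The paper itself is not reproduced here, but the core idea the abstract describes — an iterative shrinkage-thresholding (proximal gradient) step augmented with a coordinate descent refinement on the active set — can be sketched as follows. This is a hypothetical illustration, not the authors' APISTA: it uses a plain convex $\ell_1$-penalized least-squares objective rather than the paper's nonconvex regularizers and path-following scheme, and all function names are our own.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_with_cd(A, b, lam, n_outer=50, n_cd=5):
    """Sketch: each outer iteration takes one ISTA proximal-gradient
    step, then runs a few coordinate descent sweeps restricted to the
    current active (nonzero) set -- the kind of subroutine the abstract
    credits with boosting PISTA's computational performance."""
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth gradient
    col_sq = (A ** 2).sum(axis=0)       # per-coordinate curvature A_j^T A_j
    for _ in range(n_outer):
        # ISTA step: gradient descent on the smooth part, then prox.
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
        # Coordinate descent refinement on the active set only.
        for _ in range(n_cd):
            for j in np.flatnonzero(x):
                r_j = b - A @ x + A[:, j] * x[j]   # partial residual excluding coordinate j
                x[j] = soft_threshold(A[:, j] @ r_j, lam) / col_sq[j]
    return x
```

Restricting the inner sweeps to the active set keeps each refinement cheap in high dimensions, since sparse iterates touch only a few coordinates per pass.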
Original language | English (US) |
---|---|
Pages (from-to) | 1272-1296 |
Number of pages | 25 |
Journal | Journal of Computational and Graphical Statistics |
Volume | 25 |
Issue number | 4 |
DOIs | |
State | Published - Oct 1 2016 |
All Science Journal Classification (ASJC) codes
- Statistics and Probability
- Discrete Mathematics and Combinatorics
- Statistics, Probability and Uncertainty
Keywords
- Coordinate descent
- High dimensions
- Nonconvex regularization
- Path-following optimization
- Semiparametric graph estimation
- Variable selection