TY - JOUR
T1 - Algorithmic Analysis and Statistical Estimation of SLOPE via Approximate Message Passing
AU - Bu, Zhiqi
AU - Klusowski, Jason M.
AU - Rush, Cynthia
AU - Su, Weijie J.
N1 - Funding Information:
Manuscript received July 17, 2019; revised May 30, 2020; accepted September 3, 2020. Date of publication September 23, 2020; date of current version December 21, 2020. The work of Jason M. Klusowski was supported in part by the NSF DMS under Grant 1915932 and in part by the NSF TRIPODS DATA-INSPIRE CCF under Grant 1934924. The work of Cynthia Rush and Zhiqi Bu was supported in part by the NSF CCF under Grant 1849883. The work of Weijie J. Su was supported in part by the CAREER DMS under Grant 1847415, and in part by the Wharton Dean’s Research Fund. This article was presented in part at the 2019 Conference on Neural Information Processing Systems. (Corresponding author: Cynthia Rush.) Zhiqi Bu is with the Department of Applied Mathematics and Computational Science, University of Pennsylvania, Philadelphia, PA 19104 USA (e-mail: zbu@sas.upenn.edu).
Publisher Copyright:
© 1963-2012 IEEE.
PY - 2021/1
Y1 - 2021/1
N2 - SLOPE is a relatively new convex optimization procedure for high-dimensional linear regression via the sorted $\ell _{1}$ penalty: the larger the rank of the fitted coefficient, the larger the penalty. This non-separable penalty renders many existing techniques invalid or inconclusive in analyzing the SLOPE solution. In this paper, we develop an asymptotically exact characterization of the SLOPE solution under Gaussian random designs through solving the SLOPE problem using approximate message passing (AMP). This algorithmic approach allows us to approximate the SLOPE solution via the much more amenable AMP iterates. Explicitly, we characterize the asymptotic dynamics of the AMP iterates relying on a recently developed state evolution analysis for non-separable penalties, thereby overcoming the difficulty caused by the sorted $\ell _{1}$ penalty. Moreover, we prove that the AMP iterates converge to the SLOPE solution in an asymptotic sense, and numerical simulations show that the convergence is surprisingly fast. Our proof rests on a novel technique that specifically leverages the SLOPE problem. In contrast to prior literature, our work not only yields an asymptotically sharp analysis but also offers an algorithmic, flexible, and constructive approach to understanding the SLOPE problem.
KW - Approximate message passing (AMP)
KW - high-dimensional regression
KW - sorted ℓ₁ regression
KW - state evolution
UR - http://www.scopus.com/inward/record.url?scp=85098569572&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098569572&partnerID=8YFLogxK
U2 - 10.1109/TIT.2020.3025272
DO - 10.1109/TIT.2020.3025272
M3 - Article
AN - SCOPUS:85098569572
SN - 0018-9448
VL - 67
SP - 506
EP - 537
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
IS - 1
M1 - 9204751
ER -