TY - JOUR
T1 - Understanding Implicit Regularization in Over-Parameterized Single Index Model
AU - Fan, Jianqing
AU - Yang, Zhuoran
AU - Yu, Mengxin
N1 - Funding Information:
Research supported by the NSF grants DMS-1662139 and DMS-1712591, the ONR grant N00014-19-1-2120, and the NIH grant 2R01-GM072611-16.
Publisher Copyright:
© 2022 American Statistical Association.
PY - 2022
Y1 - 2022
N2 - In this article, we leverage over-parameterization to design regularization-free algorithms for the high-dimensional single index model and provide theoretical guarantees for the induced implicit regularization phenomenon. Specifically, we study both vector and matrix single index models where the link function is nonlinear and unknown, the signal parameter is either a sparse vector or a low-rank symmetric matrix, and the response variable can be heavy-tailed. To gain a better understanding of the role played by implicit regularization without excess technicality, we assume that the distribution of the covariates is known a priori. For both the vector and matrix settings, we construct an over-parameterized least-squares loss function by employing the score function transform and a robust truncation step designed specifically for heavy-tailed data. We propose to estimate the true parameter by applying regularization-free gradient descent to the loss function. When the initialization is close to the origin and the stepsize is sufficiently small, we prove that the obtained solution achieves minimax optimal statistical rates of convergence in both the vector and matrix cases. In addition, our experimental results support our theoretical findings and also demonstrate that our methods empirically outperform classical methods with explicit regularization in terms of both ℓ2-statistical rate and variable selection consistency. Supplementary materials for this article are available online.
AB - In this article, we leverage over-parameterization to design regularization-free algorithms for the high-dimensional single index model and provide theoretical guarantees for the induced implicit regularization phenomenon. Specifically, we study both vector and matrix single index models where the link function is nonlinear and unknown, the signal parameter is either a sparse vector or a low-rank symmetric matrix, and the response variable can be heavy-tailed. To gain a better understanding of the role played by implicit regularization without excess technicality, we assume that the distribution of the covariates is known a priori. For both the vector and matrix settings, we construct an over-parameterized least-squares loss function by employing the score function transform and a robust truncation step designed specifically for heavy-tailed data. We propose to estimate the true parameter by applying regularization-free gradient descent to the loss function. When the initialization is close to the origin and the stepsize is sufficiently small, we prove that the obtained solution achieves minimax optimal statistical rates of convergence in both the vector and matrix cases. In addition, our experimental results support our theoretical findings and also demonstrate that our methods empirically outperform classical methods with explicit regularization in terms of both ℓ2-statistical rate and variable selection consistency. Supplementary materials for this article are available online.
KW - High-dimensional models
KW - Implicit regularization
KW - Over-parameterization
KW - Single-index models
UR - http://www.scopus.com/inward/record.url?scp=85127259953&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127259953&partnerID=8YFLogxK
U2 - 10.1080/01621459.2022.2044824
DO - 10.1080/01621459.2022.2044824
M3 - Article
AN - SCOPUS:85127259953
SN - 0162-1459
JO - Journal of the American Statistical Association
JF - Journal of the American Statistical Association
ER -