TY - GEN
T1 - Strong NP-hardness for sparse optimization with concave penalty functions
AU - Chen, Yichen
AU - Ge, Dongdong
AU - Wang, Mengdi
AU - Wang, Zizhuo
AU - Ye, Yinyu
AU - Yin, Hao
N1 - Publisher Copyright:
© 2017 by the author(s).
PY - 2017
Y1 - 2017
AB - Consider the regularized sparse minimization problem, which involves empirical sums of loss functions for n data points (each of dimension d) and a nonconvex sparsity penalty. We prove that finding an O(n^{C1} d^{C2})-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any C1, C2 ∈ [0, 1) such that C1 + C2 < 1. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P = NP.
KW - Computational complexity
KW - Concave penalty
KW - NP-hardness
KW - Nonconvex optimization
KW - Sparsity
UR - http://www.scopus.com/inward/record.url?scp=85048414541&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85048414541&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85048414541
T3 - 34th International Conference on Machine Learning, ICML 2017
SP - 1230
EP - 1251
BT - 34th International Conference on Machine Learning, ICML 2017
PB - International Machine Learning Society (IMLS)
T2 - 34th International Conference on Machine Learning, ICML 2017
Y2 - 6 August 2017 through 11 August 2017
ER -