Strong NP-hardness for sparse optimization with concave penalty functions

Yichen Chen, Dongdong Ge, Mengdi Wang, Zizhuo Wang, Yinyu Ye, Hao Yin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Consider the regularized sparse minimization problem, which involves empirical sums of loss functions for n data points (each of dimension d) and a nonconvex sparsity penalty. We prove that finding an O(n^C1 d^C2)-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any C1, C2 ∈ [0, 1) such that C1 + C2 < 1. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P = NP.
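To make the problem class concrete, the objective studied here has the form "empirical sum of losses plus a concave sparsity penalty." The sketch below is an illustrative instance only, assuming a least-squares loss and an l_q "bridge" penalty with 0 < q < 1 (one common concave penalty); it is not the paper's exact formulation.

```python
import numpy as np

def sparse_objective(x, A, b, lam, q=0.5):
    """Illustrative regularized sparse objective (an assumption, not the
    paper's exact form): mean squared loss over n data points plus a
    concave l_q penalty, lam * sum_j |x_j|^q with 0 < q < 1."""
    n = A.shape[0]
    loss = np.sum((A @ x - b) ** 2) / n        # empirical sum of losses
    penalty = lam * np.sum(np.abs(x) ** q)     # concave sparsity penalty
    return loss + penalty

# Example: at x = 0 the penalty vanishes and only the loss remains.
A = np.eye(2)
b = np.ones(2)
print(sparse_objective(np.zeros(2), A, b, lam=1.0))  # loss 1.0, penalty 0.0
```

Because |x_j|^q is concave in |x_j|, the penalty term is nonconvex, which is exactly the structural feature the hardness result exploits.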

Original language: English (US)
Title of host publication: 34th International Conference on Machine Learning, ICML 2017
Publisher: International Machine Learning Society (IMLS)
Pages: 1230-1251
Number of pages: 22
ISBN (Electronic): 9781510855144
State: Published - 2017
Event: 34th International Conference on Machine Learning, ICML 2017 - Sydney, Australia
Duration: Aug 6 2017 - Aug 11 2017

Publication series

Name: 34th International Conference on Machine Learning, ICML 2017
Volume: 2

Other

Other: 34th International Conference on Machine Learning, ICML 2017
Country/Territory: Australia
City: Sydney
Period: 8/6/17 - 8/11/17

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software

Keywords

  • Computational complexity
  • Concave penalty
  • NP-hardness
  • Nonconvex optimization
  • Sparsity
