Approximation hardness for a class of sparse optimization problems

Yichen Chen, Yinyu Ye, Mengdi Wang

Research output: Contribution to journal › Article › peer-review

14 Scopus citations


In this paper, we consider three typical optimization problems with a convex loss function and a nonconvex sparse penalty or constraint. For the sparse penalized problem, we prove that finding an O(n^(c1) · d^(c2))-optimal solution to an n × d problem is strongly NP-hard for any c1, c2 ∈ [0, 1) such that c1 + c2 < 1. For two constrained versions of the sparse optimization problem, we show that it is intractable to approximately compute a solution path associated with increasing values of some tuning parameter. The hardness results apply to a broad class of loss functions and sparse penalties. They suggest that one cannot even approximately solve these three problems in polynomial time, unless P = NP.
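To make the combinatorial nature of these problems concrete, the following is a minimal illustration (not the paper's formulation or algorithm): the L0-constrained least-squares problem min ‖Ax − b‖² subject to ‖x‖₀ ≤ k, solved exactly by enumerating all C(d, k) candidate supports. The exponential cost of this exhaustive search is consistent with the hardness results summarized in the abstract. All function and variable names here are illustrative.

```python
# Hedged sketch: exact L0-constrained least squares via brute-force
# support enumeration. Cost grows combinatorially in d and k, in line
# with the NP-hardness of sparse optimization discussed above.
from itertools import combinations
import numpy as np

def l0_least_squares(A, b, k):
    """Minimize ||Ax - b||^2 subject to ||x||_0 <= k by trying every support."""
    n, d = A.shape
    best_val, best_x = np.inf, np.zeros(d)
    for size in range(k + 1):
        for support in combinations(range(d), size):
            support = list(support)
            x = np.zeros(d)
            if support:
                # Least-squares fit restricted to the chosen columns.
                coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
                x[support] = coef
            val = np.sum((A @ x - b) ** 2)
            if val < best_val:
                best_val, best_x = val, x
    return best_x, best_val

# Small noiseless instance with a planted 2-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [3.0, -2.0]
b = A @ x_true
x_hat, val = l0_least_squares(A, b, k=2)
```

On this noiseless instance the enumeration recovers the planted support exactly, but it examines every subset of columns of size at most k, which is infeasible beyond small d.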

Original language: English (US)
Journal: Journal of Machine Learning Research
State: Published - Feb 1 2019

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability


Keywords

  • Computational complexity
  • NP-hardness
  • Nonconvex optimization
  • Sparsity
  • Variable selection


