## Abstract

In this paper, we consider three typical optimization problems with a convex loss function and a nonconvex sparse penalty or constraint. For the sparse penalized problem, we prove that finding an O(n^{c₁}d^{c₂})-optimal solution to an n × d problem is strongly NP-hard for any c₁, c₂ ∈ [0, 1) such that c₁ + c₂ < 1. For two constrained versions of the sparse optimization problem, we show that it is intractable to approximately compute a solution path associated with increasing values of some tuning parameter. The hardness results apply to a broad class of loss functions and sparse penalties. They suggest that one cannot even approximately solve these three problems in polynomial time, unless P = NP.
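The abstract does not state the problem explicitly; a common form of the sparse penalized problem it describes (convex loss plus nonconvex sparse penalty) can be sketched as follows, where the specific symbols f, A, b, λ, and p are illustrative assumptions rather than notation from the paper:

```latex
% Sparse penalized problem: convex loss + nonconvex separable sparse penalty.
% Here f is a convex loss (e.g., f(z) = \|z\|_2^2), A \in \mathbb{R}^{n \times d},
% b \in \mathbb{R}^n, \lambda > 0 is a tuning parameter, and p is a nonconvex
% sparsity-inducing penalty (e.g., \ell_q with 0 < q < 1, SCAD, or MCP).
\min_{x \in \mathbb{R}^d} \; f(Ax - b) + \lambda \sum_{i=1}^{d} p\big(|x_i|\big)
```

The hardness result concerns approximating the optimal value of such problems: even an additive O(n^{c₁}d^{c₂}) error is out of reach in polynomial time unless P = NP.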

| Original language | English (US) |
|---|---|
| Journal | Journal of Machine Learning Research |
| Volume | 20 |
| State | Published - Feb 1 2019 |

## All Science Journal Classification (ASJC) codes

- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence

## Keywords

- Computational complexity
- NP-hardness
- Nonconvex optimization
- Sparsity
- Variable selection