TY - JOUR
T1 - Revisiting compressed sensing
T2 - exploiting the efficiency of simplex and sparsification methods
AU - Vanderbei, Robert
AU - Lin, Kevin
AU - Liu, Han
AU - Wang, Lie
N1 - Funding Information:
The first author’s research is supported by ONR Award N00014-13-1-0093, the third author’s by NSF Grant III-1116730, and the fourth author’s by NSF Grant DMS-1005539.
Publisher Copyright:
© 2016, Springer-Verlag Berlin Heidelberg and The Mathematical Programming Society.
PY - 2016/9/1
Y1 - 2016/9/1
N2 - We propose two approaches to solve large-scale compressed sensing problems. The first approach uses the parametric simplex method to recover very sparse signals by taking a small number of simplex pivots, while the second approach reformulates the problem using Kronecker products to achieve faster computation via a sparser problem formulation. In particular, we focus on the computational aspects of these methods in compressed sensing. For the first approach, if the true signal is very sparse and we initialize our solution to be the zero vector, then a customized parametric simplex method usually takes a small number of iterations to converge. Our numerical studies show that this approach is 10 times faster than state-of-the-art methods for recovering very sparse signals. The second approach can be used when the sensing matrix is the Kronecker product of two smaller matrices. We show that the best-known sufficient condition for the Kronecker compressed sensing (KCS) strategy to obtain a perfect recovery is more restrictive than the corresponding condition when using the first approach. However, KCS can be formulated as a linear program with a very sparse constraint matrix, whereas the first approach involves a completely dense constraint matrix. Hence, algorithms that benefit from sparse problem representation, such as interior point methods (IPMs), are expected to have computational advantages for the KCS problem. We numerically demonstrate that KCS combined with IPMs is up to 10 times faster than vanilla IPMs and state-of-the-art methods such as ℓ1_ℓs and Mirror Prox regardless of the sparsity level or problem size.
AB - We propose two approaches to solve large-scale compressed sensing problems. The first approach uses the parametric simplex method to recover very sparse signals by taking a small number of simplex pivots, while the second approach reformulates the problem using Kronecker products to achieve faster computation via a sparser problem formulation. In particular, we focus on the computational aspects of these methods in compressed sensing. For the first approach, if the true signal is very sparse and we initialize our solution to be the zero vector, then a customized parametric simplex method usually takes a small number of iterations to converge. Our numerical studies show that this approach is 10 times faster than state-of-the-art methods for recovering very sparse signals. The second approach can be used when the sensing matrix is the Kronecker product of two smaller matrices. We show that the best-known sufficient condition for the Kronecker compressed sensing (KCS) strategy to obtain a perfect recovery is more restrictive than the corresponding condition when using the first approach. However, KCS can be formulated as a linear program with a very sparse constraint matrix, whereas the first approach involves a completely dense constraint matrix. Hence, algorithms that benefit from sparse problem representation, such as interior point methods (IPMs), are expected to have computational advantages for the KCS problem. We numerically demonstrate that KCS combined with IPMs is up to 10 times faster than vanilla IPMs and state-of-the-art methods such as ℓ1_ℓs and Mirror Prox regardless of the sparsity level or problem size.
KW - Compressed sensing
KW - Interior-point methods
KW - Linear programming
KW - Parametric simplex method
KW - Sparse signals
UR - http://www.scopus.com/inward/record.url?scp=84982737849&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84982737849&partnerID=8YFLogxK
U2 - 10.1007/s12532-016-0105-y
DO - 10.1007/s12532-016-0105-y
M3 - Article
AN - SCOPUS:84982737849
SN - 1867-2949
VL - 8
SP - 253
EP - 269
JO - Mathematical Programming Computation
JF - Mathematical Programming Computation
IS - 3
ER -