TY - GEN

T1 - Compressed sensing under optimal quantization

AU - Kipnis, Alon

AU - Reeves, Galen

AU - Eldar, Yonina C.

AU - Goldsmith, Andrea J.

N1 - Publisher Copyright:
© 2017 IEEE.

PY - 2017/8/9

Y1 - 2017/8/9

N2 - We consider the problem of recovering a sparse vector from a quantized or lossy-compressed version of its noisy random linear projections. We characterize the minimal distortion in this recovery as a function of the sampling ratio, the sparsity rate, the noise intensity, and the total number of bits in the quantized representation. We first derive a single-letter expression that can be seen as the indirect distortion-rate function of the sparse source observed through a Gaussian channel whose signal-to-noise ratio is derived from these parameters. Under the replica symmetry postulate, we prove that there exists a quantization scheme that attains this expression in the asymptotic regime of large system dimensions. In addition, we prove a converse demonstrating that the MMSE in estimating any fixed sub-block of the source from the quantized measurements at a fixed number of bits does not exceed this expression as the system dimensions go to infinity. Thus, under these conditions, the expression we derive describes the excess distortion incurred in encoding the source vector from its noisy random linear projections in lieu of the full source information.

AB - We consider the problem of recovering a sparse vector from a quantized or lossy-compressed version of its noisy random linear projections. We characterize the minimal distortion in this recovery as a function of the sampling ratio, the sparsity rate, the noise intensity, and the total number of bits in the quantized representation. We first derive a single-letter expression that can be seen as the indirect distortion-rate function of the sparse source observed through a Gaussian channel whose signal-to-noise ratio is derived from these parameters. Under the replica symmetry postulate, we prove that there exists a quantization scheme that attains this expression in the asymptotic regime of large system dimensions. In addition, we prove a converse demonstrating that the MMSE in estimating any fixed sub-block of the source from the quantized measurements at a fixed number of bits does not exceed this expression as the system dimensions go to infinity. Thus, under these conditions, the expression we derive describes the excess distortion incurred in encoding the source vector from its noisy random linear projections in lieu of the full source information.

UR - http://www.scopus.com/inward/record.url?scp=85034063445&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85034063445&partnerID=8YFLogxK

U2 - 10.1109/ISIT.2017.8006909

DO - 10.1109/ISIT.2017.8006909

M3 - Conference contribution

AN - SCOPUS:85034063445

T3 - IEEE International Symposium on Information Theory - Proceedings

SP - 2148

EP - 2152

BT - 2017 IEEE International Symposium on Information Theory, ISIT 2017

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - 2017 IEEE International Symposium on Information Theory, ISIT 2017

Y2 - 25 June 2017 through 30 June 2017

ER -