TY - GEN

T1 - Message-passing algorithms and improved LP decoding

AU - Arora, Sanjeev

AU - Daskalakis, Constantinos

AU - Steurer, David

PY - 2009

Y1 - 2009

N2 - Linear programming decoding for low-density parity check codes (and related domains such as compressed sensing) has received increased attention over recent years because of its practical performance (coming close to that of iterative decoding algorithms) and its amenability to finite-blocklength analysis. Several works starting with the work of Feldman et al. showed how to analyze LP decoding using properties of expander graphs. This line of analysis works only for low error rates, about a couple of orders of magnitude lower than the empirically observed performance. It is possible to do better for the case of random noise, as shown by Daskalakis et al. and by Koetter and Vontobel. Building on the work of Koetter and Vontobel, we obtain a novel understanding of LP decoding, which allows us to establish a 0.05-fraction of correctable errors for rate-1/2 codes; this comes very close to the performance of iterative decoders and is significantly higher than the best previously noted correctable bit error rate for LP decoding. Unlike other techniques, our analysis works directly with the primal linear program and exploits an explicit connection between LP decoding and message-passing algorithms. An interesting byproduct of our method is a notion of a "locally optimal" solution that we show is always globally optimal (i.e., it is the nearest codeword). Such a solution can in fact be found in near-linear time by a "re-weighted" version of the min-sum algorithm, obviating the need for linear programming. Our analysis implies, in particular, that this re-weighted version of the min-sum decoder corrects up to a 0.05-fraction of errors.

KW - LDPC codes

KW - LP decoding

KW - Message-passing algorithms

KW - Min-sum algorithm

UR - http://www.scopus.com/inward/record.url?scp=70350700914&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=70350700914&partnerID=8YFLogxK

U2 - 10.1145/1536414.1536418

DO - 10.1145/1536414.1536418

M3 - Conference contribution

AN - SCOPUS:70350700914

SN - 9781605585062

T3 - Proceedings of the Annual ACM Symposium on Theory of Computing

SP - 3

EP - 12

BT - STOC'09 - Proceedings of the 2009 ACM International Symposium on Theory of Computing

T2 - 41st Annual ACM Symposium on Theory of Computing, STOC '09

Y2 - 31 May 2009 through 2 June 2009

ER -