TY - JOUR
T1 - Learning to Decode Protograph LDPC Codes
AU - Dai, Jincheng
AU - Tan, Kailin
AU - Si, Zhongwei
AU - Niu, Kai
AU - Chen, Mingzhe
AU - Poor, H. Vincent
AU - Cui, Shuguang
N1 - Funding Information:
Manuscript received July 16, 2020; revised November 17, 2020; accepted February 8, 2021. Date of publication May 10, 2021; date of current version June 17, 2021. This work was supported in part by the National Natural Science Foundation of China under Grant 92067202, Grant 62001049, Grant 62071058, and Grant 61971062; in part by the National Key Research and Development Program of China under Grant 2018YFE0205501 and Grant 2018YFB1800800; in part by the China Post-Doctoral Science Foundation under Grant 2019M660032; in part by Qualcomm Inc.; in part by the U.S. National Science Foundation under Grant CCF-1908308; in part by the Key Area Research and Development Program of Guangdong Province under Grant 2018B030338001; in part by the Shenzhen Outstanding Talents Training Fund; and in part by the Guangdong Research Project under Grant 2017ZT07X152. (Corresponding authors: Jincheng Dai; Kai Niu.) Jincheng Dai, Kailin Tan, Zhongwei Si, and Kai Niu are with the Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: daijincheng@bupt.edu.cn; tankailin@bupt.edu.cn; sizhongwei@bupt.edu.cn; niukai@bupt.edu.cn).
Publisher Copyright:
© 1983-2012 IEEE.
PY - 2021/7
Y1 - 2021/7
AB - The recent development of deep learning methods provides a new approach to optimizing the belief propagation (BP) decoding of linear codes. However, a limitation of existing works is that the scale of the neural networks grows rapidly with the code length, so they can only support short to moderate code lengths. From a practical point of view, we propose a high-performance neural min-sum (MS) decoding method that makes full use of the lifting structure of protograph low-density parity-check (LDPC) codes. In this way, the size of the parameter array in each layer of the neural decoder equals only the number of edge-types, regardless of the code length. In particular, for protograph LDPC codes, the proposed neural MS decoder is constructed such that identical parameters are shared by the bundle of edges derived from the same edge-type. To reduce the complexity and overcome the vanishing gradient problem in training the proposed neural MS decoder, an iteration-by-iteration (i.e., layer-by-layer in neural networks) greedy training method is proposed. As a result, the proposed neural MS decoder tends to converge faster during optimization, which aligns with the early termination mechanism widely used in practice. To further enhance the generalization ability of the proposed neural MS decoder, a code length/rate compatible training method is proposed, which randomly selects samples from a set of codes lifted from the same base code. As a theoretical performance evaluation tool, a trajectory-based extrinsic information transfer (T-EXIT) chart is developed for various decoders. Both T-EXIT and simulation results show that the optimized MS decoding can provide faster convergence and up to 1 dB of gain over plain MS decoding and its variants, with only slightly increased complexity. In addition, it can even outperform the sum-product algorithm for some short codes.
KW - 5G
KW - Protograph LDPC codes
KW - iteration-by-iteration training
KW - neural min-sum decoder
KW - parameter-sharing
UR - http://www.scopus.com/inward/record.url?scp=85105853239&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85105853239&partnerID=8YFLogxK
U2 - 10.1109/JSAC.2021.3078488
DO - 10.1109/JSAC.2021.3078488
M3 - Article
AN - SCOPUS:85105853239
SN - 0733-8716
VL - 39
SP - 1983
EP - 1999
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
IS - 7
M1 - 9427170
ER -