TY - JOUR
T1 - Deepcode: Feedback codes via deep learning
T2 - 32nd Conference on Neural Information Processing Systems, NeurIPS 2018
AU - Kim, Hyeji
AU - Jiang, Yihan
AU - Kannan, Sreeram
AU - Oh, Sewoong
AU - Viswanath, Pramod
N1 - Funding Information:
We thank Shrinivas Kudekar and Saurabh Tavildar for helpful discussions and providing references to the state-of-the-art feedforward codes. We thank Dina Katabi for a detailed discussion that prompted work on system implementation. This work is in part supported by National Science Foundation awards CCF-1553452 and RI-1815535, Army Research Office under grant number W911NF-18-1-0384, and Amazon Catalyst award. Y. Jiang and S. Kannan would also like to acknowledge NSF awards 1651236 and 1703403.
Publisher Copyright:
© 2018 Curran Associates Inc. All rights reserved.
PY - 2018
Y1 - 2018
N2 - The design of codes for communicating reliably over a statistically well-defined channel is an important endeavor involving deep mathematical research and wide-ranging practical applications. In this work, we present the first family of codes obtained via deep learning, which significantly beats state-of-the-art codes designed over several decades of research. The communication channel under consideration is the Gaussian noise channel with feedback, whose study was initiated by Shannon; feedback is known theoretically to improve the reliability of communication, but no practical codes that do so have ever been successfully constructed. We break this logjam by integrating information-theoretic insights harmoniously with recurrent-neural-network-based encoders and decoders to create novel codes that outperform known codes by three orders of magnitude in reliability. We also demonstrate several desirable properties of the codes: (a) generalization to larger block lengths; (b) composability with known codes; (c) adaptation to practical constraints. This result also presents broader ramifications for coding theory: even when the channel has a clear mathematical model, deep learning methodologies, when combined with channel-specific information-theoretic insights, can potentially beat state-of-the-art codes constructed over decades of mathematical research.
UR - http://www.scopus.com/inward/record.url?scp=85064808608&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85064808608&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85064808608
SN - 1049-5258
VL - 2018-December
SP - 9436
EP - 9446
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 2 December 2018 through 8 December 2018
ER -