TY - GEN
T1 - Information theory meets circuit design
T2 - 2011 49th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2011
AU - Grover, Pulkit
AU - Goldsmith, Andrea
AU - Sahai, Anant
AU - Rabaey, Jan
PY - 2011
Y1 - 2011
N2 - It is generally thought that good codes, i.e., codes that operate at rates close to capacity and attain low error probabilities, are sophisticated constructions that require high encoding and decoding circuit power. In this paper, we rigorously show that this intuition is correct by deriving an information-theoretic lower bound on power consumption for encoding circuits using communication-complexity techniques. We first lower bound the "VLSI complexity" - measured as the product A_wires · l^2, where A_wires is the wire area and l is the number of clock cycles in the implementation - for encoding. Using the lower bound on VLSI complexity, we derive a lower bound on the power consumption of any fully-parallel encoding implementation for any code, and show that the consumed power must diverge to infinity as the error probability approaches zero. Further, the speed of this divergence increases as the rate approaches channel capacity. We also provide a refinement of an earlier result on VLSI complexity by El Gamal, Greene and Pang, which derives a lower bound on A_chip · l^2, where A_chip is the entire chip area required for encoding.
AB - It is generally thought that good codes, i.e., codes that operate at rates close to capacity and attain low error probabilities, are sophisticated constructions that require high encoding and decoding circuit power. In this paper, we rigorously show that this intuition is correct by deriving an information-theoretic lower bound on power consumption for encoding circuits using communication-complexity techniques. We first lower bound the "VLSI complexity" - measured as the product A_wires · l^2, where A_wires is the wire area and l is the number of clock cycles in the implementation - for encoding. Using the lower bound on VLSI complexity, we derive a lower bound on the power consumption of any fully-parallel encoding implementation for any code, and show that the consumed power must diverge to infinity as the error probability approaches zero. Further, the speed of this divergence increases as the rate approaches channel capacity. We also provide a refinement of an earlier result on VLSI complexity by El Gamal, Greene and Pang, which derives a lower bound on A_chip · l^2, where A_chip is the entire chip area required for encoding.
UR - http://www.scopus.com/inward/record.url?scp=84856099658&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84856099658&partnerID=8YFLogxK
U2 - 10.1109/Allerton.2011.6120330
DO - 10.1109/Allerton.2011.6120330
M3 - Conference contribution
AN - SCOPUS:84856099658
SN - 9781457718168
T3 - 2011 49th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2011
SP - 1392
EP - 1399
BT - 2011 49th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2011
Y2 - 28 September 2011 through 30 September 2011
ER -