TY - JOUR
T1 - Channel capacity and state estimation for state-dependent Gaussian channels
AU - Sutivong, Arak
AU - Chiang, Mung
AU - Cover, Thomas M.
AU - Kim, Young-Han
N1 - Funding Information:
Manuscript received October 20, 2002; revised January 12, 2005. The work of A. Sutivong, T. M. Cover, and Y.-H. Kim was supported in part by the National Science Foundation under Grants CCR-9973134 and CCR-0311633, by MURI under Grant DAAD-19-99-1-0215, and by the Stanford Networking Research Center (SNRC). The work of M. Chiang was supported by the Hertz Foundation Fellowship and a Stanford Graduate Fellowship. The material in this correspondence was presented in part at the IEEE International Symposium on Information Theory and Its Applications, Honolulu, HI, November 2000, and at the IEEE International Symposium on Information Theory, Washington, DC, June 2001.
PY - 2005/4
Y1 - 2005/4
N2 - We formulate a problem of state information transmission over a state-dependent channel with states known at the transmitter. In particular, we solve a problem of minimizing the mean-squared channel state estimation error E∥S^n - Ŝ^n∥² for a state-dependent additive Gaussian channel Y^n = X^n + S^n + Z^n with an independent and identically distributed (i.i.d.) Gaussian state sequence S^n = (S_1, ..., S_n) known at the transmitter and an unknown i.i.d. additive Gaussian noise Z^n. We show that a simple technique of direct state amplification (i.e., X^n = αS^n), where the transmitter uses its entire power budget to amplify the channel state, yields the minimum mean-squared state estimation error. This same channel can also be used to send additional independent information at the expense of a higher channel state estimation error. We characterize the optimal tradeoff between the rate R of the independent information that can be reliably transmitted and the mean-squared state estimation error D. We show that any optimal (R, D) tradeoff pair can be achieved via a simple power-sharing technique, whereby the transmitter power is appropriately allocated between pure information transmission and state amplification.
AB - We formulate a problem of state information transmission over a state-dependent channel with states known at the transmitter. In particular, we solve a problem of minimizing the mean-squared channel state estimation error E∥S^n - Ŝ^n∥² for a state-dependent additive Gaussian channel Y^n = X^n + S^n + Z^n with an independent and identically distributed (i.i.d.) Gaussian state sequence S^n = (S_1, ..., S_n) known at the transmitter and an unknown i.i.d. additive Gaussian noise Z^n. We show that a simple technique of direct state amplification (i.e., X^n = αS^n), where the transmitter uses its entire power budget to amplify the channel state, yields the minimum mean-squared state estimation error. This same channel can also be used to send additional independent information at the expense of a higher channel state estimation error. We characterize the optimal tradeoff between the rate R of the independent information that can be reliably transmitted and the mean-squared state estimation error D. We show that any optimal (R, D) tradeoff pair can be achieved via a simple power-sharing technique, whereby the transmitter power is appropriately allocated between pure information transmission and state amplification.
KW - Additive Gaussian noise channels
KW - Channels with state information
KW - Joint source-channel coding
KW - State amplification
KW - State estimation
UR - http://www.scopus.com/inward/record.url?scp=17644378374&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=17644378374&partnerID=8YFLogxK
U2 - 10.1109/TIT.2005.844108
DO - 10.1109/TIT.2005.844108
M3 - Article
AN - SCOPUS:17644378374
SN - 0018-9448
VL - 51
SP - 1486
EP - 1495
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
IS - 4
ER -
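
Note (not part of the cited record): the full-power state-amplification scheme described in the abstract can be sanity-checked numerically. The sketch below is a minimal, hypothetical Python simulation, not code from the paper. It assumes illustrative values for the transmit power P, state variance Q, and noise variance N, sends X_i = αS_i with α = √(P/Q) so that the whole power budget amplifies the state, and then estimates S_i from the channel output Y_i = X_i + S_i + Z_i by standard linear MMSE estimation; the empirical error is compared against the closed-form distortion QN/((√P + √Q)² + N) that this per-symbol scheme attains.

```python
import numpy as np

# Hypothetical parameters: transmit power P, state variance Q, noise variance N.
P, Q, N = 1.0, 1.0, 1.0
n = 1_000_000

rng = np.random.default_rng(0)
S = rng.normal(0.0, np.sqrt(Q), n)   # i.i.d. Gaussian state, known at the transmitter
Z = rng.normal(0.0, np.sqrt(N), n)   # i.i.d. Gaussian noise, unknown to both ends

alpha = np.sqrt(P / Q)               # full-power state amplification: X = alpha * S
Y = alpha * S + S + Z                # channel output Y = X + S + Z = (1 + alpha) S + Z

# (S, Y) are jointly Gaussian, so the linear MMSE estimate of S from Y is optimal
# for this per-symbol scheme.
gain = (1 + alpha) * Q / ((1 + alpha) ** 2 * Q + N)
S_hat = gain * Y

mse_empirical = np.mean((S - S_hat) ** 2)
mse_closed_form = Q * N / ((np.sqrt(P) + np.sqrt(Q)) ** 2 + N)
print(f"empirical MSE   : {mse_empirical:.5f}")
print(f"closed-form MSE : {mse_closed_form:.5f}")
```

The closed-form value follows from ordinary Gaussian MMSE estimation for Y = (1 + α)S + Z; the paper's stronger claim, per the abstract, is that no other use of the same power budget achieves a smaller mean-squared state estimation error.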