TY - GEN
T1 - A Memory-Based Reinforcement Learning Approach to Integrated Sensing and Communication
AU - Nikbakht, Homa
AU - Wigger, Michèle
AU - Shamai, Shlomo
AU - Poor, H. Vincent
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In this paper, we consider a point-to-point integrated sensing and communication (ISAC) system, where a transmitter conveys a message to a receiver over a channel with memory and simultaneously estimates the state of the channel through the backscattered signals from the emitted waveform. Using Massey's concept of directed information for channels with memory, we formulate the capacity-distortion tradeoff for the ISAC problem when sensing is performed in an online fashion. Optimizing the transmit waveform of this system to simultaneously achieve good communication and sensing performance is a complicated task, and thus we propose a deep reinforcement learning (RL) approach to find a solution. The proposed approach enables the agent to optimize the ISAC performance by learning a reward that reflects the difference between the communication gain and the sensing loss. Since the state space in our RL model is a priori unbounded, we employ the deep deterministic policy gradient (DDPG) algorithm. Our numerical results suggest a significant performance improvement when one considers an unbounded state space as opposed to a simpler RL problem with a reduced state space. In the extreme case of a degenerate state space, only memoryless signaling strategies are possible. Our results thus emphasize the necessity of properly exploiting the memory inherent in ISAC systems.
AB - In this paper, we consider a point-to-point integrated sensing and communication (ISAC) system, where a transmitter conveys a message to a receiver over a channel with memory and simultaneously estimates the state of the channel through the backscattered signals from the emitted waveform. Using Massey's concept of directed information for channels with memory, we formulate the capacity-distortion tradeoff for the ISAC problem when sensing is performed in an online fashion. Optimizing the transmit waveform of this system to simultaneously achieve good communication and sensing performance is a complicated task, and thus we propose a deep reinforcement learning (RL) approach to find a solution. The proposed approach enables the agent to optimize the ISAC performance by learning a reward that reflects the difference between the communication gain and the sensing loss. Since the state space in our RL model is a priori unbounded, we employ the deep deterministic policy gradient (DDPG) algorithm. Our numerical results suggest a significant performance improvement when one considers an unbounded state space as opposed to a simpler RL problem with a reduced state space. In the extreme case of a degenerate state space, only memoryless signaling strategies are possible. Our results thus emphasize the necessity of properly exploiting the memory inherent in ISAC systems.
UR - https://www.scopus.com/pages/publications/105002677992
UR - https://www.scopus.com/inward/citedby.url?scp=105002677992&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF60004.2024.10942618
DO - 10.1109/IEEECONF60004.2024.10942618
M3 - Conference contribution
AN - SCOPUS:105002677992
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 433
EP - 437
BT - Conference Record of the 58th Asilomar Conference on Signals, Systems and Computers, ACSSC 2024
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 58th Asilomar Conference on Signals, Systems and Computers, ACSSC 2024
Y2 - 27 October 2024 through 30 October 2024
ER -