TY - JOUR
T1 - Distributed Reinforcement Learning for Age of Information Minimization in Real-Time IoT Systems
AU - Wang, Sihua
AU - Chen, Mingzhe
AU - Yang, Zhaohui
AU - Yin, Changchuan
AU - Saad, Walid
AU - Cui, Shuguang
AU - Poor, H. Vincent
N1 - Publisher Copyright:
© 2007-2012 IEEE.
PY - 2022/4/1
Y1 - 2022/4/1
N2 - In this paper, the problem of minimizing the weighted sum of the age of information (AoI) and the total energy consumption of Internet of Things (IoT) devices is studied. In the considered model, each IoT device monitors a physical process that follows nonlinear dynamics. As the dynamics of the physical process vary over time, each device must find an optimal sampling frequency with which to sample the real-time dynamics of the physical system and send the sampled information to a base station (BS). Due to limited wireless resources, the BS can select only a subset of devices to transmit their sampled information. Thus, edge devices can cooperatively sample their monitored dynamics based on local observations, and the BS collects the sampled information from the devices immediately, thereby avoiding the additional time and energy that unnecessary sampling and information transmission would incur. To this end, it is necessary to jointly optimize the sampling policy of each device and the device selection scheme of the BS so as to accurately monitor the dynamics of the physical process with minimum energy. This problem is formulated as an optimization problem whose goal is to minimize the weighted sum of the AoI cost and the energy consumption. To solve this problem, we propose a novel distributed reinforcement learning (RL) approach for sampling policy optimization. The proposed algorithm enables edge devices to cooperatively find the globally optimal sampling policy using only their own local observations. Given the sampling policy, the device selection scheme can then be optimized, thus minimizing the weighted sum of the AoI and energy consumption of all devices. Simulations with real PM2.5 pollution data show that, compared to a conventional deep Q-network method and a uniform sampling policy, the proposed algorithm reduces the sum of AoI by up to 17.8% and 33.9%, respectively, and the total energy consumption by up to 13.2% and 35.1%, respectively.
AB - In this paper, the problem of minimizing the weighted sum of the age of information (AoI) and the total energy consumption of Internet of Things (IoT) devices is studied. In the considered model, each IoT device monitors a physical process that follows nonlinear dynamics. As the dynamics of the physical process vary over time, each device must find an optimal sampling frequency with which to sample the real-time dynamics of the physical system and send the sampled information to a base station (BS). Due to limited wireless resources, the BS can select only a subset of devices to transmit their sampled information. Thus, edge devices can cooperatively sample their monitored dynamics based on local observations, and the BS collects the sampled information from the devices immediately, thereby avoiding the additional time and energy that unnecessary sampling and information transmission would incur. To this end, it is necessary to jointly optimize the sampling policy of each device and the device selection scheme of the BS so as to accurately monitor the dynamics of the physical process with minimum energy. This problem is formulated as an optimization problem whose goal is to minimize the weighted sum of the AoI cost and the energy consumption. To solve this problem, we propose a novel distributed reinforcement learning (RL) approach for sampling policy optimization. The proposed algorithm enables edge devices to cooperatively find the globally optimal sampling policy using only their own local observations. Given the sampling policy, the device selection scheme can then be optimized, thus minimizing the weighted sum of the AoI and energy consumption of all devices. Simulations with real PM2.5 pollution data show that, compared to a conventional deep Q-network method and a uniform sampling policy, the proposed algorithm reduces the sum of AoI by up to 17.8% and 33.9%, respectively, and the total energy consumption by up to 13.2% and 35.1%, respectively.
KW - physical process
KW - age of information
KW - distributed reinforcement learning
KW - sampling frequency
UR - http://www.scopus.com/inward/record.url?scp=85123760212&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123760212&partnerID=8YFLogxK
U2 - 10.1109/JSTSP.2022.3144874
DO - 10.1109/JSTSP.2022.3144874
M3 - Article
AN - SCOPUS:85123760212
SN - 1932-4553
VL - 16
SP - 501
EP - 515
JO - IEEE Journal on Selected Topics in Signal Processing
JF - IEEE Journal on Selected Topics in Signal Processing
IS - 3
ER -