TY - GEN
T1 - Reinforcement Learning for Minimizing Age of Information under Realistic Physical Dynamics
AU - Wang, Sihua
AU - Chen, Mingzhe
AU - Saad, Walid
AU - Yin, Changchuan
AU - Cui, Shuguang
AU - Poor, H. Vincent
N1 - Funding Information:
This work was supported in part by the Beijing Natural Science Foundation and Municipal Education Committee Joint Funding Project under Grant KZ201911232046, by the National Natural Science Foundation of China under Grants 61671086 and 61871041, by Beijing Laboratory Funding under Grant 2019BJLAB01, by the 111 Project under Grant B17007, by the Key Area R&D Program of Guangdong Province under Grant 2018B030338001, by the Natural Science Foundation of China under Grant NSFC-61629101, by the U.S. National Science Foundation under Grants CNS-1739642, CCF-0939370, and CCF-1908308, and by the BUPT Excellent Ph.D. Students Foundation under Grant CX2020307.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/12
Y1 - 2020/12
N2 - In this paper, the problem of minimizing the weighted sum of the age of information (AoI) and the total energy consumption of Internet of Things (IoT) devices is studied. In particular, each IoT device monitors a physical process that follows nonlinear dynamics. As the dynamics of the physical process vary over time, each device must sample the real-time status of the physical system and send the status information to a base station (BS) so that the BS can monitor the physical process. The dynamics of the realistic physical process influence the sampling frequency and status update scheme of each device. In particular, when the physical process varies rapidly, the sampling frequency of each device must be increased to capture these physical dynamics. Meanwhile, changes in the sampling frequency also affect the energy usage of each device. Thus, it is necessary to determine, at each time slot, a subset of devices to sample the physical process so as to accurately monitor its dynamics with minimum energy. This problem is formulated as an optimization problem whose goal is to minimize the weighted sum of the AoI and the total device energy consumption. To solve this problem, a machine learning framework based on the repeated update Q-learning (RUQL) algorithm is proposed. The proposed method enables the BS to overcome the biased action selection problem (i.e., an agent repeatedly taking a subset of actions while ignoring others) and, hence, to dynamically and quickly find a device sampling and status update policy that minimizes the sum of the AoI and the energy consumption of all devices. Simulations using real PM 2.5 pollution data for Beijing from the Center for Statistical Science at Peking University show that the proposed algorithm can reduce the sum of the AoI by up to 26.9% compared to the conventional Q-learning method.
UR - http://www.scopus.com/inward/record.url?scp=85100373488&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85100373488&partnerID=8YFLogxK
U2 - 10.1109/GLOBECOM42002.2020.9322139
DO - 10.1109/GLOBECOM42002.2020.9322139
M3 - Conference contribution
AN - SCOPUS:85100373488
T3 - 2020 IEEE Global Communications Conference, GLOBECOM 2020 - Proceedings
BT - 2020 IEEE Global Communications Conference, GLOBECOM 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE Global Communications Conference, GLOBECOM 2020
Y2 - 7 December 2020 through 11 December 2020
ER -