TY - JOUR
T1 - Distributed Multi-Agent Meta Learning for Trajectory Design in Wireless Drone Networks
AU - Hu, Ye
AU - Chen, Mingzhe
AU - Saad, Walid
AU - Poor, H. Vincent
AU - Cui, Shuguang
N1 - Funding Information:
Manuscript received October 20, 2020; revised February 21, 2021; accepted April 11, 2021. Date of publication June 16, 2021; date of current version September 16, 2021. This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB1800800; in part by the U.S. National Science Foundation (NSF) under Grant CNS-1909372, Grant CNS-1836802, and Grant CCF-1908308; in part by the Key Area Research and Development Program of Guangdong Province under Grant 2018B030338001; in part by the Shenzhen Outstanding Talents Training Fund; and in part by the Guangdong Research Project under Grant 2017ZT07X152. This article was presented at the IEEE Global Communications Conference [1]. (Corresponding author: Ye Hu.) Ye Hu is with the Wireless@VT, The Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061 USA (e-mail: yeh17@vt.edu).
Publisher Copyright:
© 1983-2012 IEEE.
PY - 2021/10
Y1 - 2021/10
N2 - In this paper, the problem of trajectory design for a group of energy-constrained drones operating in dynamic wireless network environments is studied. In the considered model, a team of drone base stations (DBSs) is dispatched to cooperatively serve clusters of ground users that have dynamic and unpredictable uplink access demands. In this scenario, the DBSs must cooperatively navigate in the considered area to maximize coverage of the dynamic requests of the ground users. This trajectory design problem is posed as an optimization framework whose goal is to find optimal trajectories that maximize the fraction of users served by all DBSs. To find an optimal solution for this non-convex optimization problem under unpredictable environments, a value decomposition-based reinforcement learning (VD-RL) solution coupled with a meta-training mechanism is proposed. This algorithm allows the DBSs to dynamically learn their trajectories while generalizing their learning to unseen environments. Analytical results show that the proposed VD-RL algorithm is guaranteed to converge to a locally optimal solution of the non-convex optimization problem. Simulation results show that, even without meta-training, the proposed VD-RL algorithm can achieve a 53.2% improvement in service coverage and a 30.6% improvement in convergence speed compared to baseline multi-agent algorithms. Meanwhile, the use of the meta-training mechanism improves the convergence speed of the VD-RL algorithm by up to 53.8% when the DBSs must deal with a previously unseen task.
AB - In this paper, the problem of trajectory design for a group of energy-constrained drones operating in dynamic wireless network environments is studied. In the considered model, a team of drone base stations (DBSs) is dispatched to cooperatively serve clusters of ground users that have dynamic and unpredictable uplink access demands. In this scenario, the DBSs must cooperatively navigate in the considered area to maximize coverage of the dynamic requests of the ground users. This trajectory design problem is posed as an optimization framework whose goal is to find optimal trajectories that maximize the fraction of users served by all DBSs. To find an optimal solution for this non-convex optimization problem under unpredictable environments, a value decomposition-based reinforcement learning (VD-RL) solution coupled with a meta-training mechanism is proposed. This algorithm allows the DBSs to dynamically learn their trajectories while generalizing their learning to unseen environments. Analytical results show that the proposed VD-RL algorithm is guaranteed to converge to a locally optimal solution of the non-convex optimization problem. Simulation results show that, even without meta-training, the proposed VD-RL algorithm can achieve a 53.2% improvement in service coverage and a 30.6% improvement in convergence speed compared to baseline multi-agent algorithms. Meanwhile, the use of the meta-training mechanism improves the convergence speed of the VD-RL algorithm by up to 53.8% when the DBSs must deal with a previously unseen task.
KW - Drones
KW - meta-learning
KW - multi-agent reinforcement learning
KW - network optimization
UR - http://www.scopus.com/inward/record.url?scp=85112168484&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112168484&partnerID=8YFLogxK
U2 - 10.1109/JSAC.2021.3088689
DO - 10.1109/JSAC.2021.3088689
M3 - Article
AN - SCOPUS:85112168484
SN - 0733-8716
VL - 39
SP - 3177
EP - 3192
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
IS - 10
ER -