TY - GEN
T1 - Asynchronous Federated Reinforcement Learning with Policy Gradient Updates
T2 - 13th International Conference on Learning Representations, ICLR 2025
AU - Lan, Guangchen
AU - Han, Dong Jun
AU - Hashemi, Abolfazl
AU - Aggarwal, Vaneet
AU - Brinton, Christopher G.
N1 - Publisher Copyright:
© 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.
PY - 2025
Y1 - 2025
N2 - To improve the efficiency of reinforcement learning (RL), we propose a novel asynchronous federated reinforcement learning (FedRL) framework termed AFedPG, which constructs a global model through collaboration among N agents using policy gradient (PG) updates. To address the challenge of lagged policies in asynchronous settings, we design a delay-adaptive lookahead technique specifically for FedRL that can effectively handle heterogeneous arrival times of policy gradients. We analyze the theoretical global convergence bound of AFedPG, and characterize the advantage of the proposed algorithm in terms of both the sample complexity and time complexity. Specifically, our AFedPG method achieves O(ϵ^{-2.5}/N) sample complexity for global convergence at each agent on average. Compared to the single-agent setting with O(ϵ^{-2.5}) sample complexity, it enjoys a linear speedup with respect to the number of agents. Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from O(t_max/N) to O((∑_{i=1}^{N} 1/t_i)^{-1}), where t_i denotes the time consumption in each iteration at agent i, and t_max is the largest one. The latter complexity O((∑_{i=1}^{N} 1/t_i)^{-1}) is always smaller than the former one, and this improvement becomes significant in large-scale federated settings with heterogeneous computing powers (t_max ≫ t_min). Finally, we empirically verify the improved performance of AFedPG in four widely used MuJoCo environments with varying numbers of agents. We also demonstrate the advantages of AFedPG in various computing heterogeneity scenarios.
AB - To improve the efficiency of reinforcement learning (RL), we propose a novel asynchronous federated reinforcement learning (FedRL) framework termed AFedPG, which constructs a global model through collaboration among N agents using policy gradient (PG) updates. To address the challenge of lagged policies in asynchronous settings, we design a delay-adaptive lookahead technique specifically for FedRL that can effectively handle heterogeneous arrival times of policy gradients. We analyze the theoretical global convergence bound of AFedPG, and characterize the advantage of the proposed algorithm in terms of both the sample complexity and time complexity. Specifically, our AFedPG method achieves O(ϵ^{-2.5}/N) sample complexity for global convergence at each agent on average. Compared to the single-agent setting with O(ϵ^{-2.5}) sample complexity, it enjoys a linear speedup with respect to the number of agents. Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from O(t_max/N) to O((∑_{i=1}^{N} 1/t_i)^{-1}), where t_i denotes the time consumption in each iteration at agent i, and t_max is the largest one. The latter complexity O((∑_{i=1}^{N} 1/t_i)^{-1}) is always smaller than the former one, and this improvement becomes significant in large-scale federated settings with heterogeneous computing powers (t_max ≫ t_min). Finally, we empirically verify the improved performance of AFedPG in four widely used MuJoCo environments with varying numbers of agents. We also demonstrate the advantages of AFedPG in various computing heterogeneity scenarios.
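N1 - Note on the time-complexity comparison: the abstract's claim that the asynchronous complexity O((∑_{i=1}^{N} 1/t_i)^{-1}) never exceeds the synchronous O(t_max/N) follows from bounding each per-iteration time t_i by t_max. The derivation below uses only the abstract's notation and is a brief sketch, not the paper's own proof.
\[
\sum_{i=1}^{N} \frac{1}{t_i} \;\ge\; \sum_{i=1}^{N} \frac{1}{t_{\max}} \;=\; \frac{N}{t_{\max}}
\quad\Longrightarrow\quad
\Bigl(\sum_{i=1}^{N} \frac{1}{t_i}\Bigr)^{-1} \;\le\; \frac{t_{\max}}{N},
\]
with equality only when all agents are equally fast (t_1 = ... = t_N = t_max); under heterogeneous computing powers (t_max ≫ t_min) the gap widens, consistent with the abstract's claim.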
UR - https://www.scopus.com/pages/publications/105010207593
UR - https://www.scopus.com/inward/citedby.url?scp=105010207593&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:105010207593
T3 - 13th International Conference on Learning Representations, ICLR 2025
SP - 9444
EP - 9474
BT - 13th International Conference on Learning Representations, ICLR 2025
PB - International Conference on Learning Representations, ICLR
Y2 - 24 April 2025 through 28 April 2025
ER -