ASYNCHRONOUS FEDERATED REINFORCEMENT LEARNING WITH POLICY GRADIENT UPDATES: ALGORITHM DESIGN AND CONVERGENCE ANALYSIS

Guangchen Lan, Dong Jun Han, Abolfazl Hashemi, Vaneet Aggarwal, Christopher G. Brinton

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

To improve the efficiency of reinforcement learning (RL), we propose a novel asynchronous federated reinforcement learning (FedRL) framework termed AFedPG, which constructs a global model through collaboration among N agents using policy gradient (PG) updates. To address the challenge of lagged policies in asynchronous settings, we design a delay-adaptive lookahead technique specifically for FedRL that can effectively handle heterogeneous arrival times of policy gradients. We analyze the theoretical global convergence bound of AFedPG, and characterize the advantage of the proposed algorithm in terms of both the sample complexity and time complexity. Specifically, our AFedPG method achieves O(ϵ^{-2.5}/N) sample complexity for global convergence at each agent on average. Compared to the single-agent setting with O(ϵ^{-2.5}) sample complexity, it enjoys a linear speedup with respect to the number of agents. Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from O(t_max/N) to O((∑_{i=1}^{N} 1/t_i)^{-1}), where t_i denotes the time consumption in each iteration at agent i, and t_max is the largest one. The latter complexity O((∑_{i=1}^{N} 1/t_i)^{-1}) is always smaller than the former one, and this improvement becomes significant in large-scale federated settings with heterogeneous computing powers (t_max ≫ t_min). Finally, we empirically verify the improved performance of AFedPG in four widely used MuJoCo environments with varying numbers of agents. We also demonstrate the advantages of AFedPG in various computing heterogeneity scenarios.
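The time-complexity comparison in the abstract can be checked numerically: with synchronous FedPG every round waits for the slowest agent (cost t_max/N per unit of work), while the asynchronous rate is governed by the harmonic sum (∑_i 1/t_i)^{-1}, which is never larger. A minimal sketch, using made-up per-iteration times t_i (not from the paper):

```python
# Illustrative comparison of per-iteration effective time for synchronous
# vs asynchronous federated PG under heterogeneous agent speeds.
# The t_i values below are hypothetical, chosen only to show the gap.

t = [1.0, 2.0, 4.0, 8.0]  # hypothetical per-iteration times t_i for N = 4 agents
N = len(t)

# Synchronous: each round is gated by the slowest agent -> O(t_max / N)
sync_time = max(t) / N

# Asynchronous: agents contribute at their own rates 1/t_i, so the
# aggregate cost per update scales as O((sum_i 1/t_i)^{-1})
async_time = 1.0 / sum(1.0 / ti for ti in t)

assert async_time <= sync_time  # holds for any positive t_i
print(f"sync:  {sync_time:.4f}")   # 2.0000
print(f"async: {async_time:.4f}")  # 0.5333
```

The gap widens as the agents become more heterogeneous: if one t_i grows while the others stay fixed, sync_time grows linearly with t_max, but async_time stays bounded by the fastest agents' rates.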

Original language: English (US)
Title of host publication: 13th International Conference on Learning Representations, ICLR 2025
Publisher: International Conference on Learning Representations, ICLR
Pages: 9444-9474
Number of pages: 31
ISBN (Electronic): 9798331320850
State: Published - 2025
Event: 13th International Conference on Learning Representations, ICLR 2025 - Singapore, Singapore
Duration: Apr 24, 2025 - Apr 28, 2025

Publication series

Name: 13th International Conference on Learning Representations, ICLR 2025

Conference

Conference: 13th International Conference on Learning Representations, ICLR 2025
Country/Territory: Singapore
City: Singapore
Period: 4/24/25 - 4/28/25

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Computer Science Applications
  • Education
  • Linguistics and Language
