Provably Efficient Multi-Agent Reinforcement Learning with Fully Decentralized Communication

Justin Lidard, Udari Madhushani, Naomi Ehrich Leonard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A challenge in reinforcement learning (RL) is minimizing the cost of sampling associated with exploration. Distributed exploration reduces sampling complexity in multi-agent RL (MARL). We investigate the benefits to performance in MARL when exploration is fully decentralized. Specifically, we consider a class of online, episodic, tabular Q-learning problems under time-varying reward and transition dynamics, in which agents can communicate in a decentralized manner. We show that group performance, as measured by the bound on regret, can be significantly improved through communication when each agent uses a decentralized message-passing protocol, even when limited to sending information up to its γ-hop neighbors. We prove regret and sample complexity bounds that depend on the number of agents, the communication network structure, and γ. We show that incorporating more agents and more information sharing into the group learning scheme speeds up convergence to the optimal policy. Numerical simulations illustrate our results and validate our theoretical claims.
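The γ-hop communication scheme described in the abstract can be illustrated with a minimal sketch: each agent finds its neighbors within γ hops of the communication graph, then pools their transition batches into its own tabular Q-learning update. This is an illustrative assumption-laden toy, not the paper's actual protocol or its time-varying-dynamics setting; the function names, the decaying step size, and the pooling rule are all hypothetical choices made here for concreteness.

```python
from collections import deque

def gamma_hop_neighbors(adj, agent, gamma):
    """Agents reachable from `agent` in at most `gamma` hops (BFS)."""
    dist = {agent: 0}
    queue = deque([agent])
    while queue:
        u = queue.popleft()
        if dist[u] == gamma:
            continue  # do not expand past the gamma-hop frontier
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {v for v in dist if v != agent}

def pooled_q_update(Q, counts, batches, agent, adj, gamma,
                    actions, discount=0.99):
    """One tabular Q-learning sweep for `agent`, pooling the transition
    batches of all gamma-hop neighbors with its own batch (a sketch of
    decentralized sample sharing; the decaying 1/n step size is an
    assumption, not the paper's learning rate)."""
    sources = {agent} | gamma_hop_neighbors(adj, agent, gamma)
    for src in sources:
        for (s, a, r, s_next) in batches.get(src, []):
            counts[(s, a)] = counts.get((s, a), 0) + 1
            alpha = 1.0 / counts[(s, a)]  # decaying step size
            best_next = max(Q.get((s_next, b), 0.0) for b in actions)
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + discount * best_next - old)
```

On a line graph 0–1–2–3, for example, agent 0 with γ=2 pools samples from agents 1 and 2 but not 3; raising γ widens the pool, which is the mechanism behind the abstract's claim that more information sharing speeds up convergence.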

Original language: English (US)
Title of host publication: 2022 American Control Conference, ACC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3311-3316
Number of pages: 6
ISBN (Electronic): 9781665451963
DOIs
State: Published - 2022
Event: 2022 American Control Conference, ACC 2022 - Atlanta, United States
Duration: Jun 8, 2022 – Jun 10, 2022

Publication series

Name: Proceedings of the American Control Conference
Volume: 2022-June
ISSN (Print): 0743-1619

Conference

Conference: 2022 American Control Conference, ACC 2022
Country/Territory: United States
City: Atlanta
Period: 6/8/22 – 6/10/22

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
