TY - GEN
T1 - Provably Efficient Multi-Agent Reinforcement Learning with Fully Decentralized Communication
AU - Lidard, Justin
AU - Madhushani, Udari
AU - Leonard, Naomi Ehrich
N1 - Publisher Copyright:
© 2022 American Automatic Control Council.
PY - 2022
Y1 - 2022
N2 - A challenge in reinforcement learning (RL) is minimizing the cost of sampling associated with exploration. Distributed exploration reduces sampling complexity in multi-agent RL (MARL). We investigate the benefits to performance in MARL when exploration is fully decentralized. Specifically, we consider a class of online, episodic, tabular Q-learning problems under time-varying reward and transition dynamics, in which agents can communicate in a decentralized manner. We show that group performance, as measured by the bound on regret, can be significantly improved through communication when each agent uses a decentralized message-passing protocol, even when limited to sending information to neighbors up to γ hops away. We prove regret and sample complexity bounds that depend on the number of agents, the communication network structure, and γ. We show that incorporating more agents and more information sharing into the group learning scheme speeds up convergence to the optimal policy. Numerical simulations illustrate our results and validate our theoretical claims.
AB - A challenge in reinforcement learning (RL) is minimizing the cost of sampling associated with exploration. Distributed exploration reduces sampling complexity in multi-agent RL (MARL). We investigate the benefits to performance in MARL when exploration is fully decentralized. Specifically, we consider a class of online, episodic, tabular Q-learning problems under time-varying reward and transition dynamics, in which agents can communicate in a decentralized manner. We show that group performance, as measured by the bound on regret, can be significantly improved through communication when each agent uses a decentralized message-passing protocol, even when limited to sending information to neighbors up to γ hops away. We prove regret and sample complexity bounds that depend on the number of agents, the communication network structure, and γ. We show that incorporating more agents and more information sharing into the group learning scheme speeds up convergence to the optimal policy. Numerical simulations illustrate our results and validate our theoretical claims.
UR - http://www.scopus.com/inward/record.url?scp=85138496403&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85138496403&partnerID=8YFLogxK
U2 - 10.23919/ACC53348.2022.9867146
DO - 10.23919/ACC53348.2022.9867146
M3 - Conference contribution
AN - SCOPUS:85138496403
T3 - Proceedings of the American Control Conference
SP - 3311
EP - 3316
BT - 2022 American Control Conference, ACC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 American Control Conference, ACC 2022
Y2 - 8 June 2022 through 10 June 2022
ER -