Distributed cooperative decision making in multi-agent multi-armed bandits

Peter Landgren, Vaibhav Srivastava, Naomi Ehrich Leonard

Research output: Contribution to journal › Article › peer-review


Abstract

We study a distributed decision-making problem in which multiple agents face the same multi-armed bandit (MAB), and each agent makes sequential choices among arms to maximize its own individual reward. The agents cooperate by sharing their estimates over a fixed communication graph. We consider an unconstrained reward model in which two or more agents can choose the same arm and collect independent rewards, and a constrained reward model in which agents that choose the same arm at the same time receive no reward. We design a dynamic, consensus-based, distributed estimation algorithm for cooperative estimation of mean rewards at each arm. We leverage the estimates from this algorithm to develop two distributed algorithms: coop-UCB2 and coop-UCB2-selective-learning, for the unconstrained and constrained reward models, respectively. We show that both algorithms achieve group performance close to the performance of a centralized fusion center. Further, we investigate the influence of the communication graph structure on performance. We propose a novel graph explore–exploit index that predicts the relative performance of groups in terms of the communication graph, and we propose a novel nodal explore–exploit centrality index that predicts the relative performance of agents in terms of the agent locations in the communication graph.
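The cooperative scheme described in the abstract can be pictured as each agent running a UCB-style rule on locally held reward and pull-count estimates that are repeatedly averaged with its graph neighbors. The Python snippet below is a minimal sketch of that idea under the unconstrained reward model; it is not the paper's exact coop-UCB2 algorithm, and the ring communication graph, consensus weight matrix W, Bernoulli arms, and standard UCB bonus are illustrative assumptions.

import numpy as np

# Minimal sketch (not the paper's exact coop-UCB2): N agents face the same
# K-armed Bernoulli bandit. Each agent keeps running estimates of per-arm
# reward sums (s) and pull counts (n), picks the arm with the largest UCB
# index, then averages its estimates with its graph neighbors via a
# row-stochastic consensus matrix W built from a fixed communication graph.

rng = np.random.default_rng(0)
N, K, T = 4, 5, 2000                  # agents, arms, horizon (illustrative)
mu = rng.uniform(0.1, 0.9, size=K)    # unknown mean rewards

# Fixed communication graph: a ring over the N agents.
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
deg = A.sum(axis=1)
W = np.eye(N) - 0.5 / deg.max() * (np.diag(deg) - A)    # consensus weights

s = np.zeros((N, K))                  # estimated cumulative reward per arm, per agent
n = np.full((N, K), 1e-6)             # estimated pull counts (small init avoids divide by zero)

for t in range(1, T + 1):
    ucb = s / n + np.sqrt(2.0 * np.log(t + 1) / n)       # UCB index per agent and arm
    arms = ucb.argmax(axis=1)                            # each agent picks its own arm
    rewards = (rng.random(N) < mu[arms]).astype(float)   # unconstrained model: independent rewards
    for i in range(N):
        s[i, arms[i]] += rewards[i]
        n[i, arms[i]] += 1.0
    s, n = W @ s, W @ n                                   # one consensus (averaging) step

print("true means:       ", np.round(mu, 2))
print("agent 0 estimates:", np.round(s[0] / n[0], 2))

Because the consensus step mixes information across the graph, each agent's estimates converge toward group-wide averages, which is the mechanism the paper exploits to approach the performance of a centralized fusion center; the graph-dependent indices proposed in the paper quantify how the graph and an agent's position in it affect this.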

Original language: English (US)
Article number: 109445
Journal: Automatica
Volume: 125
DOIs
State: Published - Mar 2021
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Electrical and Electronic Engineering

Keywords

  • Distributed decision making
  • Explore–exploit dilemma
  • Multi-agent systems
  • Multi-armed bandits

