Multi-armed Bandit Problems with Strategic Arms

Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg

Research output: Contribution to journal › Conference article › peer-review

10 Scopus citations


We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round. When pulled, the arm receives some private reward v_a and can choose an amount x_a to pass on to the principal (keeping v_a - x_a for itself). All non-pulled arms get reward 0. Each strategic arm tries to maximize its own utility over the course of T rounds. Our goal is to design an algorithm for the principal that incentivizes these arms to pass on as much of their private rewards as possible. When private rewards are drawn stochastically each round (v_a^t ← D_a), we show that:

  • Algorithms that perform well in the classic adversarial multi-armed bandit setting necessarily perform poorly: for every algorithm that guarantees low regret in an adversarial setting, there exist distributions D_1, ..., D_k and an o(T)-approximate Nash equilibrium for the arms in which the principal receives reward o(T).
  • There exists an algorithm for the principal that induces a game among the arms in which each arm has a dominant strategy. Moreover, in every o(T)-approximate Nash equilibrium, the principal receives expected reward µ0·T - o(T), where µ0 is the second-largest of the means E[D_a]. This algorithm maintains its guarantee if the arms are non-strategic (x_a = v_a), and also if there is a mix of strategic and non-strategic arms.
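The interaction protocol the abstract describes can be sketched in code. The following is a minimal illustration, not the paper's dominant-strategy mechanism: the function names (`run_game`, `greedy`), the explore-then-commit principal, and the example distributions are all hypothetical choices for the sketch. Arms here are non-strategic (x_a = v_a), the special case the abstract notes the algorithm must also handle.

```python
import random

def run_game(dists, choose_arm, pass_amounts, T, seed=0):
    """One run of the strategic bandit protocol.

    dists[a](rng)     -- samples arm a's private reward v_a (the draw from D_a)
    choose_arm(hist)  -- the principal's algorithm: past (arm, x) pairs -> arm index
    pass_amounts[a]   -- arm a's strategy: v_a -> amount x_a passed to the principal
    Returns the principal's total collected reward over T rounds.
    """
    rng = random.Random(seed)
    history = []                 # the principal only ever observes (arm, x_a)
    principal_total = 0.0
    for _ in range(T):
        a = choose_arm(history)
        v = dists[a](rng)        # private reward; never seen by the principal
        x = pass_amounts[a](v)   # arm keeps v - x for itself
        principal_total += x
        history.append((a, x))
    return principal_total

def greedy(history, k=2, explore=10):
    """Toy principal: pull each arm `explore` times, then commit to the best mean."""
    pulls = [[x for (arm, x) in history if arm == i] for i in range(k)]
    for i, p in enumerate(pulls):
        if len(p) < explore:
            return i
    return max(range(k), key=lambda i: sum(pulls[i]) / len(pulls[i]))

# Example: two arms with D_1 = Uniform[0, 1] and D_2 = Uniform[0.5, 1],
# both non-strategic (they pass on everything: x_a = v_a).
dists = [lambda r: r.random(), lambda r: 0.5 + 0.5 * r.random()]
total = run_game(dists, greedy, [lambda v: v] * 2, T=1000)
```

Against strategic arms, the abstract's negative result says a low-adversarial-regret principal in this loop can be driven to o(T) total reward at an approximate equilibrium; the paper's positive result replaces `greedy` with a mechanism under which truthful-style passing is dominant.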

Original language: English (US)
Pages (from-to): 383-416
Number of pages: 34
Journal: Proceedings of Machine Learning Research
State: Published - 2019
Externally published: Yes
Event: 32nd Conference on Learning Theory, COLT 2019 - Phoenix, United States
Duration: Jun 25 2019 - Jun 28 2019

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability


Keywords

  • auction design
  • multi-armed bandit
  • strategic learning


