Nonstochastic multi-armed bandits with graph-structured feedback

Noga Alon, Nicolo Cesa-Bianchi, Claudio Gentile, Shie Mannor, Yishay Mansour, Ohad Shamir

Research output: Contribution to journal › Article

13 Scopus citations

Abstract

We introduce and study a partial-information model of online learning, where a decision maker repeatedly chooses from a finite set of actions and observes some subset of the associated losses. This setting naturally models several situations where knowing the loss of one action provides information on the loss of other actions. Moreover, it generalizes and interpolates between the well-studied full-information setting (where all losses are revealed) and the bandit setting (where only the loss of the action chosen by the player is revealed). We provide several algorithms addressing different variants of our setting and provide tight regret bounds depending on combinatorial properties of the information feedback structure.
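To make the feedback model concrete, here is a minimal Python sketch of an Exp3-SET-style learner in the spirit of this setting (a simplified illustration, not the paper's exact algorithm or analysis): each round the player samples an action from an exponential-weights distribution, observes the losses of all actions reachable in the feedback graph, and importance-weights each observed loss by the probability that it was observed. The function name `exp3_set` and its interface are assumptions for this sketch.

```python
import math
import random

def exp3_set(n_actions, graph, losses, eta, T, rng=random.Random(0)):
    """Sketch of an Exp3-SET-style learner with graph-structured feedback.

    graph[i]    -- set of actions whose loss is revealed when action i is
                   played (assumed to contain i itself)
    losses[t][i] -- loss of action i at round t, in [0, 1]
    Returns the cumulative loss incurred by the player.
    """
    weights = [1.0] * n_actions
    total_loss = 0.0
    for t in range(T):
        z = sum(weights)
        probs = [w / z for w in weights]
        # Sample an action from the exponential-weights distribution.
        arm = rng.choices(range(n_actions), weights=probs)[0]
        total_loss += losses[t][arm]
        for i in graph[arm]:
            # Probability that action i's loss is observed this round:
            # total probability of playing any action that reveals i.
            q_i = sum(probs[j] for j in range(n_actions) if i in graph[j])
            est = losses[t][i] / q_i  # importance-weighted loss estimate
            weights[i] *= math.exp(-eta * est)
    return total_loss
```

With the complete feedback graph this reduces to the full-information (experts) setting, and with only self-loops it reduces to the standard bandit setting, which is exactly the interpolation the abstract describes.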

Original language: English (US)
Pages (from-to): 1785-1826
Number of pages: 42
Journal: SIAM Journal on Computing
Volume: 46
Issue number: 6
DOI: https://doi.org/10.1137/140989455
State: Published - Jan 1 2017
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Mathematics (all)

Keywords

  • Graph theory
  • Learning from experts
  • Learning with partial feedback
  • Multi-armed bandits
  • Online learning


Cite this

Alon, N., Cesa-Bianchi, N., Gentile, C., Mannor, S., Mansour, Y., & Shamir, O. (2017). Nonstochastic multi-armed bandits with graph-structured feedback. SIAM Journal on Computing, 46(6), 1785-1826. https://doi.org/10.1137/140989455