Instance-dependent Regret Bounds for Dueling Bandits

Akshay Balsubramani, Zohar Karnin, Robert E. Schapire, Masrour Zoghi

Research output: Contribution to journal › Conference article › peer-review

5 Scopus citations


We study the multi-armed dueling bandit problem in which feedback is provided in the form of relative comparisons between pairs of actions, with the goal of eventually learning to select actions that are close to the best. Following Dudík et al. (2015), we aim for algorithms whose performance approaches that of the optimal randomized choice of actions, the von Neumann winner, expressly avoiding more restrictive assumptions, for instance, regarding the existence of a single best action (a Condorcet winner). In this general setting, the best known algorithms achieve regret Õ(√KT) in T rounds with K actions. In this paper, we present the first instance-dependent regret bounds for the general problem, focusing particularly on when the von Neumann winner is sparse. Specifically, we propose a new algorithm whose regret, relative to a unique von Neumann winner with sparsity s, is at most Õ(√sT), plus an instance-dependent constant. Thus, when the sparsity is much smaller than the total number of actions, our result indicates that learning can be substantially faster.
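To make the solution concept concrete, the following is a minimal sketch (not the paper's bandit algorithm) of how a von Neumann winner can be approximated when the full preference matrix is known: it runs multiplicative-weights self-play on the symmetric zero-sum game induced by the matrix. The 3-arm cyclic preference matrix `P` is a hypothetical example chosen for illustration.

```python
import math

# Hypothetical 3-arm preference matrix with cyclic preferences
# (arm 0 beats arm 1, arm 1 beats arm 2, arm 2 beats arm 0).
# P[i][j] is the probability that arm i beats arm j.
P = [
    [0.5, 1.0, 0.0],
    [0.0, 0.5, 1.0],
    [1.0, 0.0, 0.5],
]

def von_neumann_winner(P, iters=5000, eta=0.05):
    """Approximate a von Neumann winner of preference matrix P by
    multiplicative-weights self-play; returns the average strategy."""
    K = len(P)
    weights = [1.0] * K
    avg = [0.0] * K
    for _ in range(iters):
        total = sum(weights)
        w = [x / total for x in weights]
        # Expected probability that each arm beats the current mixture w.
        payoff = [sum(P[i][j] * w[j] for j in range(K)) for i in range(K)]
        for i in range(K):
            avg[i] += w[i] / iters
        # Exponential-weights update, renormalized each round to avoid overflow.
        weights = [w[i] * math.exp(eta * payoff[i]) for i in range(K)]
    return avg

w = von_neumann_winner(P)
K = len(P)
# A von Neumann winner beats every single arm with probability >= 1/2;
# the self-play average achieves this up to the no-regret error.
worst = min(sum(w[i] * P[i][j] for i in range(K)) for j in range(K))
print([round(x, 3) for x in w], round(worst, 3))
```

Because the preferences here are cyclic (rock-paper-scissors style), the von Neumann winner mixes over all three arms, roughly uniformly. When a Condorcet winner does exist, one von Neumann winner is the pure strategy on that arm, i.e. sparsity s = 1, which is the regime where the paper's Õ(√sT) bound is most favorable.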

Original language: English (US)
Pages (from-to): 336-360
Number of pages: 25
Journal: Journal of Machine Learning Research
Issue number: June
State: Published - Jun 6 2016
Externally published: Yes
Event: 29th Conference on Learning Theory, COLT 2016 - New York, United States
Duration: Jun 23 2016 – Jun 26 2016

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence


Keywords

  • Dueling bandits
  • Game theory
  • Online learning


