Bandit problems with side observations

Research output: Contribution to journal › Conference article › peer-review

Abstract

An extension of the traditional two-armed bandit problem is considered, in which the decision maker has access to some side information before deciding which arm to pull. At each time t, before making a selection, the decision maker is able to observe a random variable, Xt, that provides some information on the rewards to be obtained. The focus is on finding uniformly good rules (that minimize the growth rate of the regret) and on quantifying how much the additional information helps. Various settings are considered and asymptotically tight lower bounds on the achievable regret are provided.
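The setting described in the abstract can be illustrated with a toy simulation. The sketch below is not the paper's algorithm; it is a minimal, hypothetical example in which a binary side observation X_t is revealed before each pull, arm reward means depend on X_t (the specific means are made-up values), and a per-context UCB1 index rule accumulates regret against the best arm for each context.

```python
import math
import random


def run_bandit(T=5000, seed=0):
    """Toy two-armed bandit with a binary side observation X_t.

    Hypothetical setup: each round, a context x in {0, 1} is observed
    before choosing an arm; Bernoulli reward means depend on x.
    A separate UCB1 index is maintained per context.
    Returns the cumulative (pseudo-)regret over T rounds.
    """
    rng = random.Random(seed)
    # context -> (mean of arm 0, mean of arm 1); illustrative values only
    means = {0: (0.3, 0.7), 1: (0.8, 0.2)}
    counts = {0: [0, 0], 1: [0, 0]}   # pulls per (context, arm)
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}  # reward totals per (context, arm)
    regret = 0.0
    for t in range(1, T + 1):
        x = rng.randrange(2)  # side observation, revealed before the pull

        def ucb(a):
            # UCB1 index computed only from rounds with this context
            n = counts[x][a]
            if n == 0:
                return float("inf")  # force initial exploration
            return sums[x][a] / n + math.sqrt(2.0 * math.log(t) / n)

        a = 0 if ucb(0) >= ucb(1) else 1
        reward = 1.0 if rng.random() < means[x][a] else 0.0
        counts[x][a] += 1
        sums[x][a] += reward
        regret += max(means[x]) - means[x][a]
    return regret
```

Because the side observation identifies which arm is better in each context, the per-context rule incurs regret growing only logarithmically in T, whereas a rule that ignores X_t faces arms that look nearly identical on average; this is the kind of gap the paper's lower bounds quantify.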

Original language: English (US)
Pages (from-to): 3988-3993
Number of pages: 6
Journal: Proceedings of the IEEE Conference on Decision and Control
Volume: 4
State: Published - 2002
Event: 41st IEEE Conference on Decision and Control - Las Vegas, NV, United States
Duration: Dec 10, 2002 - Dec 13, 2002

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization

Keywords

  • Adaptive
  • Allocation rule
  • Asymptotic
  • Efficient
  • Regret
  • Side information
  • Two-armed bandit
