Contextual bandit learning with predictable rewards

Alekh Agarwal, Miroslav Dudík, Satyen Kale, John Langford, Robert E. Schapire

Research output: Contribution to journal › Conference article › peer-review

34 Scopus citations

Abstract

Contextual bandit learning is a reinforcement learning problem in which the learner repeatedly receives a set of features (context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always predicts the expected reward, given the action and context. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with regret similar to the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior performance in the worst case, even under the realizability assumption. However, we do show that for any set of policies (mapping contexts to actions), there is a distribution over rewards (given context) under which our new algorithm achieves constant regret, unlike previous approaches.
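The abstract only names Regressor Elimination without giving details, so the following is a minimal toy sketch of the contextual bandit protocol and the general elimination idea, not the paper's actual algorithm. The linear regressor class, the noise model, the uniform exploration rule over disagreeing actions, and the ad-hoc elimination threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: K actions, d-dimensional contexts, a finite regressor class.
K, d, T = 3, 5, 2000

# Hypothetical linear class f_w(x, a) = w[a] @ x; by construction, member 0
# is the true reward function, so realizability holds.
true_w = rng.normal(size=(K, d))
candidates = [true_w] + [rng.normal(size=(K, d)) for _ in range(9)]

def predict(w, x):
    return w @ x  # vector of predicted expected rewards, one entry per action

active = list(range(len(candidates)))  # indices of surviving regressors
sq_loss = np.zeros(len(candidates))    # cumulative squared prediction error
total_reward = 0.0

for t in range(1, T + 1):
    x = rng.normal(size=d)
    # Explore uniformly over the greedy actions of surviving regressors when
    # they disagree; otherwise exploit the common greedy action.
    greedy = {int(np.argmax(predict(candidates[i], x))) for i in active}
    a = rng.choice(list(greedy)) if len(greedy) > 1 else greedy.pop()
    # Observe a noisy reward from the true function.
    r = float(true_w[a] @ x + 0.1 * rng.normal())
    total_reward += r
    # Update each surviving regressor's squared loss on (x, a, r).
    for i in active:
        sq_loss[i] += (predict(candidates[i], x)[a] - r) ** 2
    # Eliminate regressors whose loss exceeds the best by a margin; this
    # threshold is an arbitrary assumption, not the paper's confidence bound.
    best = min(sq_loss[i] for i in active)
    active = [i for i in active if sq_loss[i] <= best + 10 * np.sqrt(t)]

print(f"surviving regressors: {active}, avg reward: {total_reward / T:.3f}")
```

Under realizability, the true regressor's loss concentrates around the noise variance, so it survives elimination while badly mispredicting regressors are discarded; the paper's analysis replaces the crude threshold above with carefully derived deviation bounds.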

Original language: English (US)
Pages (from-to): 19-26
Number of pages: 8
Journal: Journal of Machine Learning Research
Volume: 22
State: Published - 2012
Event: 15th International Conference on Artificial Intelligence and Statistics, AISTATS 2012 - La Palma, Spain
Duration: Apr 21, 2012 to Apr 23, 2012

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability
