Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition

Chi Jin, Tiancheng Jin, Haipeng Luo, Suvrit Sra, Tiancheng Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We consider the task of learning in episodic finite-horizon Markov decision processes with an unknown transition function, bandit feedback, and adversarial losses. We propose an efficient algorithm that achieves O(L|X|√(|A|T)) regret with high probability, where L is the horizon, |X| the number of states, |A| the number of actions, and T the number of episodes. To our knowledge, our algorithm is the first to ensure O(√T) regret in this challenging setting; in fact, it achieves the same regret as (Rosenberg and Mansour, 2019a), who consider the easier setting with full information. Our key contributions are two-fold: a tighter confidence set for the transition function, and an optimistic loss estimator that is inversely weighted by an upper occupancy bound.
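The abstract's second contribution — a loss estimator inversely weighted by an upper occupancy bound — can be illustrated with a minimal sketch. This is not the paper's implementation; the function and argument names (`estimate_loss`, `upper_occupancy`) are illustrative, and the snippet only shows the generic importance-weighting idea: dividing an observed loss by an upper bound on the occupancy probability keeps the estimate an underestimate in expectation, which is what makes it "optimistic".

```python
def estimate_loss(observed_loss, visited, upper_occupancy):
    """Sketch of an inverse-occupancy-weighted loss estimate.

    If the (state, action) pair was visited this episode, scale its
    observed loss by 1 / upper_occupancy, where upper_occupancy is an
    upper bound on the probability of visiting that pair. Unvisited
    pairs contribute zero. Over-weighting the denominator (an upper
    bound rather than the true probability) biases the estimate
    downward, i.e. optimistically.
    """
    if not visited:
        return 0.0
    return observed_loss / upper_occupancy

# Example: a loss of 0.5 observed at a pair whose occupancy
# probability is at most 0.25 yields an estimate of 2.0.
print(estimate_loss(0.5, True, 0.25))
```

Because the true occupancy is at most `upper_occupancy`, the expected value of this estimate never exceeds the true expected loss, which is the sense in which the estimator is optimistic.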

Original language: English (US)
Title of host publication: 37th International Conference on Machine Learning, ICML 2020
Editors: Hal Daume, Aarti Singh
Publisher: International Machine Learning Society (IMLS)
Pages: 4810-4819
Number of pages: 10
ISBN (Electronic): 9781713821120
State: Published - 2020
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: Jul 13 2020 – Jul 18 2020

Publication series

Name: 37th International Conference on Machine Learning, ICML 2020
Volume: PartF168147-7

Conference

Conference: 37th International Conference on Machine Learning, ICML 2020
City: Virtual, Online
Period: 7/13/20 – 7/18/20

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software
