A Contraction Approach to Model-based Reinforcement Learning

Ting Han Fan, Peter J. Ramadge

Research output: Contribution to journal › Conference article › peer-review

Abstract

Despite its experimental success, Model-based Reinforcement Learning still lacks a complete theoretical understanding. To this end, we analyze the error in the cumulative reward using a contraction approach. We consider both stochastic and deterministic state transitions for continuous (non-discrete) state and action spaces. This approach does not require strong assumptions and recovers the typical error bound that is quadratic in the horizon. We prove that branched rollouts can reduce this error and are essential for deterministic transitions to yield a Bellman contraction. Our analysis of policy mismatch error also applies to Imitation Learning, where we show that GAN-type learning has an advantage over Behavioral Cloning when its discriminator is well-trained.
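For context, the "quadratic error in the horizon" mentioned above is the standard simulation-lemma-style bound on the cumulative-reward gap between the true dynamics and a learned model. The sketch below is a point of reference only, not the paper's theorem; the notation (horizon H, reward bound R_max, per-step model error ε_m, cumulative reward J_M) is assumed here rather than taken from the paper.

```latex
% Minimal sketch (assumed notation, not the paper's statement) of the
% standard quadratic-in-horizon model-error bound the abstract alludes to.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Assume $|r(s,a)| \le R_{\max}$ and a per-step model error
$\sup_{s,a} D_{\mathrm{TV}}\!\big(p(\cdot \mid s,a),\, \hat{p}(\cdot \mid s,a)\big) \le \epsilon_m$.
Writing $J_M(\pi)$ for the expected cumulative reward of policy $\pi$ over
horizon $H$ under dynamics $M$, a telescoping (simulation-lemma style)
argument gives
\begin{equation*}
\big| J_{\hat{M}}(\pi) - J_{M}(\pi) \big| \;\le\; 2\, R_{\max}\, H^{2}\, \epsilon_m ,
\end{equation*}
i.e.\ the cumulative-reward error grows quadratically with the horizon $H$.
Bounds of this type are what the paper recovers via contraction, and branched
rollouts reduce the effective horizon of the model-generated portion.
\end{document}
```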

Original language: English (US)
Pages (from-to): 325-333
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 130
State: Published - 2021
Externally published: Yes
Event: 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021 - Virtual, Online, United States
Duration: Apr 13 2021 to Apr 15 2021

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
