Invariant Policy Optimization: Towards Stronger Generalization in Reinforcement Learning

Anoopkumar Sonar, Vincent Pacelli, Anirudha Majumdar

Research output: Contribution to journal › Conference article › peer-review

19 Scopus citations

Abstract

A fundamental challenge in reinforcement learning is to learn policies that generalize beyond the operating domains experienced during training. In this paper, we approach this challenge through the following invariance principle: an agent must find a representation such that there exists an action-predictor built on top of this representation that is simultaneously optimal across all training domains. Intuitively, the resulting invariant policy enhances generalization by finding causes of successful actions. We propose a novel learning algorithm, Invariant Policy Optimization (IPO), that implements this principle and learns an invariant policy during training. We compare our approach with standard policy gradient methods and demonstrate significant improvements in generalization performance on unseen domains for linear quadratic regulator and grid-world problems, and an example where a robot must learn to open doors with varying physical properties.
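
The invariance principle described in the abstract parallels invariant risk minimization (IRM): the representation is penalized whenever a single action-predictor built on top of it is not simultaneously optimal across all training domains. The sketch below illustrates one way such a penalty can be combined with a per-domain policy-gradient loss in PyTorch. It is a minimal illustration under that assumption only, not the authors' implementation; the names PolicyNet, domain_pg_loss, ipo_style_loss, and penalty_weight are hypothetical.

# Hedged sketch: IRM-style invariance penalty on top of a policy-gradient loss.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Representation phi followed by an action head (the 'predictor')."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh())
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        # Returns action logits for a discrete action space.
        return self.head(self.phi(obs))

def domain_pg_loss(logits, actions, returns, dummy=1.0):
    """REINFORCE-style surrogate loss for one training domain.
    `dummy` scales the logits so the loss can be differentiated with
    respect to a fixed scalar 'classifier', as in the IRM penalty."""
    logp = torch.log_softmax(dummy * logits, dim=-1)
    chosen = logp.gather(1, actions.unsqueeze(-1)).squeeze(-1)
    return -(chosen * returns).mean()

def ipo_style_loss(policy, batches, penalty_weight=10.0):
    """Sum of per-domain losses plus an invariance penalty: the squared
    gradient of each domain's loss w.r.t. a dummy scale on the head."""
    total, penalty = 0.0, 0.0
    for obs, actions, returns in batches:  # one (obs, actions, returns) tuple per domain
        dummy = torch.tensor(1.0, requires_grad=True)
        loss = domain_pg_loss(policy(obs), actions, returns, dummy)
        grad = torch.autograd.grad(loss, dummy, create_graph=True)[0]
        total = total + loss
        penalty = penalty + grad.pow(2)
    return total + penalty_weight * penalty

In a training loop, this combined loss would be minimized in place of the usual policy-gradient loss. The squared gradient term grows whenever rescaling the shared head could further improve some individual domain's loss, so minimizing it pushes the representation toward features under which the same action-predictor is simultaneously optimal across all training domains.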

Original language: English (US)
Pages (from-to): 21-33
Number of pages: 13
Journal: Proceedings of Machine Learning Research
Volume: 144
State: Published - 2021
Event: 3rd Annual Conference on Learning for Dynamics and Control, L4DC 2021 - Virtual, Online, Switzerland
Duration: Jun 7 2021 - Jun 8 2021

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

Keywords

  • Causality
  • Generalization
  • Invariance
  • Reinforcement Learning
