f-IRL: Inverse Reinforcement Learning via State Marginal Matching

Tianwei Ni, Harshit Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, Benjamin Eysenbach

Research output: Contribution to journal › Conference article › peer-review

11 Scopus citations


Imitation learning is well-suited for robotic tasks where it is difficult to directly program the behavior or specify a cost for optimal control. In this work, we propose a method for learning the reward function (and the corresponding policy) to match the expert state density. Our main result is the analytic gradient of any f-divergence between the agent and expert state distribution w.r.t. reward parameters. Based on the derived gradient, we present an algorithm, f-IRL, that recovers a stationary reward function from the expert density by gradient descent. We show that f-IRL can learn behaviors from a hand-designed target state density or implicitly through expert observations. Our method outperforms adversarial imitation learning methods in terms of sample efficiency and the required number of expert trajectories on IRL benchmarks. Moreover, we show that the recovered reward function can be used to quickly solve downstream tasks, and empirically demonstrate its utility on hard-to-explore tasks and for behavior transfer across changes in dynamics.
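The abstract's core computation is to differentiate an f-divergence between the expert and agent state marginals with respect to the reward parameters, then descend that gradient. The paper derives the exact analytic form; the following is only a minimal, hypothetical PyTorch sketch of one reward-update step. All names here (RewardNet, log_ratio_fn, h_f) are illustrative assumptions, not the authors' reference implementation, and the density ratio ρ_E/ρ_θ is assumed to come from separately estimated density models of the two state marginals.

```python
import torch

# Hypothetical reward network r_theta(s). Illustrative only.
class RewardNet(torch.nn.Module):
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(state_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, states):  # states: (T, state_dim)
        return self.net(states).squeeze(-1)


def firl_reward_step(reward_net, optimizer, trajs, log_ratio_fn, h_f):
    """One reward-update step in the spirit of f-IRL (sketch, not the paper's code).

    trajs:        list of state tensors, each (T, state_dim), rolled out
                  with the current policy trained on the current reward.
    log_ratio_fn: assumed callable returning log rho_E(s) - log rho_theta(s)
                  per state, e.g. from density estimates of both marginals.
    h_f:          scalar function determined by the chosen f-divergence
                  (see the paper for the exact forms per divergence).
    """
    hs, returns = [], []
    for states in trajs:
        # Per-trajectory sum of h_f applied to the state density ratio;
        # treated as a constant w.r.t. the reward parameters.
        with torch.no_grad():
            ratio = torch.exp(log_ratio_fn(states))  # rho_E / rho_theta
            hs.append(h_f(ratio).sum())
        # Per-trajectory sum of rewards, which does carry gradients.
        returns.append(reward_net(states).sum())
    hs = torch.stack(hs)
    returns = torch.stack(returns)

    # Covariance-style surrogate: differentiating this scalar w.r.t. the
    # reward parameters gives a sample estimate of a gradient of the
    # f-divergence, in the spirit of the analytic gradient in the abstract.
    loss = ((hs - hs.mean()) * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the full algorithm this reward step would alternate with policy optimization (e.g. a maximum-entropy RL step) on the current reward, so that the agent's state marginal ρ_θ is re-sampled before each update.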

Original language: English (US)
Pages (from-to): 529-551
Number of pages: 23
Journal: Proceedings of Machine Learning Research
State: Published - 2020
Externally published: Yes
Event: 4th Conference on Robot Learning, CoRL 2020 - Virtual, Online, United States
Duration: Nov 16, 2020 - Nov 18, 2020

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability


Keywords

  • Imitation Learning
  • Inverse Reinforcement Learning

