Generalization Guarantees for Imitation Learning

Allen Z. Ren, Sushant Veer, Anirudha Majumdar

Research output: Contribution to journal › Conference article › peer-review

14 Scopus citations

Abstract

Control policies from imitation learning can often fail to generalize to novel environments due to imperfect demonstrations or the inability of imitation learning algorithms to accurately infer the expert's policies. In this paper, we present rigorous generalization guarantees for imitation learning by leveraging the Probably Approximately Correct (PAC)-Bayes framework to provide upper bounds on the expected cost of policies in novel environments. We propose a two-stage training method where a latent policy distribution is first embedded with multi-modal expert behavior using a conditional variational autoencoder, and then “fine-tuned” in new training environments to explicitly optimize the generalization bound. We demonstrate strong generalization bounds and their tightness relative to empirical performance in simulation for (i) grasping diverse mugs, (ii) planar pushing with visual feedback, and (iii) vision-based indoor navigation, as well as through hardware experiments for the two manipulation tasks.
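For reference, PAC-Bayes frameworks of this kind bound the expected cost of a distribution $P$ over policies on novel environments by its empirical cost on $N$ training environments plus a regularization term. The following is a minimal sketch of the standard McAllester-style bound (prior $P_0$, cost $C \in [0,1]$, confidence $1-\delta$); it illustrates the general form of such guarantees and may differ from the exact bound optimized in the paper:

$$
\mathbb{E}_{E \sim \mathcal{D}}\, \mathbb{E}_{\pi \sim P}\!\left[ C(\pi; E) \right]
\;\le\;
\frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{\pi \sim P}\!\left[ C(\pi; E_i) \right]
\;+\;
\sqrt{\frac{\mathrm{KL}(P \,\|\, P_0) + \ln \frac{2\sqrt{N}}{\delta}}{2N}}
$$

Here $E_1, \dots, E_N$ are training environments drawn i.i.d. from an unknown environment distribution $\mathcal{D}$, and the inequality holds with probability at least $1-\delta$ over the draw of those environments. Reading the abstract's two-stage method in these terms (an interpretation, not a statement from the source), the distribution learned from expert demonstrations via the conditional variational autoencoder would play the role of the prior $P_0$, and the "fine-tuning" stage would optimize $P$ against the right-hand side.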

Original language: English (US)
Pages (from-to): 1426-1442
Number of pages: 17
Journal: Proceedings of Machine Learning Research
Volume: 155
State: Published - 2020
Event: 4th Conference on Robot Learning, CoRL 2020 - Virtual, Online, United States
Duration: Nov 16, 2020 – Nov 18, 2020

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

Keywords

  • Generalization
  • Imitation learning
  • Indoor navigation
  • Manipulation
