GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling

  • Rohan Chitnis
  • Tom Silver
  • Joshua Tenenbaum
  • Leslie Pack Kaelbling
  • Tomás Lozano-Pérez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

16 Scopus citations

Abstract

We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards. Inspired by human curiosity, we propose goal-literal babbling (GLIB), a simple and general method for exploration in such problems. GLIB samples relational conjunctive goals that can be understood as specific, targeted effects that the agent would like to achieve in the world, and plans to achieve these goals using the transition model being learned. We provide theoretical guarantees showing that exploration with GLIB will converge almost surely to the ground truth model. Experimentally, we find GLIB to strongly outperform existing methods in both prediction and planning on a range of tasks, encompassing standard PDDL and PPDDL planning benchmarks and a robotic manipulation task implemented in the PyBullet physics simulator. Video: https://youtu.be/F6lmrPT6TOY Code: https://git.io/JIsTB.
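The exploration loop described in the abstract — babble a conjunctive goal of literals, plan toward it under the learned transition model, and fall back to a random action when no plan is found — can be sketched as below. This is a minimal toy illustration under assumed names, not the authors' implementation: the string-literal domain, the tiny forward-search planner, and the functions `plan` and `glib_action` are all hypothetical stand-ins for the paper's PDDL-style operators, learned model, and planner.

```python
import random

# Toy relational domain (hypothetical): literals are strings; each action
# maps a precondition set to an added-effect set, standing in for the
# learned STRIPS-style transition model.
ACTIONS = {
    "pick(a)":  ({"clear(a)"}, {"holding(a)"}),
    "place(a)": ({"holding(a)"}, {"on(a,b)"}),
}

def plan(state, goal, actions, depth=3):
    """Tiny depth-limited forward search under the (learned) action model."""
    if goal <= state:
        return []          # goal already satisfied: empty plan
    if depth == 0:
        return None        # search horizon exhausted
    for name, (pre, eff) in actions.items():
        if pre <= state and not eff <= state:  # applicable, non-no-op
            rest = plan(state | eff, goal, actions, depth - 1)
            if rest is not None:
                return [name] + rest
    return None

def glib_action(state, all_literals, actions, max_goal_size=2):
    """One GLIB-style exploration step: babble a goal, plan to it,
    and fall back to a random action if no babbled goal is reachable."""
    candidates = [l for l in all_literals if l not in state]
    for _ in range(10):  # retry a few babbled goals
        k = min(max_goal_size, len(candidates))
        goal = set(random.sample(candidates, k))
        p = plan(state, goal, actions)
        if p:
            return p[0]  # execute the first step toward the babbled goal
    return random.choice(list(actions))  # fallback: random exploration
```

For example, from the state `{"clear(a)"}`, babbling the goal `{"on(a,b)"}` yields the plan `["pick(a)", "place(a)"]`, so the agent executes `pick(a)` — a targeted action it would be unlikely to stumble on by acting randomly, which is the intuition behind goal-literal babbling.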

Original language: English (US)
Title of host publication: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Publisher: Association for the Advancement of Artificial Intelligence
Pages: 11782-11791
Number of pages: 10
ISBN (Electronic): 9781713835974
State: Published - 2021
Externally published: Yes
Event: 35th AAAI Conference on Artificial Intelligence, AAAI 2021 - Virtual, Online
Duration: Feb 2 2021 - Feb 9 2021

Publication series

Name: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Volume: 13B

Conference

Conference: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
City: Virtual, Online
Period: 2/2/21 - 2/9/21

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
