From Pixels to Predicates: Learning Symbolic World Models via Pretrained VLMs

  • Ashay Athalye
  • Nishanth Kumar
  • Tom Silver
  • Yichao Liang
  • Jiuguang Wang
  • Tomas Lozano-Perez
  • Leslie Pack Kaelbling

Research output: Contribution to journal › Article › peer-review

Abstract

Our aim is to learn to solve long-horizon decision-making problems in complex robotics domains given low-level skills and a handful of demonstrations containing sequences of images. To this end, we focus on learning abstract symbolic world models that facilitate zero-shot generalization to novel goals via planning. A critical component of such models is the set of symbolic predicates that define properties of and relationships between objects. In this work, we leverage pretrained vision-language models (VLMs) to propose a large set of visual predicates potentially relevant for decision-making, and to evaluate those predicates directly from camera images. At training time, we pass the proposed predicates and demonstrations into an optimization-based model-learning algorithm to obtain an abstract symbolic world model that is defined in terms of a compact subset of the proposed predicates. At test time, given a novel goal in a novel setting, we use the VLM to construct a symbolic description of the current world state, and then use a search-based planning algorithm to find a sequence of low-level skills that achieves the goal. We demonstrate empirically across experiments in both simulation and the real world that our method can generalize aggressively, applying its learned world model to solve problems with varying visual backgrounds, types, numbers, and arrangements of objects, as well as novel goals and much longer horizons than those seen at training time.
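
To make the pipeline in the abstract concrete, the sketch below shows one possible way its three stages could be wired together: grounding camera images into symbolic states with VLM-evaluated predicates, selecting a compact predicate subset from demonstrations, and planning over learned skill operators at test time. All names here (Predicate, Operator, select_predicates, plan) are hypothetical illustrations rather than the authors' code; the brute-force subset search stands in for the paper's optimization-based model learner, and the breadth-first search stands in for its search-based planner.

```python
from collections import deque
from dataclasses import dataclass
from itertools import combinations
from typing import Callable, FrozenSet, List, Optional, Sequence, Set


@dataclass(frozen=True)
class Predicate:
    """A named visual property, evaluated on an image by querying a VLM."""
    name: str
    holds: Callable[[object], bool]  # True iff the predicate holds in the image


@dataclass(frozen=True)
class Operator:
    """A symbolic model of one low-level skill: preconditions and effects."""
    skill: str
    preconditions: FrozenSet[str]
    add_effects: FrozenSet[str]
    delete_effects: FrozenSet[str]


def abstract_state(predicates: Sequence[Predicate], image: object) -> FrozenSet[str]:
    """Ground an image into a symbolic state: the set of predicates that hold."""
    return frozenset(p.name for p in predicates if p.holds(image))


def select_predicates(candidates: Sequence[Predicate],
                      score: Callable[[Sequence[Predicate]], float],
                      max_size: int = 3) -> List[Predicate]:
    """Choose a compact predicate subset minimizing a model-fit score.

    Stand-in for the optimization-based model learner: exhaustively scores
    small subsets of the VLM-proposed predicates against the demonstrations.
    """
    best: List[Predicate] = []
    best_score = float("inf")
    for k in range(1, max_size + 1):
        for subset in combinations(candidates, k):
            s = score(subset)
            if s < best_score:
                best, best_score = list(subset), s
    return best


def plan(init: FrozenSet[str], goal: Set[str],
         operators: Sequence[Operator]) -> Optional[List[str]]:
    """Breadth-first search in the abstract state space for a skill sequence."""
    frontier = deque([(init, [])])
    visited = {init}
    while frontier:
        state, skills = frontier.popleft()
        if goal <= state:
            return skills
        for op in operators:
            if op.preconditions <= state:
                nxt = (state - op.delete_effects) | op.add_effects
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, skills + [op.skill]))
    return None  # no plan found within the learned abstraction
```

Under this interface, a test-time episode would call abstract_state on the current camera image, plan toward the symbolic goal, and execute the returned skill names with the corresponding low-level controllers; the actual system additionally handles object typing and lifted operators, which this sketch omits.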

Original language: English (US)
Pages (from-to): 4002-4009
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 11
Issue number: 4
State: Published - 2026

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence

Keywords

  • Symbolic world models
  • robot learning
  • task and motion planning
  • vision-language models
