🤖 AI Summary
Problem: In long-horizon robotic decision-making, a critical gap exists between low-level visual perception and high-level symbolic planning, leading to poor generalization, especially under few-shot or zero-shot conditions. Method: This paper proposes an end-to-end visual predicate invention framework that leverages pretrained vision-language models (VLMs) to generate interpretable, generalizable semantic predicates directly from raw pixels, bypassing explicit object detection and hand-engineered object-centric representations. It combines VLM-driven predicate proposal with VLM-based truth-value evaluation, extends an existing feature-based predicate-invention formalism to visual predicates, and introduces the pix2pred mechanism for pixel-level predicate generation, yielding a lightweight abstraction that bridges images and symbolic planning. Contribution/Results: Evaluated in two simulated robotic environments, the framework autonomously invents semantically coherent predicates and significantly improves zero-shot generalization to novel, complex, long-horizon tasks, demonstrating robust cross-task transfer without task-specific supervision.
📝 Abstract
Our aim is to learn to solve long-horizon decision-making problems in highly variable, combinatorially complex robotics domains given raw sensor input in the form of images. Previous work has shown that one way to achieve this aim is to learn a structured abstract transition model in the form of symbolic predicates and operators, and then plan within this model to solve novel tasks at test time. However, these learned models do not ground directly into pixels from just a handful of demonstrations. In this work, we propose to invent predicates that operate directly over input images by leveraging the capabilities of pretrained vision-language models (VLMs). Our key idea is that, given a set of demonstrations, a VLM can be used to propose a set of predicates that are potentially relevant for decision-making, and then to determine the truth values of these predicates in both the given demonstrations and new image inputs. We build upon an existing framework for predicate invention, which generates feature-based predicates operating on object-centric states, to also generate visual predicates that operate on images. Experimentally, we show that our approach -- pix2pred -- is able to invent semantically meaningful predicates that enable generalization to novel, complex, and long-horizon tasks across two simulated robotic environments.
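The propose-then-evaluate loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`vlm_propose`, `vlm_evaluate`) and the toy string-matching "VLM" are hypothetical stand-ins for real calls to a pretrained vision-language model, which would take actual image data and return predicate proposals and yes/no answers.

```python
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class Predicate:
    """A candidate symbolic predicate, e.g. 'Holding(block)'."""
    name: str

def vlm_propose(demo_images: List[str]) -> List[Predicate]:
    # Hypothetical: a real system would prompt a VLM with demonstration
    # frames and ask for candidate predicates relevant to decision-making.
    return [Predicate("Holding(block)"), Predicate("On(block, table)")]

def vlm_evaluate(pred: Predicate, image: str) -> bool:
    # Hypothetical: a real system would ask the VLM a yes/no question
    # about the image. Here a toy stand-in checks whether the predicate
    # name is tagged in the image identifier string.
    return pred.name in image

def abstract_state(image: str, predicates: List[Predicate]) -> FrozenSet[str]:
    """Map a raw image to the set of invented predicates that hold in it;
    a symbolic planner would then operate over these abstract states."""
    return frozenset(p.name for p in predicates if vlm_evaluate(p, image))

# Toy demonstration frames, with the true predicates encoded in the id.
demos = ["frame0:Holding(block)", "frame1:On(block, table)"]
preds = vlm_propose(demos)
print(abstract_state(demos[0], preds))  # frozenset({'Holding(block)'})
```

The key design point this illustrates is that the VLM is used twice: once offline to propose a predicate vocabulary from demonstrations, and again at evaluation time to ground each predicate's truth value in a new image, so no explicit object detector is needed.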