From Reals to Logic and Back: Inventing Symbolic Vocabularies, Actions, and Models for Planning from Raw Data

📅 2024-02-19
🏛️ arXiv.org
📈 Citations: 9
Influential: 0
📄 PDF
🤖 AI Summary
Problem: Robots struggle to acquire transferable relational concepts from a few unannotated, unsegmented demonstrations, resulting in poor zero-shot generalization to long-horizon, highly complex novel tasks. Method: An end-to-end "real-valued → logical → planning" closed-loop framework that integrates relational learning, unsupervised representation disentanglement, symbolic induction, and differentiable logical reasoning. It autonomously discovers relational symbolic vocabularies, action semantics, and PDDL-like domain models directly from raw continuous trajectories. Contribution: This is the first approach to generate high-level relational concepts fully autonomously, without manual abstraction; the uncovered implicit action structures go beyond classical priors. From only a few demonstration trajectories it constructs high-fidelity abstract models, significantly improving scalability and success rates for joint long-horizon task and motion planning in deterministic environments.

📝 Abstract
Hand-crafted, logic-based state and action representations have been widely used to overcome the intractable computational complexity of long-horizon robot planning problems, including task and motion planning problems. However, creating such representations requires experts with strong intuitions and detailed knowledge about the robot and the tasks it may need to accomplish in a given setting. Removing this dependency on human intuition is a highly active research area. This paper presents the first approach for autonomously learning generalizable, logic-based relational representations for abstract states and actions, starting from unannotated, high-dimensional, real-valued robot trajectories. The learned representations constitute auto-invented PDDL-like domain models. Empirical results in deterministic settings show that powerful abstract representations can be learned from just a handful of robot trajectories; that the learned relational representations include, but go beyond, classical, intuitive notions of high-level actions; and that the learned models allow planning algorithms to scale to tasks that were previously beyond the scope of planning without hand-crafted abstractions.
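To make "auto-invented PDDL-like domain models" concrete, the sketch below shows one plausible in-memory form of a learned action schema. Everything here is illustrative, not taken from the paper: the `Operator` class, the operator name `act_2`, and the predicate symbols `p3`/`p7` are hypothetical stand-ins for the unnamed symbols such a method would invent.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    """A PDDL-like action schema over auto-invented predicates (illustrative only)."""
    name: str
    params: list           # typed parameters, e.g. ["?o - object", "?r - region"]
    preconditions: list    # literals that must hold, e.g. ["(p3 ?o ?r)"]
    effects: list          # add/delete literals, e.g. ["(not (p3 ?o ?r))"]

    def to_pddl(self) -> str:
        # Render the schema in standard PDDL :action syntax.
        return (
            f"(:action {self.name}\n"
            f"  :parameters ({' '.join(self.params)})\n"
            f"  :precondition (and {' '.join(self.preconditions)})\n"
            f"  :effect (and {' '.join(self.effects)}))"
        )

# Hypothetical learned operator: p3 and p7 stand in for invented symbols
# that carry no human-given meaning, only meaning grounded in the trajectories.
op = Operator(
    name="act_2",
    params=["?o - object", "?r - region"],
    preconditions=["(p3 ?o ?r)"],
    effects=["(not (p3 ?o ?r))", "(p7 ?o)"],
)
print(op.to_pddl())
```

The point of the data structure is that a symbolic planner can consume the rendered schema directly, even though no human ever named `p3` or `p7`.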
Problem

Research questions and friction points this paper is trying to address.

Enabling robots to generalize from limited demonstrations to complex unseen tasks
Autonomously inventing relational concepts from unannotated robot demonstrations
Grounding learned symbolic concepts in logic-based world models for zero-shot generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autonomous invention of relational concepts
Grounding of learned symbols in logic-based world models
Zero-shot generalization to complex tasks