Sketch-Plan-Generalize: Continual Few-Shot Learning of Inductively Generalizable Spatial Concepts

📅 2024-04-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Embodied agents struggle to learn spatial concepts (e.g., “staircase”) from a single demonstration, generalize them inductively, compose them hierarchically, and adapt to novel constraints in robotic collaboration. LLM-only and purely neural approaches fall short due to weak generalization, while neuro-symbolic methods suffer from inefficient demonstration-guided program search. Method: We propose a Sketch-Plan-Generalize neuro-symbolic framework: (1) sketch-based spatial semantic extraction; (2) embodied-perception-guided program planning via Monte Carlo Tree Search (MCTS); and (3) program abstraction and induction for modular concept reuse and zero-shot generalization to complex structures. The framework integrates LLM-based code generation with grounded visual representations. Results: Our method significantly outperforms LLM-only and neural-only baselines, achieving strong inductive generalization across scales and structural complexities, and further enables embodied instruction reasoning and execution grounded in learned spatial concepts.

📝 Abstract
Our goal is to enable embodied agents to learn inductively generalizable spatial concepts, e.g., learning staircase as an inductive composition of towers of increasing height. Given a human demonstration, we seek a learning architecture that infers a succinct *program* representation that explains the observed instance. Additionally, the approach should generalize inductively to novel structures of different sizes or complex structures expressed as a hierarchical composition of previously learned concepts. Existing approaches that use code generation capabilities of pre-trained large (visual) language models, as well as purely neural models, show poor generalization to a priori unseen complex concepts. Our key insight is to factor inductive concept learning as (i) *Sketch*: detecting and inferring a coarse signature of a new concept, (ii) *Plan*: performing MCTS search over grounded action sequences, and (iii) *Generalize*: abstracting out grounded plans as inductive programs. Our pipeline facilitates generalization and modular reuse, enabling continual concept learning. Our approach combines the code generation ability of large language models (LLMs) with grounded neural representations, resulting in neuro-symbolic programs that show stronger inductive generalization on the task of constructing complex structures in relation to LLM-only and neural-only approaches. Furthermore, we demonstrate reasoning and planning capabilities with learned concepts for embodied instruction following.
Problem

Research questions and friction points this paper is trying to address.

Learning personalized concepts from few demonstrations
Generalizing to unseen complex spatial concepts
Guiding neuro-symbolic search using human demonstrations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sketch coarse concept signatures from demonstrations
Plan with MCTS guided by human demonstrations
Generalize grounded plans into inductive programs
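To make the Generalize step concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of the kind of inductive program the pipeline abstracts from grounded plans: a learned `tower` concept reused inside a `staircase` concept, matching the paper's running example of a staircase as an inductive composition of towers of increasing height. The `place_block` action name and coordinate scheme are illustrative assumptions.

```python
# Hypothetical sketch of an abstracted inductive program.
# "place_block" and the (column, level) coordinates are assumptions,
# standing in for whatever grounded action primitives the agent uses.

def tower(base_x, height):
    """A learned concept: stack `height` blocks in column `base_x`."""
    return [("place_block", base_x, level) for level in range(height)]

def staircase(n_steps):
    """A staircase as an inductive composition of towers of increasing height."""
    plan = []
    for i in range(n_steps):
        plan.extend(tower(base_x=i, height=i + 1))
    return plan

print(len(staircase(3)))  # 1 + 2 + 3 = 6 block placements
```

Because the program is parameterized by `n_steps` rather than tied to one demonstrated instance, the same code generalizes across scales (e.g., `staircase(10)` yields 55 placements with no retraining), and `staircase` can itself be reused as a module inside more complex concepts.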
Namasivayam Kalithasan
IIT Delhi
Sachit Sachdeva
IIT Delhi
H. Singh
Work done when at IIT Delhi
Vishal Bindal
Work done when at IIT Delhi
Arnav Tuli
Work done when at IIT Delhi
Gurarmaan Singh Panjeta
IIT Delhi
Divyanshu Aggarwal
IIT Delhi
Rohan Paul
IIT Delhi
Parag Singla
Indian Institute of Technology Delhi
Neuro-Symbolic Reasoning · Machine Learning · Artificial Intelligence