Compose by Focus: Scene Graph-based Atomic Skills

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
General-purpose robots face a combinatorial generalization bottleneck: pretrained skills exhibit insufficient robustness under distribution shifts induced by novel combinations of environmental scenes. To address this, we propose a scene-graph-based atomic skill learning framework that explicitly models task-critical objects and their relational structure via attention mechanisms and graph neural networks (GNNs), thereby enhancing skill adaptability to environmental variations. We further integrate a vision-language-model-driven high-level planner, establishing a synergistic paradigm of “structure-aware skill execution” and “semantically guided task decomposition.” The system unifies GNNs, diffusion-model-based imitation learning, and multimodal large language models into an end-to-end scene-graph skill learning and planning architecture. Evaluated on long-horizon manipulation tasks in both simulation and real-world settings, our approach significantly outperforms state-of-the-art methods, achieving substantial gains in task success rate and demonstrating superior robustness and combinatorial generalization.

📝 Abstract
A key requirement for generalist robots is compositional generalization - the ability to combine atomic skills to solve complex, long-horizon tasks. While prior work has primarily focused on synthesizing a planner that sequences pre-learned skills, robust execution of the individual skills themselves remains challenging, as visuomotor policies often fail under distribution shifts induced by scene composition. To address this, we introduce a scene graph-based representation that focuses on task-relevant objects and relations, thereby mitigating sensitivity to irrelevant variation. Building on this idea, we develop a scene-graph skill learning framework that integrates graph neural networks with diffusion-based imitation learning, and further combine "focused" scene-graph skills with a vision-language model (VLM) based task planner. Experiments in both simulation and real-world manipulation tasks demonstrate substantially higher success rates than state-of-the-art baselines, highlighting improved robustness and compositional generalization in long-horizon tasks.
Problem

Research questions and friction points this paper is trying to address.

Addressing compositional generalization in generalist robots
Mitigating visuomotor policy failures under scene composition shifts
Integrating scene-graph skills with VLM-based task planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scene graph-based representation focused on task-relevant objects and relations
Graph neural networks integrated with diffusion-based imitation learning
Combining scene-graph skills with a VLM-based task planner
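The core idea can be illustrated with a minimal sketch (not the paper's implementation): encode a task-focused scene graph with one round of GNN message passing, then pool the node embeddings into a conditioning vector for a downstream policy. All names, feature sizes, and the numpy-based layer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nodes: task-relevant objects only (irrelevant clutter is excluded).
# Each node feature could be a pose plus an object-type embedding.
nodes = {
    "gripper": rng.normal(size=8),
    "mug":     rng.normal(size=8),
    "shelf":   rng.normal(size=8),
}
# Directed edges encode spatial/task relations between object pairs.
edges = [("gripper", "mug"), ("mug", "shelf")]

W_msg = rng.normal(size=(8, 8)) * 0.1    # message transform
W_upd = rng.normal(size=(8, 16)) * 0.1   # node-update transform

def message_passing(nodes, edges):
    """One GNN layer: aggregate neighbor messages, then update each node."""
    agg = {name: np.zeros(8) for name in nodes}
    for src, dst in edges:
        agg[dst] += np.tanh(W_msg @ nodes[src])
    return {
        name: np.tanh(W_upd @ np.concatenate([h, agg[name]]))
        for name, h in nodes.items()
    }

h = message_passing(nodes, edges)
# Mean-pool node embeddings into a single conditioning vector that a
# diffusion policy could consume alongside proprioception.
graph_embedding = np.mean(list(h.values()), axis=0)
print(graph_embedding.shape)
```

Because the representation only contains task-relevant objects, varying the surrounding scene leaves the graph (and thus the policy input) unchanged, which is the claimed source of robustness to scene-composition shifts.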
Han Qi
School of Engineering and Applied Sciences, Harvard University
Changhe Chen
University of Michigan
Heng Yang
School of Engineering and Applied Sciences, Harvard University
Topics: Robotics, Embodied AI, Manipulation, Autonomous Driving