🤖 AI Summary
General-purpose robots face a compositional generalization bottleneck: pretrained skills lack robustness under the distribution shifts induced by novel scene compositions. To address this, we propose a scene-graph-based atomic skill learning framework that explicitly models task-critical objects and their relational structure via attention mechanisms and graph neural networks (GNNs), enhancing skill adaptability to environmental variation. We further integrate a vision-language-model-driven high-level planner, pairing structure-aware skill execution with semantically guided task decomposition. The system unifies GNNs, diffusion-based imitation learning, and multimodal large language models into an end-to-end scene-graph skill learning and planning architecture. On long-horizon manipulation tasks in both simulation and the real world, our approach significantly outperforms state-of-the-art methods, achieving substantial gains in task success rate and demonstrating superior robustness and compositional generalization.
📝 Abstract
A key requirement for generalist robots is compositional generalization: the ability to combine atomic skills to solve complex, long-horizon tasks. While prior work has primarily focused on synthesizing a planner that sequences pre-learned skills, robust execution of the individual skills themselves remains challenging, as visuomotor policies often fail under distribution shifts induced by scene composition. To address this, we introduce a scene-graph-based representation that focuses on task-relevant objects and relations, thereby mitigating sensitivity to irrelevant variation. Building on this idea, we develop a scene-graph skill learning framework that integrates graph neural networks with diffusion-based imitation learning, and further combine "focused" scene-graph skills with a vision-language model (VLM) based task planner. Experiments on both simulated and real-world manipulation tasks demonstrate substantially higher success rates than state-of-the-art baselines, highlighting improved robustness and compositional generalization on long-horizon tasks.
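To make the core idea concrete, the sketch below illustrates (in pure NumPy) how a scene graph over task-relevant objects can be encoded with one round of message passing into a fixed-size embedding that could condition a downstream policy (e.g. a diffusion policy). All names, dimensions, and the mean-aggregation rule are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch, NOT the authors' implementation: task-relevant objects
# become graph nodes, pairwise relations become directed edges, and a single
# round of mean-aggregation message passing yields a graph embedding.
import numpy as np

def encode_scene_graph(node_feats, edges, w_self, w_msg):
    """One round of mean-aggregation message passing.

    node_feats: (N, D) array of per-object features (e.g. pose, class embedding)
    edges:      list of (src, dst) index pairs for task-relevant relations
    w_self, w_msg: (D, H) weight matrices (would be learned in practice)
    Returns an (H,) graph embedding obtained by mean-pooling node embeddings.
    """
    n, d = node_feats.shape
    msgs = np.zeros((n, d))
    counts = np.zeros(n)
    for src, dst in edges:          # aggregate incoming neighbor features
        msgs[dst] += node_feats[src]
        counts[dst] += 1
    counts = np.maximum(counts, 1)  # avoid divide-by-zero for isolated nodes
    msgs /= counts[:, None]
    # ReLU(x W_self + neighbor_mean W_msg), then mean-pool over nodes
    h = np.maximum(node_feats @ w_self + msgs @ w_msg, 0.0)
    return h.mean(axis=0)

# Toy usage: two hypothetical task-relevant objects with 4-d features,
# connected in both directions ("cup on plate", "plate supports cup").
rng = np.random.default_rng(0)
feats = rng.normal(size=(2, 4))
w_self = rng.normal(size=(4, 8))
w_msg = rng.normal(size=(4, 8))
z = encode_scene_graph(feats, [(0, 1), (1, 0)], w_self, w_msg)
print(z.shape)  # (8,)
```

Because the embedding depends only on the listed objects and relations, distractor objects left out of the graph cannot perturb it, which is the intuition behind the robustness claim above.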