GravMAD: Grounded Spatial Value Maps Guided Action Diffusion for Generalized 3D Manipulation

πŸ“… 2024-09-30
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the challenges of zero-shot task generalization and insufficient 3D environmental understanding in language-instructed general-purpose robotic manipulation, this paper proposes an end-to-end action-generation framework guided by sub-goal decomposition and spatial value mapping. Methodologically, it integrates (1) a novel sub-goal key-pose discovery mechanism that leverages pretrained multimodal models (CLIP/ViT); (2) GravMaps, semantically aligned 3D spatial value maps that provide more flexible guidance than fixed 3D target positions; and (3) a hybrid architecture combining imitation learning with conditional diffusion modeling, enabling sub-goal-driven hierarchical spatial reasoning and temporally coherent action-sequence generation. On RLBench, the method improves success rates by 28.63% on unseen tasks and by 13.36% on seen tasks. Real-robot experiments further validate its joint vision-language-action reasoning and its generalization to real-world scenarios.

πŸ“ Abstract
Robots' ability to follow language instructions and execute diverse 3D manipulation tasks is vital in robot learning. Traditional imitation learning-based methods perform well on seen tasks but struggle with novel, unseen ones due to variability. Recent approaches leverage large foundation models to assist in understanding novel tasks, thereby mitigating this issue. However, these methods lack a task-specific learning process, which is essential for an accurate understanding of 3D environments, often leading to execution failures. In this paper, we introduce GravMAD, a sub-goal-driven, language-conditioned action diffusion framework that combines the strengths of imitation learning and foundation models. Our approach breaks tasks into sub-goals based on language instructions, allowing auxiliary guidance during both training and inference. During training, we introduce Sub-goal Keypose Discovery to identify key sub-goals from demonstrations. Inference differs from training, as there are no demonstrations available, so we use pre-trained foundation models to bridge the gap and identify sub-goals for the current task. In both phases, GravMaps are generated from sub-goals, providing GravMAD with more flexible 3D spatial guidance compared to fixed 3D positions. Empirical evaluations on RLBench show that GravMAD significantly outperforms state-of-the-art methods, with a 28.63% improvement on novel tasks and a 13.36% gain on tasks encountered during training. Evaluations on real-world robotic tasks further show that GravMAD can reason about real-world tasks, associate them with relevant visual information, and generalize to novel tasks. These results demonstrate GravMAD's strong multi-task learning and generalization in 3D manipulation. Video demonstrations are available at: https://gravmad.github.io.
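To make the abstract's core idea concrete, the sketch below hand-crafts a Gaussian "value field" around a sub-goal position and uses its gradient to bias each denoising step of an action sampler. This is an illustration of value-map-guided diffusion in general, not the paper's method: GravMAD's GravMaps and its diffusion policy are learned, and every name here (`value_at`, `value_gradient`, `guided_denoise_step`) is a hypothetical stand-in.

```python
import numpy as np

# Illustrative sketch ONLY: the paper's GravMaps and action diffusion are
# learned components; here a hand-crafted Gaussian value field around a
# sub-goal stands in for a spatial value map, and its analytic gradient
# biases each denoising step. All names are hypothetical.

def value_at(pos, subgoal, sigma=1.0):
    """Toy spatial value: peaks at the sub-goal, decays with distance."""
    d2 = np.sum((pos - subgoal) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))

def value_gradient(pos, subgoal, sigma=1.0):
    """Analytic gradient of the toy value w.r.t. the 3D position."""
    return value_at(pos, subgoal, sigma) * (subgoal - pos) / sigma ** 2

def guided_denoise_step(action, noise_pred, subgoal, guide_w=0.1):
    """One toy reverse-diffusion step: subtract the predicted noise, then
    nudge the position part of the action uphill on the value map."""
    denoised = action - noise_pred
    denoised = denoised.copy()
    denoised[:3] += guide_w * value_gradient(denoised[:3], subgoal)
    return denoised

rng = np.random.default_rng(0)
subgoal = np.array([0.4, 0.1, 0.3])
action = rng.normal(size=6)  # [x, y, z, roll, pitch, yaw]
for _ in range(200):
    # A trained noise-prediction network would go here; small random
    # values are a stand-in so the loop runs end to end.
    noise_pred = rng.normal(scale=0.005, size=6)
    action = guided_denoise_step(action, noise_pred, subgoal)
```

The design point the sketch isolates: because the guidance is a field over 3D space rather than a single fixed target pose, the same machinery works for any sub-goal the upstream foundation model proposes.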
Problem

Research questions and friction points this paper is trying to address.

Enhance robots' 3D task execution via language instructions.
Overcome limitations of imitation learning in novel tasks.
Integrate sub-goal-driven action diffusion for spatial guidance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sub-goal-driven action diffusion
Language-conditioned framework
GravMaps for 3D spatial guidance
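For the sub-goal discovery item above, a common heuristic in RLBench-style pipelines marks a demonstration frame as a key pose when the gripper state flips or the arm comes to rest. The toy detector below implements only that generic heuristic as an illustration; the paper's Sub-goal Keypose Discovery is its own mechanism and is not reproduced here, and `find_keyposes` is a hypothetical name.

```python
import numpy as np

# Illustrative heuristic ONLY: flag a demonstration frame as a candidate
# key pose when the gripper open/close state flips, or when joint
# velocities first drop near zero (the arm comes to rest). The paper's
# Sub-goal Keypose Discovery is a distinct, more involved mechanism.

def find_keyposes(gripper_open, joint_vel, vel_eps=1e-3):
    """Return indices of candidate key poses in a demo trajectory.

    gripper_open: (T,) binary gripper state per timestep.
    joint_vel:    (T, J) joint velocities per timestep.
    """
    near_zero = np.max(np.abs(joint_vel), axis=1) < vel_eps
    keyposes = []
    for t in range(1, len(gripper_open)):
        gripper_changed = gripper_open[t] != gripper_open[t - 1]
        just_stopped = near_zero[t] and not near_zero[t - 1]
        if gripper_changed or just_stopped:
            keyposes.append(t)
    return keyposes
```

Detecting the *transition* into rest (rather than every near-zero frame) keeps a long pause from producing a run of duplicate key poses.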
πŸ”Ž Similar Papers
No similar papers found.
Yangtao Chen
Master Student, Nanjing University, China
Embodied AI, Robotics

Zixuan Chen
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China

Junhui Yin
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China

Jing Huo
Nanjing University
Machine Learning, Computer Vision

Pinzhuo Tian
School of Computer Engineering and Science, Shanghai University, Shanghai, China

Jieqi Shi
School of Intelligence Science and Technology, Nanjing University (Suzhou Campus), Nanjing, China

Yang Gao
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; School of Intelligence Science and Technology, Nanjing University (Suzhou Campus), Nanjing, China