🤖 AI Summary
This work addresses the limited generalization of existing end-to-end robotic manipulation methods to unseen objects or tasks, which often stems from insufficient geometric and spatial understanding. To overcome this, the authors propose a retrieval-augmented generation framework that integrates 3D Gaussian Splatting with a multimodal large language model. The approach leverages hierarchical multimodal retrieval to align the 3D poses of reference and target objects and subsequently optimizes manipulation parameters. Notably, this is the first integration of 3D Gaussian Splatting into a retrieval-augmented architecture, enabling seamless coupling of semantic reasoning and geometric execution. Evaluated on a test set comprising 30 categories of household objects, the method achieves a 7.76% improvement in zero-shot success rate over the strongest baseline and even surpasses the current best supervised approach by 6.54%, while enhancing interpretability.
📝 Abstract
Existing end-to-end approaches to robotic manipulation often lack generalization to unseen objects or tasks due to limited data and poor interpretability. While recent Multimodal Large Language Models (MLLMs) demonstrate strong commonsense reasoning, they struggle with the geometric and spatial understanding required for pose prediction. In this paper, we propose RobMRAG, a 3D Gaussian Splatting-Enhanced Multimodal Retrieval-Augmented Generation (MRAG) framework for zero-shot robotic manipulation. Specifically, we construct a multi-source manipulation knowledge base containing object contact frames, task completion frames, and pose parameters. During inference, a Hierarchical Multimodal Retrieval module first employs a three-priority hybrid retrieval strategy to find task-relevant object prototypes, then selects the geometrically closest reference example based on pixel-level similarity and Instance Matching Distance (IMD). We further introduce a 3D-Aware Pose Refinement module based on 3D Gaussian Splatting into the MRAG framework, which aligns the pose of the reference object to the target object in 3D space. The aligned results are reprojected onto the image plane and used as input to the MLLM to enhance the generation of the final pose parameters. Extensive experiments show that on a test set containing 30 categories of household objects, our method improves the success rate by 7.76% over the best-performing zero-shot baseline under the same setting, and by 6.54% over the state-of-the-art supervised baseline. Our results validate that RobMRAG effectively bridges the gap between high-level semantic reasoning and low-level geometric execution, enabling robotic systems that generalize to unseen objects while remaining inherently interpretable.
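The inference pipeline described above (tiered retrieval of a reference example, 3D alignment of its pose to the target, and reprojection onto the image plane) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy knowledge base, the feature-distance stand-in for pixel-level similarity and IMD, and the single rigid transform used for alignment are all assumptions.

```python
import numpy as np

# Toy knowledge base (illustrative): each entry pairs an object prototype
# with a stored manipulation pose as a 4x4 homogeneous transform.
KNOWLEDGE_BASE = [
    {"category": "mug", "task": "pick", "feature": np.array([1.0, 0.0]),
     "pose": np.eye(4)},
    {"category": "mug", "task": "pour", "feature": np.array([0.9, 0.3]),
     "pose": np.eye(4)},
    {"category": "bottle", "task": "pick", "feature": np.array([0.0, 1.0]),
     "pose": np.eye(4)},
]

def hybrid_retrieve(category, task, query_feature):
    """Three-priority hybrid retrieval (assumed tiers for illustration):
    exact category+task match, then category-only, then any entry."""
    for predicate in (
        lambda e: e["category"] == category and e["task"] == task,
        lambda e: e["category"] == category,
        lambda e: True,
    ):
        candidates = [e for e in KNOWLEDGE_BASE if predicate(e)]
        if candidates:
            # Within the first non-empty tier, pick the geometrically
            # closest entry (feature distance stands in for pixel-level
            # similarity + Instance Matching Distance).
            return min(candidates,
                       key=lambda e: float(np.linalg.norm(e["feature"] - query_feature)))
    return None

def align_and_reproject(ref_pose, rel_transform, K):
    """Align the reference pose to the target via a relative 3D transform,
    then project the aligned gripper origin through intrinsics K."""
    aligned = rel_transform @ ref_pose       # pose alignment in 3D space
    point_cam = aligned[:3, 3]               # gripper origin, camera frame
    uv = K @ point_cam                       # pinhole projection
    return aligned, uv[:2] / uv[2]           # pixel coordinates

# Usage: retrieve a reference for a mug, align it, and reproject the result
# (in the full framework this reprojection conditions the MLLM's output).
entry = hybrid_retrieve("mug", "pick", np.array([0.95, 0.1]))
rel = np.eye(4); rel[:3, 3] = [0.1, 0.0, 0.5]    # assumed target offset
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
aligned, pixel = align_and_reproject(entry["pose"], rel, K)
```

In the actual framework the retrieval operates over multimodal embeddings and the alignment is estimated from 3D Gaussian Splatting reconstructions; the fixed transform here only illustrates the data flow from retrieved pose to reprojected image-plane input.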