π€ AI Summary
To address 6D pose estimation for unseen objects in robotic scenarios, this paper proposes a language-guided few-shot 3D mesh reconstruction method that requires neither pre-trained 3D models nor large-scale annotated datasets. Given only 3β5 multi-view input images and a natural language query, the method first employs GroundingDINO and SAM for semantic-aware segmentation, then leverages VGGSfM to reconstruct a sparse point cloud, and finally generates a Gaussian splatting representation via SuGARβjointly optimizing geometry and texture to produce high-fidelity meshes. To our knowledge, this is the first approach achieving cross-modal (text + vision) few-shot 3D reconstruction under zero-training conditions. It significantly outperforms existing methods in geometric accuracy and texture fidelity. Ablation studies systematically evaluate the impact of view distribution, image count, and overlap on reconstruction quality and efficiency, demonstrating strong generalization and practical deployment potential.
π Abstract
6D object pose estimation for unseen objects is essential in robotics but traditionally relies on trained models that require large datasets, high computational costs, and struggle to generalize. Zero-shot approaches eliminate the need for training but depend on pre-existing 3D object models, which are often impractical to obtain. To address this, we propose a language-guided few-shot 3D reconstruction method, reconstructing a 3D mesh from few input images. In the proposed pipeline, receives a set of input images and a language query. A combination of GroundingDINO and Segment Anything Model outputs segmented masks from which a sparse point cloud is reconstructed with VGGSfM. Subsequently, the mesh is reconstructed with the Gaussian Splatting method SuGAR. In a final cleaning step, artifacts are removed, resulting in the final 3D mesh of the queried object. We evaluate the method in terms of accuracy and quality of the geometry and texture. Furthermore, we study the impact of imaging conditions such as viewing angle, number of input images, and image overlap on 3D object reconstruction quality, efficiency, and computational scalability.