Multi-Modal 3D Mesh Reconstruction from Images and Text

πŸ“… 2025-03-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address 6D pose estimation for unseen objects in robotic scenarios, this paper proposes a language-guided few-shot 3D mesh reconstruction method that requires neither pre-trained 3D models nor large-scale annotated datasets. Given only 3–5 multi-view input images and a natural language query, the method first employs GroundingDINO and SAM for semantic-aware segmentation, then leverages VGGSfM to reconstruct a sparse point cloud, and finally generates a Gaussian splatting representation via SuGARβ€”jointly optimizing geometry and texture to produce high-fidelity meshes. To our knowledge, this is the first approach achieving cross-modal (text + vision) few-shot 3D reconstruction under zero-training conditions. It significantly outperforms existing methods in geometric accuracy and texture fidelity. Ablation studies systematically evaluate the impact of view distribution, image count, and overlap on reconstruction quality and efficiency, demonstrating strong generalization and practical deployment potential.

Technology Category

Application Category

πŸ“ Abstract
6D object pose estimation for unseen objects is essential in robotics but traditionally relies on trained models that require large datasets, high computational costs, and struggle to generalize. Zero-shot approaches eliminate the need for training but depend on pre-existing 3D object models, which are often impractical to obtain. To address this, we propose a language-guided few-shot 3D reconstruction method, reconstructing a 3D mesh from few input images. In the proposed pipeline, receives a set of input images and a language query. A combination of GroundingDINO and Segment Anything Model outputs segmented masks from which a sparse point cloud is reconstructed with VGGSfM. Subsequently, the mesh is reconstructed with the Gaussian Splatting method SuGAR. In a final cleaning step, artifacts are removed, resulting in the final 3D mesh of the queried object. We evaluate the method in terms of accuracy and quality of the geometry and texture. Furthermore, we study the impact of imaging conditions such as viewing angle, number of input images, and image overlap on 3D object reconstruction quality, efficiency, and computational scalability.
Problem

Research questions and friction points this paper is trying to address.

Zero-shot 6D pose estimation for unseen objects
Language-guided few-shot 3D mesh reconstruction
Impact of imaging conditions on reconstruction quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-guided few-shot 3D reconstruction
Combines GroundingDINO and Segment Anything Model
Uses Gaussian Splatting for mesh reconstruction
πŸ”Ž Similar Papers
No similar papers found.
M
Melvin Reka
Automation and Control Institute, TU Wien Vienna, Austria
T
Tessa Pulli
Automation and Control Institute, TU Wien Vienna, Austria
Markus Vincze
Markus Vincze
TU Wien
Robot visionhome roboticsmaking robots see