🤖 AI Summary
This work introduces OpenSplat3D, a framework for open-vocabulary 3D instance segmentation that requires no manual labeling, enabling localization and segmentation of arbitrary object categories from text queries. Methodologically, it builds on 3D Gaussian Splatting (3DGS): feature splatting attaches semantic and instance features to individual Gaussians; instance masks from the Segment Anything Model (SAM), combined with a contrastive loss, guide the instance features toward accurate instance-level segmentation; and language embeddings from a vision-language model enable flexible, text-driven instance identification. Experiments on LERF-mask, LERF-OVS, and the full ScanNet++ validation set demonstrate the effectiveness of the approach, establishing an annotation-free paradigm for open-world 3D scene understanding.
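The SAM-guided contrastive supervision can be illustrated with a minimal NumPy sketch: rendered per-pixel instance features are pulled toward the mean feature of the SAM mask they fall in, while the mask means are pushed apart. The function name, the margin values, and the mean-embedding grouping are illustrative assumptions, not the paper's exact loss formulation.

```python
import numpy as np

def instance_contrastive_loss(feats, mask_ids, pull_margin=0.1, push_margin=1.0):
    """Hypothetical pull/push contrastive loss over rendered instance features.

    feats:    (N, D) per-pixel feature vectors (flattened render).
    mask_ids: (N,) SAM mask id per pixel.
    """
    ids = np.unique(mask_ids)
    # Mean embedding per SAM mask.
    means = {i: feats[mask_ids == i].mean(axis=0) for i in ids}

    # Pull term: features move toward their own mask's mean embedding.
    pull = 0.0
    for i in ids:
        d = np.linalg.norm(feats[mask_ids == i] - means[i], axis=1)
        pull += np.mean(np.clip(d - pull_margin, 0.0, None) ** 2)
    pull /= len(ids)

    # Push term: mean embeddings of different masks move apart.
    push, pairs = 0.0, 0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d = np.linalg.norm(means[ids[a]] - means[ids[b]])
            push += np.clip(push_margin - d, 0.0, None) ** 2
            pairs += 1
    if pairs:
        push /= pairs
    return pull + push
```

With well-separated, internally consistent features the loss vanishes; features that collapse across masks incur a push penalty, which is the behavior the contrastive guidance is meant to enforce.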
📝 Abstract
3D Gaussian Splatting (3DGS) has emerged as a powerful representation for neural scene reconstruction, offering high-quality novel view synthesis while maintaining computational efficiency. In this paper, we extend the capabilities of 3DGS beyond pure scene representation by introducing an approach for open-vocabulary 3D instance segmentation without requiring manual labeling, termed OpenSplat3D. Our method leverages feature-splatting techniques to associate semantic information with individual Gaussians, enabling fine-grained scene understanding. We incorporate instance masks from the Segment Anything Model (SAM) as guidance for the instance features via a contrastive loss formulation, achieving accurate instance-level segmentation. Furthermore, we utilize language embeddings of a vision-language model, allowing for flexible, text-driven instance identification. This combination enables our system to identify and segment arbitrary objects in 3D scenes based on natural language descriptions. We show results on LERF-mask and LERF-OVS as well as the full ScanNet++ validation set, demonstrating the effectiveness of our approach.
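Text-driven instance identification of the kind described above typically reduces to ranking instances by cosine similarity between their aggregated language embeddings and the embedding of a text query. The following NumPy sketch assumes the embeddings have already been produced (e.g., by a CLIP-style encoder); the function name and interface are illustrative, not the paper's API.

```python
import numpy as np

def query_instances(instance_embeds, text_embed):
    """Rank instances by cosine similarity to a text query embedding.

    instance_embeds: (K, D) one aggregated language embedding per instance.
    text_embed:      (D,) embedding of the natural-language query.
    Returns (ranking, similarities): instance indices sorted best-first,
    and the raw cosine similarity per instance.
    """
    # L2-normalize so the dot product equals cosine similarity.
    inst = instance_embeds / np.linalg.norm(instance_embeds, axis=1, keepdims=True)
    txt = text_embed / np.linalg.norm(text_embed)
    sims = inst @ txt
    return np.argsort(-sims), sims
```

In an open-vocabulary setting the top-ranked instance (or all instances above a similarity threshold) would be returned as the segmentation result for the query.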