🤖 AI Summary
This work addresses the multi-view inconsistency problem arising when distilling 2D CLIP features into 3D Gaussian splatting for open-vocabulary 3D scene understanding. Methodologically, we propose a granularity-aware feature distillation framework that jointly employs SAM-guided prompt point density adaptation and unsupervised granularity factor learning to enforce stable, consistent semantic supervision over 3D Gaussian splats. Crucially, we co-optimize CLIP’s vision-language priors with 3D geometric modeling, enabling arbitrary-view open-vocabulary querying and pixel-accurate localization. Our approach achieves significant improvements over baselines in both visual grounding and semantic segmentation—particularly in cross-view consistency—while doubling inference speed. The resulting paradigm offers an efficient, robust solution for open-vocabulary 3D understanding.
📝 Abstract
3D open-vocabulary scene understanding, which aims to accurately perceive complex semantic properties of objects in space, has gained significant attention in recent years. In this paper, we propose GAGS, a framework that distills 2D CLIP features into 3D Gaussian splatting, enabling open-vocabulary queries on renderings from arbitrary viewpoints. The main challenge of distilling 2D features into a 3D field lies in the multiview inconsistency of the extracted 2D features, which provides unstable supervision for the 3D feature field. GAGS addresses this challenge with two novel strategies. First, GAGS associates the prompt point density of SAM with the camera distance, which significantly improves the multiview consistency of segmentation results. Second, GAGS decodes a granularity factor to guide the distillation process; this factor can be learned in an unsupervised manner to select only the multiview-consistent 2D features for distillation. Experimental results on two datasets demonstrate significant improvements in the performance and stability of GAGS on visual grounding and semantic segmentation, with an inference speed 2$\times$ faster than baseline methods. The code and additional results are available at https://pz0826.github.io/GAGS-Webpage/.
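The two strategies above can be illustrated with a minimal sketch. The exact formulas are not given in the abstract, so the details below are assumptions for illustration: `prompt_point_count` assumes the SAM prompt density scales with the inverse square of camera distance (so each prompt covers a similar scene area across views), and `distill_loss` assumes the granularity factor acts as softmax weights over CLIP features extracted at several segmentation granularities, producing a single distillation target per pixel.

```python
import numpy as np

def prompt_point_count(base_points: int, depth: float, ref_depth: float) -> int:
    # Hypothetical density rule: closer views receive more SAM prompt
    # points, scaled by the inverse-square of camera distance, so the
    # segmentation granularity stays consistent across viewpoints.
    scale = (ref_depth / depth) ** 2
    return max(1, int(round(base_points * scale)))

def distill_loss(rendered_feat: np.ndarray,
                 clip_feats: np.ndarray,
                 granularity_logits: np.ndarray) -> float:
    # rendered_feat: (D,) feature rendered from the 3D Gaussian field.
    # clip_feats: (K, D) CLIP features at K segmentation granularities.
    # granularity_logits: (K,) learned, unsupervised weights; the softmax
    # concentrates on the most multiview-consistent granularity.
    w = np.exp(granularity_logits - granularity_logits.max())
    w /= w.sum()
    target = w @ clip_feats  # (D,) granularity-weighted target feature
    cos = rendered_feat @ target / (
        np.linalg.norm(rendered_feat) * np.linalg.norm(target) + 1e-8)
    return float(1.0 - cos)  # cosine distillation loss in [0, 2]
```

In this sketch, minimizing the loss over both the Gaussian features and the granularity logits lets the logits down-weight granularities whose CLIP features disagree across views, which is the "select only the consistent features" behavior the abstract describes.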