🤖 AI Summary
Existing 3D Gaussian splatting methods struggle to model hierarchical semantic structures and part-whole relationships in complex scenes, and their reliance on 2D priors often leads to inconsistent cross-view labels, limiting segmentation performance. To address these limitations, this work proposes a tree-guided cascaded contrastive learning framework that explicitly constructs a multi-level object hierarchy to capture semantic structure. The framework incorporates a cascaded contrastive learning mechanism to reduce supervisory redundancy and integrates a consistency-aware segmentation refinement module with a graph neural network-based denoising component to enhance the stability and robustness of cross-view segmentation. Experiments demonstrate that the proposed method significantly outperforms existing approaches on open-vocabulary 3D object selection and point cloud understanding tasks, achieving superior segmentation consistency, quality, and structural awareness.
📝 Abstract
3D Gaussian Splatting (3DGS) has emerged as a real-time, differentiable representation for neural scene understanding. However, existing 3DGS-based methods struggle to represent hierarchical 3D semantic structures and capture whole-part relationships in complex scenes. Moreover, dense pairwise comparisons and inconsistent hierarchical labels from 2D priors hinder feature learning, resulting in suboptimal segmentation. To address these limitations, we introduce TreeGaussian, a tree-guided cascaded contrastive learning framework that explicitly models hierarchical semantic relationships and reduces redundancy in contrastive supervision. By constructing a multi-level object tree, TreeGaussian enables structured learning across object-part hierarchies. In addition, we propose a two-stage cascaded contrastive learning strategy that progressively refines feature representations from global to local, mitigating saturation and stabilizing training. A Consistent Segmentation Detection (CSD) mechanism and a graph-based denoising module are further introduced to align segmentation modes across views while suppressing unstable Gaussian points, enhancing segmentation consistency and quality. Extensive experiments, including open-vocabulary 3D object selection, 3D point cloud understanding, and ablation studies, demonstrate the effectiveness and robustness of our approach.
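To make the "global to local" cascade concrete, below is a minimal, generic sketch of how a two-level (object → part) supervised contrastive objective could look. This is not the paper's actual formulation: the function names, the supervised InfoNCE form, the `temperature` and `w_fine` parameters, and the per-object fine-stage loop are all illustrative assumptions; TreeGaussian's real losses, tree construction, CSD mechanism, and graph-based denoising are not reproduced here.

```python
import numpy as np

def info_nce(features, labels, temperature=0.1):
    """Generic supervised InfoNCE (an assumption, not the paper's loss):
    each point is attracted to points sharing its label."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    np.fill_diagonal(sim, -np.inf)                       # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    positives = labels[:, None] == labels[None, :]
    np.fill_diagonal(positives, False)
    losses = []
    for i in range(len(labels)):
        pos = positives[i]
        if pos.any():                                    # skip label singletons
            losses.append(-log_prob[i, pos].mean())
    return float(np.mean(losses))

def cascaded_loss(features, object_ids, part_ids, w_fine=0.5):
    """Two-stage cascade sketch: stage 1 contrasts at the object (global)
    level; stage 2 contrasts parts only *within* each object subtree,
    so part-level supervision never compares across unrelated objects."""
    coarse = info_nce(features, object_ids)
    fine_terms = []
    for obj in np.unique(object_ids):
        mask = object_ids == obj
        if mask.sum() > 2 and len(np.unique(part_ids[mask])) > 1:
            fine_terms.append(info_nce(features[mask], part_ids[mask]))
    fine = float(np.mean(fine_terms)) if fine_terms else 0.0
    return coarse + w_fine * fine
```

Restricting the fine stage to each object's own points is one way to realize the reduced supervisory redundancy the abstract describes: the number of pairwise comparisons shrinks from all-pairs to within-subtree pairs.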