🤖 AI Summary
Existing Referring 3D Gaussian Splatting (R3DGS) methods rely on 2D rendering-based pseudo-supervision and single-view feature learning, leading to cross-view semantic inconsistency. To address this, we propose Camera-aware Referring Fields (CaRF), a novel framework that directly models language-geometry relationships within the 3D Gaussian field. CaRF explicitly encodes camera geometry to capture viewpoint variations, introduces paired-view supervision and multi-view logits alignment to mitigate overfitting to 2D pseudo-labels, and integrates differentiable 3D Gaussian rendering with cross-modal language-geometry alignment. Evaluated on Ref-LERF, LERF-OVS, and 3D-OVS benchmarks, CaRF achieves absolute mIoU improvements of 16.8%, 4.3%, and 2.0%, respectively. These gains demonstrate significantly enhanced cross-view consistency and robustness in 3D scene understanding.
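The camera-aware encoding described above can be sketched minimally: inject each Gaussian's viewing geometry into its feature before scoring it against the text embedding. This is a hypothetical illustration, not the paper's actual architecture; the function name `gfce_features`, the direction-based encoding, the concatenation fusion, and the random (untrained) projection are all assumptions.

```python
import numpy as np

def gfce_features(gauss_feats, gauss_xyz, cam_pos, text_embed):
    """Hypothetical sketch of Gaussian Field Camera Encoding (GFCE):
    inject per-Gaussian camera geometry (here, the viewing direction)
    into the Gaussian-text interaction. The fusion scheme and projection
    are illustrative assumptions, not the paper's exact design."""
    # Unit viewing direction from the camera center to each Gaussian.
    dirs = gauss_xyz - cam_pos
    dirs = dirs / (np.linalg.norm(dirs, axis=1, keepdims=True) + 1e-8)
    # Fuse camera geometry with Gaussian features (simple concatenation here).
    cam_aware = np.concatenate([gauss_feats, dirs], axis=1)
    # Project to the text-embedding dimension with a fixed random linear map
    # (a stand-in for a learned projection).
    rng = np.random.default_rng(0)
    W = rng.standard_normal((cam_aware.shape[1], text_embed.shape[0]))
    # Per-Gaussian referring logits: similarity with the text embedding.
    return cam_aware @ W @ text_embed
```

Because the viewing direction changes with the camera, the same Gaussian can produce different logits under different views, which is exactly the view-dependent variation the encoding is meant to expose.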
📝 Abstract
Referring 3D Gaussian Splatting Segmentation (R3DGS) aims to interpret free-form language expressions and localize the corresponding 3D regions in Gaussian fields. While recent advances have introduced cross-modal alignment between language and 3D geometry, existing pipelines still struggle with cross-view consistency due to their reliance on 2D-rendered pseudo-supervision and view-specific feature learning. In this work, we present Camera-aware Referring Field (CaRF), a fully differentiable framework that operates directly in the 3D Gaussian space and achieves multi-view consistency. Specifically, CaRF introduces Gaussian Field Camera Encoding (GFCE), which incorporates camera geometry into Gaussian-text interactions to explicitly model view-dependent variations and enhance geometric reasoning. Building on this, In-Training Paired-View Supervision (ITPVS) is proposed to align per-Gaussian logits across calibrated views during training, effectively mitigating single-view overfitting and exposing inter-view discrepancies for optimization. Extensive experiments on three representative benchmarks demonstrate that CaRF achieves average improvements of 16.8%, 4.3%, and 2.0% in mIoU over state-of-the-art methods on the Ref-LERF, LERF-OVS, and 3D-OVS datasets, respectively. Moreover, this work promotes more reliable and view-consistent 3D scene understanding, with potential benefits for embodied AI, AR/VR interaction, and autonomous perception.
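The paired-view supervision idea can be sketched as a simple consistency loss between the per-Gaussian logits predicted under two calibrated views of the same scene. The loss form below (symmetric mean-squared discrepancy of per-Gaussian probabilities) is an assumption for illustration only; the paper's exact ITPVS formulation may differ.

```python
import numpy as np

def itpvs_alignment_loss(logits_a, logits_b):
    """Hypothetical sketch of In-Training Paired-View Supervision (ITPVS):
    penalize disagreement between per-Gaussian referring logits computed
    under two calibrated views. The squared-error form is an assumption,
    not the paper's exact loss."""
    # Convert per-Gaussian logits to referring probabilities.
    p_a = 1.0 / (1.0 + np.exp(-logits_a))
    p_b = 1.0 / (1.0 + np.exp(-logits_b))
    # Symmetric mean-squared discrepancy exposes inter-view disagreement;
    # minimizing it pushes both views toward a shared 3D prediction.
    return float(np.mean((p_a - p_b) ** 2))
```

When both views already agree the loss is zero, so the term only activates on the inter-view discrepancies that single-view pseudo-supervision would otherwise leave unpenalized.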