Joint Semantic and Rendering Enhancements in 3D Gaussian Modeling with Anisotropic Local Encoding

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D Gaussian semantic modeling approaches typically decouple semantics and rendering, relying solely on 2D supervision while neglecting 3D geometric structure and exhibiting limited adaptability in textureless regions. This work proposes a unified framework that jointly optimizes semantics and rendering within a 3D Gaussian representation. It introduces an anisotropic Chebyshev descriptor based on the Laplace-Beltrami operator to capture fine-grained 3D shape details, enabling adaptive refinement of Gaussian parameters and spherical harmonics through synergistic semantic and geometric signals. Furthermore, a cross-scene knowledge transfer module is designed to accelerate convergence and enhance generalization. Experiments demonstrate that the proposed method consistently improves both semantic segmentation accuracy and photorealistic rendering quality across multiple datasets, all while maintaining real-time frame rates.

📝 Abstract
Recent works propose extending 3DGS with semantic feature vectors for simultaneous semantic segmentation and image rendering. However, these methods often treat the semantic and rendering branches separately, relying solely on 2D supervision while ignoring the 3D Gaussian geometry. Moreover, current adaptive strategies adjust the Gaussian set based solely on rendering gradients, which can be insufficient in subtle or textureless regions. In this work, we propose a joint enhancement framework for 3D semantic Gaussian modeling that synergizes the semantic and rendering branches. First, unlike conventional point cloud shape encoding, we introduce an anisotropic 3D Gaussian Chebyshev descriptor based on the Laplace-Beltrami operator to capture fine-grained 3D shape details, thereby distinguishing objects with similar appearances and reducing reliance on potentially noisy 2D guidance. In addition, rather than relying solely on rendering gradients, we adaptively adjust Gaussian allocation and spherical harmonics using local semantic and shape signals, improving rendering efficiency through selective resource allocation. Finally, we employ a cross-scene knowledge transfer module that continuously updates learned shape patterns, enabling faster convergence and robust representations without relearning shape information from scratch for each new scene. Experiments on multiple datasets demonstrate improvements in segmentation accuracy and rendering quality while maintaining high rendering frame rates.
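The paper's exact anisotropic descriptor is not specified in this summary, but the general idea of a Chebyshev shape descriptor built on a discrete Laplace-Beltrami operator can be sketched. The snippet below is a simplified, isotropic stand-in: it approximates the Laplace-Beltrami operator with a symmetric normalized graph Laplacian over a k-NN graph of point (or Gaussian-center) positions, then stacks Chebyshev polynomial terms T_k(L̃)x of a per-point signal as a multi-scale descriptor. All function names, the k-NN construction, and the Gaussian edge weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph_laplacian(points, k=8):
    """Symmetric normalized graph Laplacian over a k-NN graph,
    a common discrete stand-in for the Laplace-Beltrami operator.
    (Illustrative choice; the paper's anisotropic operator differs.)"""
    n = len(points)
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)  # first neighbor is the point itself
    W = np.zeros((n, n))
    sigma = np.mean(dists[:, 1:]) + 1e-12     # global bandwidth (assumption)
    for i in range(n):
        for j, d in zip(idx[i, 1:], dists[i, 1:]):
            w = np.exp(-(d / sigma) ** 2)     # Gaussian edge weight
            W[i, j] = max(W[i, j], w)
            W[j, i] = W[i, j]                 # symmetrize
    deg = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    # L = I - D^{-1/2} W D^{-1/2}; eigenvalues lie in [0, 2]
    return np.eye(n) - (d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :])

def chebyshev_descriptor(points, signal, order=4, k=8):
    """Stack Chebyshev terms T_0(L~)x ... T_order(L~)x as a
    multi-scale per-point shape descriptor."""
    L = knn_graph_laplacian(points, k)
    n = L.shape[0]
    L_tilde = L - np.eye(n)                   # rescale spectrum [0, 2] -> [-1, 1]
    T_prev, T_curr = signal, L_tilde @ signal # T_0 = x, T_1 = L~ x
    feats = [T_prev, T_curr]
    for _ in range(2, order + 1):
        T_next = 2.0 * (L_tilde @ T_curr) - T_prev  # Chebyshev recurrence
        feats.append(T_next)
        T_prev, T_curr = T_curr, T_next
    return np.stack(feats, axis=-1)           # shape: (n_points, order + 1)
```

Higher-order terms aggregate signal over progressively larger graph neighborhoods, which is what lets such a descriptor separate objects that look alike in 2D but differ in local 3D shape.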
Problem

Research questions and friction points this paper is trying to address.

3D Gaussian Splatting
semantic segmentation
rendering
anisotropic encoding
3D geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

anisotropic Gaussian encoding
joint semantic-rendering optimization
Laplace-Beltrami descriptor
adaptive resource allocation
cross-scene knowledge transfer