SGS-Intrinsic: Semantic-Invariant Gaussian Splatting for Sparse-View Indoor Inverse Rendering

📅 2026-03-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS)-based methods struggle to achieve high-quality indoor inverse rendering under sparse-view settings, particularly due to limitations in geometric reconstruction and the disentanglement of materials and lighting. This work proposes a semantics-preserving Gaussian splatting framework that leverages semantic and geometric priors to construct a dense, geometrically consistent Gaussian semantic field. By integrating a hybrid illumination model, illumination-invariant material constraints, and a shadow removal mechanism, the method effectively disentangles material and lighting components. Extensive experiments demonstrate that our approach significantly outperforms existing 3DGS techniques across multiple indoor benchmark datasets, achieving superior geometric fidelity and inverse rendering quality even with sparse input views.
📝 Abstract
We present SGS-Intrinsic, an indoor inverse rendering framework that performs well on sparse-view images. Unlike existing 3D Gaussian Splatting (3DGS)-based methods that focus on object-centric reconstruction and fail under sparse-view settings, our method achieves high-quality geometry reconstruction and accurate disentanglement of material and illumination. The core idea is to construct a dense and geometry-consistent Gaussian semantic field guided by semantic and geometric priors, providing a reliable foundation for subsequent inverse rendering. Building upon this, we perform material-illumination disentanglement by combining a hybrid illumination model with material priors to effectively capture illumination-material interactions. To mitigate the impact of cast shadows and enhance the robustness of material recovery, we introduce an illumination-invariant material constraint together with a deshadowing model. Extensive experiments on benchmark datasets show that our method consistently improves both reconstruction fidelity and inverse rendering quality over existing 3DGS-based inverse rendering approaches. Our code is available at https://github.com/GrumpySloths/SGS_Intrinsic.github.io.
Problem

Research questions and friction points this paper is trying to address.

sparse-view
inverse rendering
material-illumination disentanglement
3D Gaussian Splatting
indoor scene reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian Splatting
Inverse Rendering
Sparse-View Reconstruction
Material-Illumination Disentanglement
Semantic-Invariant Representation
Jiahao Niu
Sun Yat-sen University, China
Rongjia Zheng
Sun Yat-sen University, China
Wenju Xu
Amazon, USA
Wei-Shi Zheng
Professor @ Sun Yat-sen University
Computer Vision · Pattern Recognition · Machine Learning
Qing Zhang
Sun Yat-sen University, China; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China