🤖 AI Summary
Existing inverse rendering methods based on Gaussian splatting struggle to accurately disentangle material properties from complex global illumination—particularly indirect lighting—because supervision is limited to observed viewpoints. This work proposes a radiometric consistency constraint that integrates physically based rendering into the Gaussian splatting framework. By minimizing the residual between each primitive's learned radiance and its physically rendered counterpart, the method provides self-correcting supervision for unobserved views. Building on a Gaussian surfel representation combined with 2D Gaussian ray tracing, the authors construct an efficient inverse rendering system that also supports rapid relighting via fine-tuning. The approach outperforms existing Gaussian-based methods across multiple benchmarks, adapting to new illumination in just a few minutes and rendering at under 10 milliseconds per frame, thus balancing accuracy and efficiency.
📝 Abstract
Inverse rendering with Gaussian Splatting has advanced rapidly, but accurately disentangling material properties from complex global illumination effects, particularly indirect illumination, remains a major challenge. Existing methods often query indirect radiance from Gaussian primitives pre-trained for novel-view synthesis. However, these pre-trained Gaussian primitives are supervised only on a limited set of training viewpoints, and thus lack supervision for modeling indirect radiance from unobserved views. To address this issue, we introduce radiometric consistency, a novel physically based constraint that provides supervision towards unobserved views by minimizing the residual between each Gaussian primitive's learned radiance and its physically based rendered counterpart. Minimizing this residual for unobserved views establishes a self-correcting feedback loop that combines supervision from physically based rendering and novel-view synthesis, enabling accurate modeling of inter-reflection. We then propose Radiometrically Consistent Gaussian Surfels (RadioGS), an inverse rendering framework built on this principle that efficiently integrates radiometric consistency using Gaussian surfels and 2D Gaussian ray tracing. We further propose a fine-tuning-based relighting strategy that adapts Gaussian surfel radiances to new illuminations within minutes, achieving low rendering cost (<10 ms). Extensive experiments on existing inverse rendering benchmarks show that RadioGS outperforms existing Gaussian-based methods in inverse rendering while retaining computational efficiency.
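To make the core constraint concrete, the residual the abstract describes can be sketched as a simple per-primitive loss: the mean squared difference between a primitive's learned radiance and its physically based rendered counterpart, averaged over sampled (possibly unobserved) view directions. This is a minimal illustration under assumed conventions; the function name, inputs, and averaging are hypothetical and not the paper's actual API.

```python
def radiometric_consistency_residual(learned_radiance, pbr_radiance):
    """Hedged sketch of the radiometric consistency loss.

    learned_radiance : per-direction radiance values predicted by a Gaussian
                       primitive (hypothetical flat list of scalars).
    pbr_radiance     : the physically based rendered counterpart for the same
                       sampled view directions.
    Returns the mean squared residual; driving this toward zero supplies the
    self-correcting supervision described in the abstract.
    """
    assert len(learned_radiance) == len(pbr_radiance), "direction counts must match"
    n = len(learned_radiance)
    return sum((a - b) ** 2 for a, b in zip(learned_radiance, pbr_radiance)) / n
```

In the full method this residual would be one term in the training objective alongside the usual novel-view photometric loss, so the two supervision signals correct each other.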