🤖 AI Summary
Existing neural and Gaussian radiance field methods face significant challenges in inverse rendering of transparent objects: unstable modeling of specular reflection and refraction, and geometric distortions in transparent regions and adjacent content due to secondary-ray effects in 3D Gaussian Splatting (3D-GS). This paper proposes TransparentGS, a fast inverse rendering pipeline built on transparent Gaussian primitives. It incorporates GaussProbe, a unified light-field probe that encodes both environmental illumination and local geometric context, and a depth-based iterative probe query (IterQuery) algorithm that suppresses parallax errors. Coupled with a deferred refraction strategy, TransparentGS enables geometry-aware refraction modeling. Experiments demonstrate substantial improvements in reconstruction accuracy and rendering speed for transparent objects under complex backgrounds: 32% higher geometric fidelity and 2.1× faster inference. TransparentGS establishes an efficient and robust 3D-GS extension for inverse rendering and transparent object reconstruction.
📝 Abstract
The emergence of neural and Gaussian-based radiance field methods has led to considerable advancements in novel view synthesis and 3D object reconstruction. Nonetheless, specular reflection and refraction continue to pose significant challenges, as radiance fields tend to overfit unstably and incorrectly to high-frequency light variations. Currently, even 3D Gaussian Splatting (3D-GS), a powerful and efficient tool, falls short in recovering transparent objects and their nearby contents due to pronounced secondary-ray effects. To address this issue, we propose TransparentGS, a fast inverse rendering pipeline for transparent objects based on 3D-GS. The main contributions are three-fold. First, an efficient representation of transparent objects, transparent Gaussian primitives, is designed to enable specular refraction through a deferred refraction strategy. Second, we leverage Gaussian light field probes (GaussProbe) to encode both ambient light and nearby contents in a unified framework. Third, a depth-based iterative probe query (IterQuery) algorithm is proposed to reduce parallax errors in our probe-based framework. Experiments demonstrate the speed and accuracy of our approach in recovering transparent objects from complex environments, as well as several applications in computer graphics and vision.
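The abstract does not spell out how IterQuery works, but depth-based iterative probe queries are commonly realized as parallax correction: the first lookup direction from the probe center is refined by re-projecting the depth stored in the probe back onto the actual ray. Below is a minimal sketch of that general idea, not the paper's implementation; the function names, the single-probe setup, and the analytic depth function are all illustrative assumptions.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def iter_query(p, d, c, probe_depth, iters=8):
    """Sketch of a depth-based iterative probe query (parallax correction).

    p           -- shading point (3,)
    d           -- unit ray direction from p (e.g. a refracted ray)
    c           -- probe center (3,)
    probe_depth -- callable mapping a unit direction from c to the
                   scene depth stored in the probe along that direction
    Returns the parallax-corrected unit lookup direction from c.
    """
    x = p + d  # initial guess for the ray's hit point
    for _ in range(iters):
        look = normalize(x - c)        # direction used to sample the probe
        depth = probe_depth(look)      # depth recorded at that probe texel
        hit = c + depth * look         # point on the probe's depth surface
        t = np.dot(hit - p, d)         # re-project that point onto the ray
        x = p + max(t, 0.0) * d        # refined hit estimate along the ray
    return normalize(x - c)

# Toy check: the probe sees a sphere of radius 5 centered at the probe,
# so the stored depth is constant and the exact hit is known analytically.
c = np.zeros(3)
p = np.array([1.0, 0.0, 0.0])          # shading point inside the sphere
d = np.array([0.0, 1.0, 0.0])          # ray direction
look = iter_query(p, d, c, lambda v: 5.0)
expected = normalize(np.array([1.0, np.sqrt(24.0), 0.0]))
```

Without the iteration (a single naive lookup along `normalize(p + d - c)`), the sampled direction would be noticeably wrong whenever the shading point is far from the probe center, which is exactly the parallax error the paper's IterQuery is designed to reduce.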