🤖 AI Summary
This work addresses the challenging problem of novel view synthesis beneath non-planar refractive water surfaces, where spatially varying optical distortions invalidate the straight-ray assumption of conventional methods, leading to severe artifacts. To overcome this limitation, we propose an end-to-end joint optimization framework that models the dynamic water surface via a neural height field and represents the underwater scene using a 3D Gaussian field. Crucially, we decouple the 3D Gaussian splatting from the explicit refractive surface and introduce, for the first time, a differentiable, refraction-aware Gaussian ray tracing formulation grounded in Snell's law, enabling efficient rendering along nonlinear light paths. Our method significantly outperforms existing approaches on both synthetic and real-world scenes with complex water surfaces, trains 15× faster, and enables real-time, high-fidelity, view-consistent novel view synthesis at 200 FPS.
📝 Abstract
Novel view synthesis (NVS) through non-planar refractive surfaces presents fundamental challenges due to severe, spatially varying optical distortions. While recent representations like NeRF and 3D Gaussian Splatting (3DGS) excel at NVS, their assumption of straight-line ray propagation fails under these conditions, leading to significant artifacts. To overcome this limitation, we introduce RefracGS, a framework that jointly reconstructs the refractive water surface and the scene beneath the interface. Our key insight is to explicitly decouple the refractive boundary from the target objects: the refractive surface is modeled via a neural height field, capturing wave geometry, while the underlying scene is represented as a 3D Gaussian field. We formulate a refraction-aware Gaussian ray tracing approach that accurately computes non-linear ray trajectories using Snell's law and efficiently renders the underlying Gaussian field while backpropagating the loss gradients to the parameterized refractive surface. Through end-to-end joint optimization of both representations, our method ensures high-fidelity NVS and view-consistent surface recovery. Experiments on both synthetic and real-world scenes with complex waves demonstrate that RefracGS outperforms prior refractive methods in visual quality, while achieving 15x faster training and real-time rendering at 200 FPS. The project page for RefracGS is available at https://yimgshao.github.io/refracgs/.
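The refraction-aware ray tracing described above bends each ray at its intersection with the water surface according to Snell's law. A minimal sketch of the vector form of that refraction step is below; this is an illustrative standalone function, not the authors' implementation, and the interface parameters (air-to-water indices of roughly 1.0 and 1.33) are assumed:

```python
import math

def refract(d, n, eta):
    """Refract a unit direction d at a surface with unit normal n
    (pointing toward the incident side), where eta = n_incident / n_transmitted.
    Returns the refracted unit direction, or None on total internal reflection.
    This is the standard vector form of Snell's law."""
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    k = eta * cos_i - cos_t
    return tuple(eta * d[i] + k * n[i] for i in range(3))

# Example: a 45-degree ray entering water from air (eta ~ 1.0 / 1.33).
eta = 1.0 / 1.33
s = 1.0 / math.sqrt(2.0)
d = (s, 0.0, -s)        # incident unit direction, heading downward
n = (0.0, 0.0, 1.0)     # water-surface normal, pointing up toward the camera
t = refract(d, n, eta)  # transmitted direction, bent toward the normal
```

The transmitted direction satisfies sin(theta_t) = eta * sin(theta_i), so the ray bends toward the surface normal as it enters the denser water; tracing Gaussians along these bent segments, rather than straight lines, is what the decoupled surface/scene representation makes differentiable.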