🤖 AI Summary
This work proposes a self-supervised 3D Gaussian Splatting-based reconstruction method to address multi-view inconsistency and floating artifacts caused by underwater optical degradation. By integrating trinocular view consistency constraints, epipolar depth priors derived from triangulation, and a depth-aware opacity modulation mechanism, the approach effectively decouples scene geometry from the effects of underwater scattering media. Evaluated on both real-world and simulated underwater scenes, the method significantly outperforms existing techniques, achieving high-quality geometric reconstruction and scattering removal while substantially suppressing floating artifacts.
📝 Abstract
We introduce OceanSplat, a novel 3D Gaussian Splatting-based approach for high-fidelity underwater scene reconstruction. To overcome multi-view inconsistencies caused by scattering media, we design a trinocular setup for each camera pose by rendering from horizontally and vertically translated virtual viewpoints, enforcing view consistency to facilitate spatial optimization of 3D Gaussians. Furthermore, we derive synthetic epipolar depth priors from the virtual viewpoints, which serve as self-supervised depth regularizers to compensate for the limited geometric cues in degraded underwater scenes. We also propose a depth-aware alpha adjustment that modulates the opacity of 3D Gaussians during early training based on their depth along the viewing direction, deterring the formation of medium-induced primitives. Our approach promotes the disentanglement of 3D Gaussians from the scattering medium through effective geometric constraints, enabling accurate representation of scene structure and significantly reducing floating artifacts. Experiments on real-world underwater and simulated scenes demonstrate that OceanSplat substantially outperforms existing methods for both scene reconstruction and restoration in scattering media.
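The epipolar depth priors described above rest on classic two-view triangulation: for a rectified pair of viewpoints separated by a known baseline, depth follows directly from disparity. The paper does not give its exact formulation, so the following is only a minimal sketch of that standard relation; the function name and parameters (`focal`, `baseline`, `disparity`) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def depth_from_disparity(focal, baseline, disparity, eps=1e-6):
    """Classic triangulation for a rectified stereo pair: depth = f * b / d.

    A hypothetical stand-in for deriving a depth prior from views rendered
    at horizontally (or vertically) translated virtual camera positions.
    `eps` guards against division by zero for (near-)zero disparities.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    return focal * baseline / np.maximum(disparity, eps)
```

For example, with a 500-pixel focal length and a 0.1 m baseline, a 10-pixel disparity triangulates to a 5 m depth; such per-pixel estimates could then act as the self-supervised depth regularizer the abstract describes.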
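The depth-aware alpha adjustment can be pictured as a warm-up-phase opacity ramp: Gaussians lying close to the camera along the viewing direction, where medium-induced floaters tend to accumulate, are down-weighted until training has established the scene geometry. The paper's exact modulation rule is not reproduced here; this is a hedged sketch under assumed parameters (`near_depth`, `ramp`, `warmup_steps` are all hypothetical names).

```python
import numpy as np

def depth_aware_alpha(opacity, depths, near_depth, ramp, step, warmup_steps):
    """Sketch of a depth-aware opacity modulation (not the paper's exact rule).

    During the first `warmup_steps` iterations, scale each Gaussian's opacity
    by a linear ramp of its depth along the viewing direction: zero at
    `near_depth`, full opacity at `near_depth + ramp`. This discourages
    near-camera primitives that would otherwise fit the scattering medium.
    """
    if step >= warmup_steps:
        return opacity  # no modulation once early training ends
    scale = np.clip((np.asarray(depths) - near_depth) / ramp, 0.0, 1.0)
    return opacity * scale
```

A Gaussian at depth 0.1 with `near_depth=0.5` is fully suppressed early on, while one at depth 5.0 keeps its opacity unchanged, matching the abstract's goal of deterring medium-induced primitives without penalizing true scene structure.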