🤖 AI Summary
Addressing the challenge of simultaneously achieving photorealistic water properties and geometric consistency in underwater novel view synthesis, this paper proposes a geometry-preserving water-style transfer method grounded in optical physics modeling. The approach explicitly preserves input depth maps and object geometry during cross-scene water property transfer by integrating closed-form underwater light propagation modeling with depth-constrained optimization. Unlike conventional data-driven paradigms, the authors claim to be the first to embed structural similarity preservation directly into the rendering pipeline, enabling geometry-aware data augmentation. Experiments demonstrate that synthesized images achieve depth consistency exceeding 94% and structural similarity (SSIM) of 0.90–0.95, and that the method supports high-fidelity, 3D-consistent underwater novel view synthesis.
📝 Abstract
We introduce AquaFuse, a physics-based method for synthesizing waterbody properties in underwater imagery. We formulate a closed-form solution for waterbody fusion that facilitates realistic data augmentation and geometrically consistent underwater scene rendering. AquaFuse leverages the physical characteristics of light propagation underwater to transfer the waterbody of one scene onto the object contents of another. Unlike data-driven style transfer, AquaFuse preserves the depth consistency and object geometry of the input scene. We validate this unique feature through comprehensive experiments over diverse underwater scenes. We find that the AquaFused images preserve over 94% depth consistency and 90–95% structural similarity of the input scenes. We also demonstrate that AquaFuse generates accurate 3D view synthesis, preserving object geometry throughout the waterbody fusion process. AquaFuse opens up a new research direction in data augmentation by geometry-preserving style transfer for underwater imaging and robot vision applications.
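The abstract does not spell out the fusion equation, but physics-based underwater rendering of this kind typically builds on the standard image-formation model in which an observed pixel is the object radiance attenuated over range plus depth-dependent backscatter (veiling light). The sketch below illustrates how such a model can re-render the object content of one scene through a different waterbody while leaving the depth map, and hence geometry, untouched. All function names, parameter values, and the exact formulation are illustrative assumptions, not AquaFuse's actual implementation.

```python
import numpy as np

def attenuate(J, depth, beta_d):
    """Direct signal: object radiance J attenuated along per-pixel range."""
    return J * np.exp(-beta_d[None, None, :] * depth[..., None])

def backscatter(depth, B_inf, beta_b):
    """Veiling light accumulated over range toward the camera."""
    return B_inf[None, None, :] * (1.0 - np.exp(-beta_b[None, None, :] * depth[..., None]))

def fuse_waterbody(J, depth, beta_d, beta_b, B_inf):
    """Render object content J through a target waterbody (hypothetical sketch).

    Standard underwater image-formation model:
        I = J * exp(-beta_d * z) + B_inf * (1 - exp(-beta_b * z))
    The depth map `depth` is read but never modified, so scene geometry
    is preserved by construction.
    """
    return attenuate(J, depth, beta_d) + backscatter(depth, B_inf, beta_b)

# Illustrative inputs (random placeholders, not from the paper):
rng = np.random.default_rng(0)
J = rng.uniform(0.2, 0.9, size=(4, 4, 3))    # restored object radiance
depth = rng.uniform(1.0, 8.0, size=(4, 4))   # per-pixel range in meters
beta_d = np.array([0.40, 0.15, 0.10])        # RGB attenuation of target water
beta_b = np.array([0.35, 0.20, 0.12])        # RGB backscatter coefficients
B_inf = np.array([0.05, 0.25, 0.35])         # veiling-light color at infinity

I = fuse_waterbody(J, depth, beta_d, beta_b, B_inf)
```

Note how the model behaves at the limits: at zero range the output equals the object radiance, and at large range it converges to the waterbody's veiling-light color `B_inf`, which is what gives distant underwater regions their characteristic hue.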