Single-Step Latent Diffusion for Underwater Image Restoration

πŸ“… 2025-07-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing pixel-space diffusion methods suffer from high computational cost and artifact generation in underwater scenes with complex geometry and large depth variations. This paper proposes SLURPP, the first work to adapt pre-trained latent diffusion models (LDMs) for underwater image restoration. SLURPP explicitly models scene depth heterogeneity via geometric decomposition and integrates a physics-informed underwater imaging synthesis pipeline to jointly correct light attenuation and backscatter. The method enables single-step, efficient denoising without iterative sampling. Evaluated on both synthetic and real-world datasets, SLURPP achieves state-of-the-art performance: it improves PSNR by approximately 3 dB, accelerates inference by over 200× compared to existing diffusion-based methods, and yields restored images with accurate color fidelity, natural contrast, and high visual realism.

πŸ“ Abstract
Underwater image restoration algorithms seek to restore the color, contrast, and appearance of a scene that is imaged underwater. They are a critical tool in applications ranging from marine ecology and aquaculture to underwater construction and archaeology. While existing pixel-domain diffusion-based image restoration approaches are effective at restoring simple scenes with limited depth variation, they are computationally intensive and often generate unrealistic artifacts when applied to scenes with complex geometry and significant depth variation. In this work we overcome these limitations by combining a novel network architecture (SLURPP) with an accurate synthetic data generation pipeline. SLURPP combines pretrained latent diffusion models -- which encode strong priors on the geometry and depth of scenes -- with an explicit scene decomposition -- which allows one to model and account for the effects of light attenuation and backscattering. To train SLURPP we design a physics-based underwater image synthesis pipeline that applies varied and realistic underwater degradation effects to existing terrestrial image datasets. This approach enables the generation of diverse training data with dense medium/degradation annotations. We evaluate our method extensively on both synthetic and real-world benchmarks and demonstrate state-of-the-art performance. Notably, SLURPP is over 200× faster than existing diffusion-based methods while offering ~3 dB improvement in PSNR on synthetic benchmarks. It also offers compelling qualitative improvements on real-world data. Project website https://tianfwang.github.io/slurpp/.
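The synthesis pipeline described above applies attenuation and backscatter to terrestrial images with known depth. A minimal sketch of that forward process, using a simplified per-channel underwater image formation model (the function name, parameter names, and the exact model variant are assumptions; the paper's actual pipeline is more detailed):

```python
import numpy as np

def synthesize_underwater(clean, depth, beta_d, beta_b, veiling):
    """Degrade a terrestrial RGB image with a simplified underwater
    image formation model:

        I(x) = J(x) * exp(-beta_d * z(x)) + A * (1 - exp(-beta_b * z(x)))

    clean   : (H, W, 3) linear RGB in [0, 1]   (J, the clean scene)
    depth   : (H, W) scene depth in meters     (z)
    beta_d  : (3,) per-channel attenuation coefficients
    beta_b  : (3,) per-channel backscatter coefficients
    veiling : (3,) veiling (ambient) light color  (A)
    """
    z = depth[..., None]                   # broadcast depth over channels
    direct = clean * np.exp(-beta_d * z)   # attenuated direct signal
    backscatter = veiling * (1.0 - np.exp(-beta_b * z))
    return direct + backscatter
```

Sampling varied `beta_d`, `beta_b`, and `veiling` per image is what yields the "varied and realistic underwater degradation effects" with dense medium annotations mentioned in the abstract.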
Problem

Research questions and friction points this paper is trying to address.

Restore the color and contrast of underwater images
Reduce the computational cost of diffusion-based restoration in complex scenes
Avoid artifacts in scenes with large depth variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages pretrained latent diffusion models as scene priors
Uses an explicit scene decomposition to model attenuation and backscatter
Physics-based synthetic data pipeline with dense medium annotations
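The explicit scene decomposition amounts to inverting the formation model once the medium parameters are known. A self-contained sketch of that inversion step, under the same simplified model (parameter and function names are hypothetical; in SLURPP the medium/degradation quantities are predicted by the network rather than given):

```python
import numpy as np

def invert_decomposition(observed, depth, beta_d, beta_b, veiling, eps=1e-4):
    """Recover the clean scene J from an observed underwater image I
    given estimated medium parameters, by inverting

        I = J * exp(-beta_d * z) + A * (1 - exp(-beta_b * z))
    =>  J = (I - A * (1 - exp(-beta_b * z))) / exp(-beta_d * z)
    """
    z = depth[..., None]
    transmission = np.exp(-beta_d * z)                    # direct-signal attenuation
    backscatter = veiling * (1.0 - np.exp(-beta_b * z))   # additive veiling term
    restored = (observed - backscatter) / np.maximum(transmission, eps)
    return np.clip(restored, 0.0, 1.0)  # keep result in valid image range
```

Clamping the transmission with `eps` avoids blowing up noise at large depths, which is one reason a learned prior is still needed on top of the analytic inversion.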