Exploiting Diffusion Prior for Real-World Image Dehazing with Unpaired Training

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenging generalization problem of single-image dehazing under real-world conditions—where paired hazy and haze-free images are unavailable—this paper proposes Diff-Dehazer, the first unpaired dehazing framework integrating diffusion priors. Methodologically, it innovatively embeds a pre-trained diffusion model as a bidirectional mapping learner into CycleGAN, jointly leveraging physical imaging priors and a cross-modal (image + text) degradation modeling mechanism to explicitly capture real-world haze distributions and semantic knowledge. Extensive experiments on multiple real-world datasets demonstrate that Diff-Dehazer significantly outperforms existing state-of-the-art methods, with both qualitative and quantitative results confirming its superior generalization capability and robustness across diverse haze conditions. The source code is publicly available.

📝 Abstract
Unpaired training has been verified as one of the most effective paradigms for real-scene dehazing, learning from unpaired real-world hazy and clear images. Although numerous methods have been proposed, current approaches show limited generalization across varied real scenes due to limited feature representation and insufficient use of real-world priors. Inspired by the strong generative capability of diffusion models in producing both hazy and clear images, we exploit the diffusion prior for real-world image dehazing and propose an unpaired framework named Diff-Dehazer. Specifically, we leverage diffusion priors as bijective mapping learners within CycleGAN, a classic unpaired learning framework. Since physical priors contain pivotal statistical information about real-world data, we further excavate real-world knowledge by integrating physical priors into our framework. Furthermore, we introduce a new perspective for fully leveraging the representation ability of diffusion models by removing degradation in both the image and text modalities, thereby improving the dehazing effect. Extensive experiments on multiple real-world datasets demonstrate the superior performance of our method. Our code is available at https://github.com/ywxjm/Diff-Dehazer.
Problem

Research questions and friction points this paper is trying to address.

Addresses limited generalization in real-world image dehazing.
Exploits diffusion models for better feature representation.
Integrates physical priors to enhance dehazing performance.
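The physical prior most commonly used in dehazing is the atmospheric scattering model. As a minimal sketch (the `beta` and `airlight` values below are illustrative assumptions, not parameters from the paper), haze can be synthesized from a clear image and a depth map as follows:

```python
import numpy as np

def add_haze(clear, depth, beta=1.0, airlight=0.9):
    """Synthesize a hazy image via the atmospheric scattering model:
    I(x) = J(x) * t(x) + A * (1 - t(x)), with transmission t(x) = exp(-beta * d(x)).
    `beta` (scattering coefficient) and `airlight` (A) are illustrative values.
    """
    t = np.exp(-beta * depth)[..., None]   # transmission map, shape (H, W, 1)
    return clear * t + airlight * (1.0 - t)

# Toy example: uniform mid-gray "clear" image with linearly increasing depth.
clear = np.full((4, 4, 3), 0.5)
depth = np.linspace(0.0, 3.0, 16).reshape(4, 4)
hazy = add_haze(clear, depth)
```

At zero depth the pixel is unchanged (t = 1), while distant pixels drift toward the airlight color, which is exactly the statistical structure an unpaired dehazing model can exploit as a prior.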
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages diffusion priors for image dehazing.
Integrates physical priors into the CycleGAN framework.
Removes degradation in image and text modalities.
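The bijective mapping idea above follows the standard CycleGAN cycle-consistency structure. A toy sketch of that loss (the stand-in maps `G` and `F` below are hypothetical placeholders for the paper's diffusion-prior generators, not the authors' actual objective):

```python
import numpy as np

def cycle_loss(G, F, hazy_batch, clear_batch):
    """L1 cycle-consistency loss over both directions:
    F(G(hazy)) should reconstruct hazy, G(F(clear)) should reconstruct clear."""
    loss_h = np.mean(np.abs(F(G(hazy_batch)) - hazy_batch))
    loss_c = np.mean(np.abs(G(F(clear_batch)) - clear_batch))
    return loss_h + loss_c

# Toy invertible stand-ins: G "brightens" (dehazes), F "darkens" (re-hazes).
G = lambda x: x + 0.2
F = lambda x: x - 0.2
hazy = np.random.rand(2, 8, 8, 3)
clear = np.random.rand(2, 8, 8, 3)
loss = cycle_loss(G, F, hazy, clear)
print(loss)  # near zero, since G and F are exact inverses
```

When the two mappings are mutual inverses the loss vanishes, which is what lets the framework train from unpaired hazy and clear images without any pixel-aligned ground truth.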
👥 Authors
Yunwei Lan
University of Science and Technology of China
Zhigao Cui
Rocket Force University of Engineering
Chang Liu
University of Science and Technology of China
Jialun Peng
University of Science and Technology of China
Nian Wang
UT Southwestern Medical Center
Xin Luo
University of Science and Technology of China
Dong Liu
University of Science and Technology of China