DA2Diff: Exploring Degradation-aware Adaptive Diffusion Priors for All-in-One Weather Restoration

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world image degradation under adverse weather conditions—such as rain, snow, and fog—is highly heterogeneous and complex, posing significant challenges for unified, robust restoration. Method: This paper proposes a degradation-aware adaptive diffusion model, guided by CLIP, that enables single-model multi-weather restoration. It pioneers the use of the CLIP semantic space to explicitly model weather-specific degradations, introducing learnable weather prompts and a dynamic expert-routing mechanism that jointly integrate degradation awareness, prompt-guided generation, and expert-adaptive scheduling. The architecture comprises CLIP-space prompt learning, conditional diffusion-based reconstruction, and a parallel multi-expert recovery framework. Contribution/Results: The model achieves state-of-the-art performance across multiple real-world weather datasets, consistently outperforming existing methods in both quantitative metrics (PSNR/SSIM) and perceptual quality.
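The CLIP-space prompt learning described above can be illustrated with a minimal sketch: one learnable embedding per weather type is aligned to degraded-image features via a prompt-image similarity (cross-entropy) constraint. All names here (`WeatherPrompts`, `embed_dim`, the temperature value) are illustrative assumptions, not the paper's exact design; real CLIP image features would replace the random stand-ins.

```python
# Hedged sketch: learnable weather prompts aligned to degraded images in a
# CLIP-like embedding space. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeatherPrompts(nn.Module):
    """One learnable prompt embedding per weather type (e.g. rain/snow/haze)."""
    def __init__(self, num_weathers=3, embed_dim=512):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_weathers, embed_dim))

    def similarity(self, image_features):
        # Cosine similarity between image features and each weather prompt.
        img = F.normalize(image_features, dim=-1)   # (B, D)
        txt = F.normalize(self.prompts, dim=-1)     # (K, D)
        return img @ txt.t()                        # (B, K) similarity logits

def prompt_image_loss(logits, weather_labels, temperature=0.07):
    # Each degraded image should be most similar to the prompt of its
    # own weather type -> cross-entropy over the similarity logits.
    return F.cross_entropy(logits / temperature, weather_labels)

# Toy usage with random stand-ins for frozen CLIP image features.
feats = torch.randn(4, 512)
labels = torch.tensor([0, 1, 2, 0])  # 0=rain, 1=snow, 2=haze
prompts = WeatherPrompts()
loss = prompt_image_loss(prompts.similarity(feats), labels)
```

In this sketch the CLIP encoders stay frozen; only the prompt embeddings receive gradients, which mirrors the "prompt-image similarity constraints in the CLIP space" idea from the summary.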

📝 Abstract
Image restoration under adverse weather conditions is a critical task for many vision-based applications. Recent all-in-one frameworks that handle multiple weather degradations within a unified model have shown potential. However, the diversity of degradation patterns across different weather conditions, as well as the complex and varied nature of real-world degradations, pose significant challenges for multiple weather removal. To address these challenges, we propose an innovative diffusion paradigm with degradation-aware adaptive priors for all-in-one weather restoration, termed DA2Diff. It is a new exploration that applies CLIP to perceive degradation-aware properties for better multi-weather restoration. Specifically, we deploy a set of learnable prompts to capture degradation-aware representations by the prompt-image similarity constraints in the CLIP space. By aligning the snowy/hazy/rainy images with snow/haze/rain prompts, each prompt contributes to different weather degradation characteristics. The learned prompts are then integrated into the diffusion model via the designed weather-specific prompt guidance module, making it possible to restore multiple weather types. To further improve the adaptiveness to complex weather degradations, we propose a dynamic expert selection modulator that employs a dynamic weather-aware router to flexibly dispatch varying numbers of restoration experts for each weather-distorted image, allowing the diffusion model to restore diverse degradations adaptively. Experimental results substantiate the favorable performance of DA2Diff over state-of-the-art methods in both quantitative and qualitative evaluations. Source code will be available after acceptance.
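The dynamic expert selection modulator described in the abstract — a weather-aware router dispatching a varying number of restoration experts per image — can be sketched as threshold-gated mixture-of-experts routing. The class name, expert architecture, and threshold value below are illustrative assumptions; the paper's actual modulator may differ.

```python
# Hedged sketch: a weather-aware router gates a pool of restoration experts,
# activating only those whose score clears a threshold, so the number of
# active experts varies per input image. All names are assumptions.
import torch
import torch.nn as nn

class DynamicExpertRouter(nn.Module):
    def __init__(self, dim=64, num_experts=4, threshold=0.2):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.GELU())
            for _ in range(num_experts)
        )
        self.threshold = threshold

    def forward(self, x):
        # x: (B, C, H, W). Route on globally pooled features.
        scores = torch.softmax(self.gate(x.mean(dim=(2, 3))), dim=-1)  # (B, E)
        mask = (scores >= self.threshold).float()   # sparse: k varies per image
        weights = scores * mask
        weights = weights / weights.sum(dim=-1, keepdim=True).clamp_min(1e-6)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            w = weights[:, e].view(-1, 1, 1, 1)
            if w.any():  # skip experts no image in the batch selected
                out = out + w * expert(x)
        return out

# Toy usage on random feature maps.
x = torch.randn(2, 64, 8, 8)
router = DynamicExpertRouter()
y = router(x)
```

With `softmax` over 4 experts, the top score is always at least 0.25, so at least one expert clears the 0.2 threshold and the output is never degenerate.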
Problem

Research questions and friction points this paper is trying to address.

Handling diverse weather degradation patterns in images
Adapting diffusion models for multi-weather restoration
Dynamic expert selection for complex weather degradations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses CLIP for degradation-aware multi-weather restoration
Integrates learned prompts into diffusion model
Employs dynamic expert selection for adaptiveness
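The second innovation point — integrating the learned prompts into the diffusion model — could take many forms; one common pattern is feature-wise modulation (FiLM-style scale/shift) of diffusion features by the prompt vector. The module below is a minimal sketch under that assumption, not the paper's actual weather-specific prompt guidance module.

```python
# Hedged sketch: injecting a learned weather prompt into a diffusion feature
# block via FiLM-style scale/shift modulation. Module name, channel sizes,
# and the choice of FiLM are illustrative assumptions.
import torch
import torch.nn as nn

class PromptGuidanceBlock(nn.Module):
    def __init__(self, channels=64, prompt_dim=512):
        super().__init__()
        self.to_scale_shift = nn.Linear(prompt_dim, channels * 2)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat, prompt):
        # feat: (B, C, H, W) diffusion features; prompt: (B, D) weather prompt.
        scale, shift = self.to_scale_shift(prompt).chunk(2, dim=-1)
        feat = feat * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.conv(feat)

# Toy usage: modulate random features with a random prompt embedding.
feat = torch.randn(2, 64, 16, 16)
prompt = torch.randn(2, 512)
block = PromptGuidanceBlock()
out = block(feat, prompt)
```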
Jiamei Xiong
School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Xuefeng Yan
Molecular Imaging Branch/National Institute of Mental Health/National Institutes of Health
Molecular imaging
Yongzhen Wang
College of Computer Science and Technology, Anhui University of Technology, Ma'anshan 243099, China
Wei Zhao
School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Xiao-Ping Zhang
Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Mingqiang Wei
Professor at Nanjing University of Aeronautics and Astronautics
3D Vision · Multimodal Fusion · Computer Graphics · Deep Geometry Learning · CAD