🤖 AI Summary
Real-world image degradation under adverse weather conditions—such as rain, snow, and fog—is highly heterogeneous and complex, posing significant challenges for unified, robust restoration. Method: This paper proposes a CLIP-based, degradation-aware adaptive diffusion model (DA2Diff) that enables single-model, multi-weather restoration. It pioneers the use of the CLIP semantic space to explicitly model weather-specific degradations, introducing learnable weather prompts and a dynamic expert routing mechanism that jointly integrate degradation awareness, prompt-guided generation, and expert-adaptive scheduling. The architecture comprises CLIP-space prompt learning, conditional diffusion-based reconstruction, and a parallel multi-expert recovery framework. Contribution/Results: The model achieves state-of-the-art performance across multiple real-world weather datasets, consistently outperforming existing methods in both quantitative metrics (PSNR/SSIM) and perceptual quality.
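The CLIP-space prompt learning described above can be illustrated with a minimal sketch. The idea is that each weather type (snow, haze, rain) gets a learnable prompt embedding, and a prompt-image similarity constraint pulls each degraded image's CLIP feature toward the prompt of its own weather type, a contrastive cross-entropy over cosine similarities. This is an assumption-laden toy version in NumPy (the function name, feature dimensions, and temperature value are illustrative, not the paper's actual implementation):

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def prompt_alignment_loss(image_feats, prompt_feats, labels, temperature=0.07):
    """Cross-entropy over prompt-image similarities: each weather-degraded
    image should be most similar (in CLIP space) to the learnable prompt
    of its own weather type. `labels[i]` indexes the correct prompt."""
    logits = cosine_sim(image_feats, prompt_feats) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Minimizing this loss with respect to the prompt embeddings makes each prompt specialize on one degradation type, which is what lets the diffusion model later condition on degradation-specific guidance.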
📝 Abstract
Image restoration under adverse weather conditions is a critical task for many vision-based applications. Recent all-in-one frameworks that handle multiple weather degradations within a unified model have shown potential. However, the diversity of degradation patterns across weather conditions, as well as the complex and varied nature of real-world degradations, poses significant challenges for multi-weather removal. To address these challenges, we propose an innovative diffusion paradigm with degradation-aware adaptive priors for all-in-one weather restoration, termed DA2Diff. It is a new exploration that applies CLIP to perceive degradation-aware properties for better multi-weather restoration. Specifically, we deploy a set of learnable prompts to capture degradation-aware representations via prompt-image similarity constraints in the CLIP space. By aligning snowy/hazy/rainy images with snow/haze/rain prompts, each prompt comes to encode the characteristics of a different weather degradation. The learned prompts are then integrated into the diffusion model via the designed weather-specific prompt guidance module, making it possible to restore multiple weather types. To further improve adaptiveness to complex weather degradations, we propose a dynamic expert selection modulator that employs a dynamic weather-aware router to flexibly dispatch varying numbers of restoration experts to each weather-distorted image, allowing the diffusion model to restore diverse degradations adaptively. Experimental results substantiate the favorable performance of DA2Diff over state-of-the-art methods in both quantitative and qualitative evaluation. Source code will be available after acceptance.
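The dynamic expert selection modulator described in the abstract can be sketched as a mixture-of-experts router that keeps a *variable* number of experts per image rather than a fixed top-k: every expert whose gate weight clears a threshold is kept, the kept weights are renormalized, and the expert outputs are mixed. The following NumPy toy (function names, the threshold value, and the scalar experts are all illustrative assumptions, not the paper's implementation) shows that gating pattern:

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def dynamic_route(features, router_w, experts, threshold=0.2):
    """Weather-aware routing with a variable number of experts per sample:
    keep every expert whose gate weight exceeds `threshold`, renormalize
    the kept weights, and return the weighted mix of expert outputs."""
    gates = softmax(features @ router_w)          # (batch, num_experts)
    outputs = []
    for feat, gate in zip(features, gates):
        keep = np.where(gate > threshold)[0]
        if keep.size == 0:                        # fall back to the best expert
            keep = np.array([gate.argmax()])
        w = gate[keep] / gate[keep].sum()
        outputs.append(sum(wi * experts[i](feat) for wi, i in zip(w, keep)))
    return np.stack(outputs)
```

A heavily rain-degraded image would concentrate its gate weights on rain-specialized experts (one expert kept), while a mixed rain-plus-haze image spreads weight across several experts, so more of them are dispatched, which is the adaptiveness the modulator is designed to provide.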