🤖 AI Summary
Real-world videos are often degraded by unknown weather conditions that evolve smoothly over time; however, existing restoration methods typically ignore the temporal continuity of such degradations, limiting the quality of their results. This work proposes a unified video restoration framework for smoothly evolving unknown degradations (SEUD) that, for the first time, explicitly models the temporal smoothness of both degradation type and intensity. The proposed ORCANet adaptively combines static and dynamic prompts: a Coarse Intensity Estimation Dehazing module estimates haze intensity from physical priors and supplies coarse dehazed features as initialization, while a Flow Prompt Generation module extracts degradation features and produces discriminative prompts under label-aware supervision. Extensive experiments demonstrate that the method significantly outperforms current image and video restoration approaches across diverse degradation scenarios, achieving state-of-the-art restoration quality, temporal consistency, and robustness.
📝 Abstract
All-in-one image restoration aims to recover clean images from diverse unknown degradations using a single model. However, extending this task to videos poses unique challenges. Existing approaches primarily focus on frame-wise degradation variation, overlooking the temporal continuity that naturally exists in real-world degradation processes. In practice, degradation types and intensities evolve smoothly over time, and multiple degradations may coexist or transition gradually. In this paper, we introduce the Smoothly Evolving Unknown Degradations (SEUD) scenario, where both the active degradation set and the degradation intensity change continuously over time. To support this scenario, we design a flexible synthesis pipeline that generates temporally coherent videos with single, compound, and evolving degradations. To address the challenges of the SEUD scenario, we propose an all-in-One Recurrent Conditional and Adaptive prompting Network (ORCANet). First, a Coarse Intensity Estimation Dehazing (CIED) module estimates haze intensity using physical priors and provides coarse dehazed features as initialization. Second, a Flow Prompt Generation (FPG) module extracts degradation features and generates both static prompts, which capture segment-level degradation types, and dynamic prompts, which adapt to frame-level intensity variations. Furthermore, a label-aware supervision mechanism improves the discriminability of static prompt representations under different degradations. Extensive experiments show that ORCANet achieves superior restoration quality, temporal consistency, and robustness over image-based and video-based baselines. Code is available at https://github.com/Friskknight/ORCANet-SEUD.
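The abstract does not specify which physical prior the CIED module uses to estimate haze intensity. As a purely illustrative sketch (not the paper's implementation), one classic choice for such a prior is the dark channel: haze-free outdoor images tend to have a near-zero per-patch minimum over color channels, so the mean of the dark channel rises with haze thickness. A minimal pure-Python version, assuming frames as nested H×W×3 lists of floats in [0, 1]:

```python
# Hedged sketch: coarse per-frame haze intensity via a dark-channel-style
# prior. This is an assumed, simplified stand-in for the physical prior
# a module like CIED might use; the paper does not specify its prior.

def dark_channel(image, patch=3):
    """Per-pixel minimum over RGB and a local patch x patch window.

    image: H x W x 3 nested lists of floats in [0, 1].
    """
    h, w = len(image), len(image[0])
    # Minimum across the three color channels at each pixel.
    chan_min = [[min(image[y][x]) for x in range(w)] for y in range(h)]
    r = patch // 2
    dark = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            dark[y][x] = min(chan_min[yy][xx] for yy in ys for xx in xs)
    return dark

def haze_intensity(image, patch=3):
    """Coarse scalar haze estimate: mean of the dark channel.

    Near 0 for haze-free frames; grows toward 1 as haze thickens.
    """
    dark = dark_channel(image, patch)
    return sum(sum(row) for row in dark) / (len(dark) * len(dark[0]))

# A colorful (low dark channel) frame vs. a uniformly bright, hazy frame.
clear = [[[0.9, 0.1, 0.2] for _ in range(4)] for _ in range(4)]
hazy = [[[0.8, 0.8, 0.8] for _ in range(4)] for _ in range(4)]
print(haze_intensity(clear))  # ≈ 0.1
print(haze_intensity(hazy))   # ≈ 0.8
```

Applied per frame, such a scalar estimate varies smoothly along a video with smoothly evolving haze, which is consistent with the SEUD assumption that degradation intensity changes continuously over time.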