All-in-One Video Restoration under Smoothly Evolving Unknown Weather Degradations

📅 2026-01-02
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Real-world videos are often degraded by unknown weather conditions that evolve smoothly over time; however, existing restoration methods typically ignore the temporal continuity of such degradations, limiting their ability to achieve high-quality results. This work proposes a unified video restoration framework tailored for smoothly evolving unknown degradations (SEUD), which, for the first time, explicitly models the temporal smoothness of both degradation type and intensity. The proposed ORCANet employs an adaptive mechanism combining static and dynamic prompts, initialized by a Coarse Intensity Estimation Dehazing module and enhanced by a Flow Prompt Generation module that extracts degradation features and produces discriminative prompts under label-aware supervision. Extensive experiments demonstrate that the method significantly outperforms current image and video restoration approaches across diverse degradation scenarios, achieving state-of-the-art performance in restoration quality, temporal consistency, and robustness.
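The summary describes an adaptive mechanism that combines segment-level static prompts with frame-level dynamic prompts. The paper does not give implementation details, so the following is only a hypothetical sketch of such a combination: `adaptive_prompt`, the soft type selection over a prompt bank, and the intensity-scaled dynamic term are all illustrative choices, not the actual ORCANet design.

```python
# Hypothetical sketch: combine a segment-level static prompt (degradation
# type) with a frame-level dynamic prompt (intensity). Shapes and names are
# illustrative only.
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_prompt(frame_feat, type_logits, intensity, prompt_bank):
    """Soft-select a static prompt from a bank of per-degradation-type
    embeddings, then add a dynamic component scaled by the estimated
    per-frame degradation intensity.

    frame_feat:  (D,) feature vector for the current frame
    type_logits: (K,) segment-level degradation-type scores
    intensity:   scalar frame-level intensity estimate
    prompt_bank: (K, D) learnable prompt embedding per degradation type
    """
    weights = softmax(type_logits)   # segment-level type distribution
    static = weights @ prompt_bank   # (D,) type-conditioned static prompt
    dynamic = intensity * frame_feat # frame-level intensity modulation
    return static + dynamic
```

With `intensity = 0` the output reduces to the static prompt alone, which mirrors the described split: type information stays stable within a segment while intensity varies per frame.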

📝 Abstract
All-in-one image restoration aims to recover clean images from diverse unknown degradations using a single model, but extending this task to videos poses unique challenges. Existing approaches primarily focus on frame-wise degradation variation, overlooking the temporal continuity that naturally exists in real-world degradation processes. In practice, degradation types and intensities evolve smoothly over time, and multiple degradations may coexist or transition gradually. In this paper, we introduce the Smoothly Evolving Unknown Degradations (SEUD) scenario, where both the active degradation set and the degradation intensity change continuously over time. To support this scenario, we design a flexible synthesis pipeline that generates temporally coherent videos with single, compound, and evolving degradations. To address the challenges of the SEUD scenario, we propose an all-in-One Recurrent Conditional and Adaptive prompting Network (ORCANet). First, a Coarse Intensity Estimation Dehazing (CIED) module estimates haze intensity using physical priors and provides coarse dehazed features as initialization. Second, a Flow Prompt Generation (FPG) module extracts degradation features and generates both static prompts, which capture segment-level degradation types, and dynamic prompts, which adapt to frame-level intensity variations. Furthermore, a label-aware supervision mechanism improves the discriminability of static prompt representations under different degradations. Extensive experiments show that ORCANet achieves superior restoration quality, temporal consistency, and robustness over image- and video-based baselines. Code is available at https://github.com/Friskknight/ORCANet-SEUD.
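The abstract says the CIED module estimates haze intensity "using physical priors" without naming them. A classical physical prior for haze is the dark channel prior of the atmospheric scattering model, so the sketch below assumes that prior purely for illustration; the actual CIED module is a learned component and may use a different prior entirely.

```python
# Minimal sketch of coarse haze-intensity estimation via the dark channel
# prior -- an assumed stand-in for the unspecified "physical priors" in the
# CIED module, not the paper's actual method.
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB followed by a local minimum filter.
    img: float array of shape (H, W, 3) with values in [0, 1]."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def coarse_haze_intensity(img):
    """Mean dark-channel value as a scalar haze proxy in [0, 1]: haze-free
    outdoor images tend to have near-zero dark channels, while dense haze
    pushes all channels toward the bright airlight value."""
    return float(dark_channel(img).mean())
```

Such a scalar estimate could initialize a per-frame intensity signal, which matches the abstract's description of CIED providing coarse dehazed features as initialization for the rest of the network.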
Problem

Research questions and friction points this paper is trying to address.

video restoration
temporal continuity
unknown degradations
smoothly evolving degradations
all-in-one restoration
Innovation

Methods, ideas, or system contributions that make the work stand out.

video restoration
smoothly evolving degradation
recurrent adaptive prompting
temporal consistency
all-in-one model
👥 Authors
Wenrui Li
Assistant Professor, University of Connecticut
Statistics, Network science, Biostatistics
Hongtao Chen
Department of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
Yao Xiao
Department of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
Wangmeng Zuo
School of Computer Science and Technology, Harbin Institute of Technology
Computer Vision, Image Processing, Generative AI, Deep Learning, Biometrics
Jiantao Zhou
Professor, Department of Computer and Information Science, University of Macau
Information Forensics and Security, Multimedia Signal Processing, Machine Learning
Yonghong Tian
School of AI for Science, Shenzhen Graduate School, Peking University, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China; School of Computer Science, Peking University, Beijing, China
Xiaopeng Fan
Professor, Harbin Institute of Technology
Video/Image, Wireless