AI Summary
Existing video deraining methods rely heavily on paired synthetic data, resulting in poor generalization to real-world rainy scenes. To address this, we propose a dual-branch spatiotemporal state-space model that jointly performs spatial feature extraction and inter-frame temporal modeling, augmented by dynamic stacked filters for pixel-wise adaptive feature optimization. We further introduce a semi-supervised median-stacking loss and a sparsity-prior-driven pseudo-label generation strategy. Moreover, we construct RainTrack, the first real-world rainy-video benchmark explicitly designed for object detection and tracking. Our method eliminates dependence on synthetic training data and achieves state-of-the-art performance (superior PSNR/SSIM) on both multi-source synthetic and real-world videos, with efficient inference. Crucially, it significantly enhances the robustness of downstream detection and tracking tasks under rainy conditions.
Abstract
Significant progress has been made in video restoration under rainy conditions over the past decade, largely propelled by advancements in deep learning. Nevertheless, existing methods that depend on paired data struggle to generalize effectively to real-world scenarios, primarily due to the disparity between synthetic and authentic rain effects. To address these limitations, we propose a dual-branch spatio-temporal state-space model to enhance rain streak removal in video sequences. Specifically, we design spatial and temporal state-space model layers to extract spatial features and incorporate temporal dependencies across frames, respectively. To improve multi-frame feature fusion, we derive a dynamic stacking filter, which adaptively approximates statistical filters for superior pixel-wise feature refinement. Moreover, we develop a median stacking loss to enable semi-supervised learning by generating pseudo-clean patches based on the sparsity prior of rain. To further explore the capacity of deraining models in supporting other vision-based tasks in rainy environments, we introduce a novel real-world benchmark focused on object detection and tracking in rainy conditions. Our method is extensively evaluated across multiple benchmarks containing numerous synthetic and real-world rainy videos, consistently demonstrating its superiority in quantitative metrics, visual quality, efficiency, and its utility for downstream tasks.
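The pseudo-label idea behind the median stacking loss can be illustrated with a minimal sketch. Rain streaks are sparse in time: a given pixel is rarely occluded by rain in more than half of the frames in a short window, so the per-pixel temporal median over aligned frames approximates the clean background. The function names below are hypothetical, and the paper's exact loss formulation may differ; this is only a sketch of the underlying prior, assuming pre-aligned frames:

```python
import numpy as np

def median_stack_pseudo_clean(frames):
    """Estimate a pseudo-clean frame via the per-pixel temporal median.

    frames: array of shape (T, H, W) or (T, H, W, C), assumed already
    aligned across time. Because rain occludes each pixel in only a
    minority of frames (sparsity prior), the median suppresses streaks.
    """
    return np.median(frames, axis=0)

def median_stacking_loss(pred, frames):
    """L1 distance between a derained prediction and the median-stacked
    pseudo-label. Hypothetical form used for illustration only."""
    target = median_stack_pseudo_clean(frames)
    return float(np.abs(pred - target).mean())

# Toy example: a static background with bright streaks in 2 of 5 frames.
frames = np.ones((5, 4, 4))
frames[0, 0, 0] += 10.0  # rain streak in frame 0
frames[1, 0, 0] += 10.0  # rain streak in frame 1
pseudo_clean = median_stack_pseudo_clean(frames)  # streaks removed
```

Because the streak corrupts only a minority of the temporal samples at each pixel, the median recovers the background exactly in this toy case; a prediction equal to the true background therefore incurs zero loss.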