🤖 AI Summary
To address the scarcity of high-quality paired datasets and efficient models for raw-domain video denoising in realistic dynamic scenes, this work introduces ReCRVD—the first large-scale, high-ISO (1600–25600) noisy-clean paired video dataset comprising 120 sequences—built with a novel screen-recapture protocol that preserves motion authenticity. It also proposes RViDeformer, a lightweight Transformer architecture integrating multi-scale windowed attention (local, downsampled, and inter-frame), multi-branch spatiotemporal modeling, and structural reparameterization, capturing long-range dependencies at significantly reduced computational cost. The network is trained in both supervised and unsupervised manners, building on raw-domain noise modeling and video registration. Experiments show state-of-the-art denoising performance, and models trained on ReCRVD generalize better to real-world outdoor noisy videos than those trained on the previous CRVD benchmark. Both code and dataset are publicly released, substantially enhancing practical applicability.
📝 Abstract
In recent years, raw video denoising has garnered increased attention due to its consistency with the imaging process and the well-studied noise modeling in the raw domain. However, two problems still hinder denoising performance. Firstly, there is no large dataset with realistic motions for supervised raw video denoising, as capturing noisy and clean frames of real dynamic scenes is difficult. To address this, we propose recapturing existing high-resolution videos displayed on a 4K screen with high-low ISO settings to construct noisy-clean paired frames. In this way, we construct a video denoising dataset (named ReCRVD) with 120 groups of noisy-clean videos, with ISO values ranging from 1600 to 25600. Secondly, while non-local temporal-spatial attention is beneficial for denoising, it often incurs heavy computation costs. We propose an efficient raw video denoising transformer network (RViDeformer) that explores both short- and long-distance correlations. Specifically, we propose multi-branch spatial and temporal attention modules, which explore patch correlations from the local window, local low-resolution window, global downsampled window, and neighbor-involved window, and then fuse them together. We employ reparameterization to reduce computation costs. Our network is trained in both supervised and unsupervised manners, achieving the best performance compared with state-of-the-art methods. Additionally, the model trained with our proposed dataset (ReCRVD) outperforms the model trained with the previous benchmark dataset (CRVD) when evaluated on real-world outdoor noisy videos. Our code and dataset are available at https://github.com/cao-cong/RViDeformer.
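To make the multi-branch attention idea concrete, here is a minimal NumPy sketch of two of the branches the abstract names: attention inside a local window, and attention over a globally downsampled map (cheap long-range context) fused with the local result. All shapes, the average-based fusion, and the nearest-neighbour upsampling are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(tokens):
    """Plain self-attention over a token set (features reused as Q = K = V)."""
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    return softmax(scores) @ tokens

# Toy feature map: an 8x8 spatial grid with 4-dim features per position.
rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 8, 4))

# Branch 1: local attention inside non-overlapping 4x4 windows.
local = np.zeros_like(feat)
for i in range(0, 8, 4):
    for j in range(0, 8, 4):
        win = feat[i:i + 4, j:j + 4].reshape(-1, 4)
        local[i:i + 4, j:j + 4] = window_attention(win).reshape(4, 4, 4)

# Branch 2: attention over a 2x-downsampled map, so every token attends to
# the whole frame at a quarter of the token count; the result is upsampled
# back by nearest-neighbour repetition.
down = feat.reshape(4, 2, 4, 2, 4).mean(axis=(1, 3))      # 4x4 map
glob = window_attention(down.reshape(-1, 4)).reshape(4, 4, 4)
glob_up = glob.repeat(2, axis=0).repeat(2, axis=1)        # back to 8x8

# Fusion: a simple average stands in for the learned fusion of the branches.
fused = 0.5 * (local + glob_up)
print(fused.shape)  # (8, 8, 4)
```

The point of the downsampled branch is that full-frame attention costs grow quadratically with token count, so attending over a pooled map recovers long-range correlations at a fraction of the cost of global attention at full resolution.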
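The reparameterization mentioned above relies on the linearity of convolution: parallel training-time branches can be collapsed into one kernel for inference. A minimal single-channel sketch (the 3x3 + 1x1 branch pair is a common RepVGG-style example, assumed here for illustration rather than taken from the paper's code):

```python
import numpy as np

def conv2d_same(x, k):
    """Single-channel stride-1 'same' convolution (cross-correlation form)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
k3 = rng.standard_normal((3, 3))   # training-time 3x3 branch
k1 = rng.standard_normal((1, 1))   # parallel training-time 1x1 branch

# Reparameterization: embed the 1x1 kernel at the centre of the 3x3 kernel
# and sum, so both branches collapse into a single 3x3 conv at inference.
k_merged = k3.copy()
k_merged[1, 1] += k1[0, 0]

x = rng.standard_normal((8, 8))
y_branches = conv2d_same(x, k3) + conv2d_same(x, k1)
y_merged = conv2d_same(x, k_merged)
assert np.allclose(y_branches, y_merged)
```

Because the merge is exact, the multi-branch structure adds capacity only during training; at inference the network pays the cost of a single branch, which is how reparameterization reduces computation without changing outputs.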