🤖 AI Summary
This report covers the AIM 2025 Low-Light RAW Video Denoising Challenge, which targets the difficult problem of denoising low-light RAW video under the short-exposure constraint imposed by frame rate. Participating methods jointly exploit temporal redundancy and sensor-specific, signal-dependent noise characteristics, performing multi-frame modeling directly in the linear RAW domain with noise priors calibrated to smartphone CMOS sensors. A key contribution is the first large-scale low-light RAW video benchmark — 756 ten-frame sequences captured across 14 mainstream mobile sensors, with high-SNR reference frames obtained by burst averaging — together with an evaluation protocol that ranks entries by the mean of their PSNR and SSIM ranks. On the private test set, the best-performing submission surpasses existing approaches by +1.82 dB PSNR and +0.032 SSIM on the tenth frame, a substantial advance for mobile low-light video imaging.
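The two sensor-side ideas above — signal-dependent noise and burst-averaged high-SNR references — can be illustrated with a minimal simulation. This sketch assumes a heteroscedastic Gaussian (shot + read) noise model, a common approximation for CMOS sensors; the actual calibration used in the challenge is not specified here, and all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_sensor_noise(signal, shot_gain=0.01, read_sigma=0.002):
    """Signal-dependent noise: variance grows linearly with the signal
    (shot noise) on top of a constant read-noise floor.
    shot_gain and read_sigma are illustrative, not calibrated values."""
    sigma = np.sqrt(shot_gain * signal + read_sigma**2)
    return signal + rng.normal(0.0, sigma)

# A clean linear-RAW frame (dim values, as in low light) and a
# 10-frame noisy burst of the same static scene.
clean = rng.uniform(0.02, 0.1, size=(64, 64))
burst = np.stack([add_sensor_noise(clean) for _ in range(10)])

# Burst averaging: averaging N aligned frames cuts the noise variance
# by a factor of N, yielding a high-SNR reference frame.
reference = burst.mean(axis=0)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(rmse(burst[0], clean))   # single-frame noise level
print(rmse(reference, clean))  # roughly 1/sqrt(10) of the above
```

The same averaging principle is what makes burst capture a practical source of ground truth when long exposures are ruled out by the frame rate.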
📝 Abstract
This paper reviews the AIM 2025 (Advances in Image Manipulation) Low-Light RAW Video Denoising Challenge. The task is to develop methods that denoise low-light RAW video by exploiting temporal redundancy while operating under exposure-time limits imposed by frame rate and adapting to sensor-specific, signal-dependent noise. We introduce a new benchmark of 756 ten-frame sequences captured with 14 smartphone camera sensors across nine conditions (illumination: 1/5/10 lx; exposure: 1/24, 1/60, 1/120 s), with high-SNR references obtained via burst averaging. Participants process linear RAW sequences and output the denoised 10th frame while preserving the Bayer pattern. Submissions are evaluated on a private test set using full-reference PSNR and SSIM, with final ranking given by the mean of per-metric ranks. This report describes the dataset, challenge protocol, and submitted approaches.
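The mean-of-ranks protocol described above can be sketched in a few lines. The team names and scores below are made up for illustration; higher PSNR and SSIM are better, so each metric assigns rank 1 to its best entry, and the final ordering uses the average of the two per-metric ranks (lower is better).

```python
# Hypothetical leaderboard; higher is better for both metrics.
entries = {
    "team_a": {"psnr": 42.1, "ssim": 0.951},
    "team_b": {"psnr": 41.6, "ssim": 0.958},
    "team_c": {"psnr": 40.9, "ssim": 0.940},
}

def rank_by(metric):
    """Rank teams on one metric: the best score receives rank 1."""
    order = sorted(entries, key=lambda t: entries[t][metric], reverse=True)
    return {team: i + 1 for i, team in enumerate(order)}

psnr_rank = rank_by("psnr")
ssim_rank = rank_by("ssim")

# Final score = mean of the per-metric ranks; lower is better.
mean_rank = {t: (psnr_rank[t] + ssim_rank[t]) / 2 for t in entries}
leaderboard = sorted(entries, key=lambda t: mean_rank[t])
print(leaderboard)
```

Note that a team can win overall without leading either metric outright: in this toy example team_a tops PSNR while team_b tops SSIM, and the mean rank ties them ahead of team_c.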