🤖 AI Summary
To address the lack of paired training data and effective methods for enhancing extremely low-light RAW images (down to 0.0001 lux), this paper introduces SIED—the first large-scale paired RAW–sRGB dataset covering this ultra-low-illumination regime—and proposes a diffusion-based enhancement framework. The framework incorporates an Adaptive Illumination Correction Module (AICM) to recover accurate exposure, enforces color fidelity in the sRGB domain via a color consistency loss, and leverages the generative power and intrinsic denoising capability of diffusion models. Evaluated on SIED and multiple public benchmarks, the method significantly improves visibility, structural fidelity, and color accuracy, producing perceptually usable reconstructions even under extreme low-SNR conditions. Both the code and dataset are publicly released.
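The summary mentions a color consistency loss that enforces color fidelity in the sRGB domain. The paper's exact formulation is not given here; a common way to realize such a constraint is to penalize the angle between per-pixel RGB vectors of the prediction and the reference, so that hue and saturation are matched independently of brightness. The following sketch is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def color_consistency_loss(pred_srgb, ref_srgb, eps=1e-6):
    """Hypothetical color consistency loss: mean (1 - cosine similarity)
    between per-pixel RGB vectors of prediction and reference.
    Inputs are float arrays of shape (..., 3) in [0, 1]."""
    dot = np.sum(pred_srgb * ref_srgb, axis=-1)
    norm = (np.linalg.norm(pred_srgb, axis=-1) *
            np.linalg.norm(ref_srgb, axis=-1)) + eps
    cos = np.clip(dot / norm, -1.0, 1.0)
    return float(np.mean(1.0 - cos))  # 0 when colors are perfectly aligned
```

A loss of this form is brightness-invariant, which is why frameworks that pair it with a separate exposure-correction module (like the AICM described above) can let each term handle one aspect of the restoration.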
📝 Abstract
Learning-based methods have made promising advances in low-light RAW image enhancement, but their capability in extremely dark scenes, where the environmental illuminance drops as low as 0.0001 lux, remains unexplored due to the lack of corresponding datasets. To this end, we propose a paired-to-paired data synthesis pipeline capable of generating well-calibrated extremely low-light RAW images at three precise illuminance ranges of 0.01-0.1 lux, 0.001-0.01 lux, and 0.0001-0.001 lux, together with high-quality sRGB references, forming a large-scale paired dataset named See-in-the-Extremely-Dark (SIED) to benchmark low-light RAW image enhancement approaches. Furthermore, we propose a diffusion-based framework that leverages the generative ability and intrinsic denoising property of diffusion models to restore visually pleasing results from extremely low-SNR RAW inputs, in which an Adaptive Illumination Correction Module (AICM) and a color consistency loss are introduced to ensure accurate exposure correction and color restoration. Extensive experiments on the proposed SIED and publicly available benchmarks demonstrate the effectiveness of our method. The code and dataset are available at https://github.com/JianghaiSCU/SIED.
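The abstract describes synthesizing well-calibrated extremely low-light RAW images at target illuminance ranges. The actual SIED pipeline is calibrated per camera; as a rough illustration of the general idea, one can scale a clean linear RAW frame by an exposure ratio and add signal-dependent shot noise plus signal-independent read noise. All parameter values below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def synthesize_low_light_raw(clean_raw, target_lux, source_lux,
                             shot_noise_scale=0.01, read_noise_std=0.02,
                             rng=None):
    """Illustrative low-light RAW synthesis (not the SIED pipeline):
    1) scale the linear RAW signal by the illuminance ratio,
    2) add shot noise with variance proportional to the signal,
    3) add Gaussian read noise, then clip to the valid range."""
    rng = np.random.default_rng() if rng is None else rng
    ratio = target_lux / source_lux              # linear exposure scaling
    dark = clean_raw * ratio
    shot = rng.normal(0.0, np.sqrt(np.maximum(dark, 0.0) * shot_noise_scale))
    read = rng.normal(0.0, read_noise_std, size=dark.shape)
    return np.clip(dark + shot + read, 0.0, 1.0)
```

At a ratio like 0.0001 lux / 100 lux, the signal is buried under the read-noise floor, which is why the abstract frames restoration as an extremely low-SNR problem that benefits from the denoising behavior of diffusion models.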