Learning to See in the Extremely Dark

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of paired training data and effective methods for enhancing extremely low-light RAW images (down to 0.0001 lux), this paper introduces SIED, the first large-scale paired RAW–sRGB dataset covering this ultra-low-illumination regime, and proposes a diffusion-based enhancement framework. The framework incorporates an Adaptive Illumination Correction Module (AICM) for accurate exposure recovery, enforces color fidelity in the sRGB domain via a color consistency loss, and leverages the generative power and intrinsic denoising capability of diffusion models. Evaluated on SIED and multiple public benchmarks, the method significantly improves visibility, structural fidelity, and color accuracy, yielding perceptually usable reconstructions even under extreme low-SNR conditions. Both the code and dataset are publicly released.
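The AICM itself is a learned network module; its architecture is not described in this summary. As a toy stand-in, the exposure-correction role it plays can be illustrated by estimating a global gain from the input's exposure and applying it with a cap on amplification (all parameter names and values below are illustrative assumptions, not from the paper):

```python
import numpy as np

def adaptive_illumination_correction(raw, target_mean=0.4, max_gain=300.0):
    """Hand-crafted sketch of exposure correction: estimate a global gain
    from the darkened input's mean intensity and apply it.

    The paper's AICM is a learned module inside a diffusion framework;
    this function only illustrates the correction it is meant to perform.
    raw: normalized RAW array in [0, 1].
    """
    mean = float(raw.mean()) + 1e-6              # avoid division by zero
    gain = min(target_mean / mean, max_gain)     # cap extreme amplification
    corrected = np.clip(raw * gain, 0.0, 1.0)
    return corrected, gain
```

In practice a learned module can predict spatially varying gains rather than a single global scalar, which is why naive global scaling tends to over- or under-expose parts of extremely dark scenes.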

📝 Abstract
Learning-based methods have made promising advances in low-light RAW image enhancement, but their effectiveness in extremely dark scenes, where the environmental illuminance drops as low as 0.0001 lux, remains unexplored due to the lack of corresponding datasets. To this end, we propose a paired-to-paired data synthesis pipeline capable of generating well-calibrated extremely low-light RAW images at three precise illuminance ranges of 0.01-0.1 lux, 0.001-0.01 lux, and 0.0001-0.001 lux, together with high-quality sRGB references, comprising a large-scale paired dataset named See-in-the-Extremely-Dark (SIED) to benchmark low-light RAW image enhancement approaches. Furthermore, we propose a diffusion-based framework that leverages the generative ability and intrinsic denoising property of diffusion models to restore visually pleasing results from extremely low-SNR RAW inputs, in which an Adaptive Illumination Correction Module (AICM) and a color consistency loss are introduced to ensure accurate exposure correction and color restoration. Extensive experiments on the proposed SIED and publicly available benchmarks demonstrate the effectiveness of our method. The code and dataset are available at https://github.com/JianghaiSCU/SIED.
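The abstract does not detail the synthesis pipeline, but a common way to generate calibrated low-light RAW data is to darken a clean RAW frame to a sampled target illuminance and add physically motivated Poisson shot noise plus Gaussian read noise. The sketch below follows that generic recipe; the function name, `ref_lux`, `photons_per_unit`, and noise parameters are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def simulate_low_light_raw(clean_raw, lux_range=(0.0001, 0.001),
                           ref_lux=100.0, photons_per_unit=5000.0,
                           read_noise_std=0.02, rng=None):
    """Darken a normalized clean RAW image to a target illuminance range,
    then add Poisson shot noise and Gaussian read noise.

    A generic Poisson-Gaussian low-light simulation, not the paper's
    calibrated pipeline. clean_raw: array in [0, 1] captured at ref_lux.
    """
    rng = np.random.default_rng() if rng is None else rng
    lux = rng.uniform(*lux_range)            # sample target illuminance
    scale = lux / ref_lux                    # linear darkening factor
    signal = clean_raw * scale               # darkened latent signal
    # Shot noise: photon counts are Poisson-distributed around the signal.
    photons = rng.poisson(signal * photons_per_unit) / photons_per_unit
    # Read noise: signal-independent Gaussian from the sensor readout chain.
    noisy = photons + rng.normal(0.0, read_noise_std, size=clean_raw.shape)
    return np.clip(noisy, 0.0, 1.0), lux
```

Sampling `lux_range` from each of the three intervals above (0.01-0.1, 0.001-0.01, 0.0001-0.001 lux) would yield the three illuminance tiers the dataset targets.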
Problem

Research questions and friction points this paper is trying to address.

Enhancing extremely dark RAW images at illuminance levels down to 0.0001 lux
Generating calibrated low-light datasets for precise illuminance ranges
Restoring visually pleasing images from extremely low-SNR RAW inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates well-calibrated extremely low-light RAW images
Uses diffusion models for denoising and restoration
Introduces adaptive illumination correction module
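The color consistency loss is described only as enforcing color fidelity in the sRGB domain. One generic formulation of such a loss, not necessarily the paper's exact definition, penalizes the angular deviation between predicted and reference per-pixel RGB vectors:

```python
import numpy as np

def color_consistency_loss(pred_srgb, ref_srgb, eps=1e-6):
    """Mean angular color error via per-pixel cosine similarity of RGB
    vectors; 0 when predicted and reference colors are perfectly aligned.

    A generic sRGB-domain color loss sketch, assumed for illustration.
    pred_srgb, ref_srgb: arrays of shape (H, W, 3) in [0, 1].
    """
    dot = np.sum(pred_srgb * ref_srgb, axis=-1)
    norms = (np.linalg.norm(pred_srgb, axis=-1) *
             np.linalg.norm(ref_srgb, axis=-1))
    cos_sim = dot / (norms + eps)            # 1 = same hue direction
    return float(np.mean(1.0 - cos_sim))
```

Because cosine similarity is invariant to per-pixel brightness scaling, a loss of this form constrains hue and saturation while leaving exposure correction to the rest of the framework (e.g., the AICM).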
Hai Jiang
School of Aeronautics and Astronautics, Sichuan University
Binhao Guan
University of Electronic Science and Technology of China
Zhen Liu
University of Electronic Science and Technology of China
Xiaohong Liu
Shanghai Jiao Tong University / National Innovation Center for UHD Video Technology
Jian Yu
Auckland University of Technology
graph neural networks · recommender systems · deep learning · complex networks · Internet computing
Zheng Liu
Shanghai Jiao Tong University / National Innovation Center for UHD Video Technology
Songchen Han
School of Aeronautics and Astronautics, Sichuan University
Shuaicheng Liu
University of Electronic Science and Technology of China
Computer Vision · Computational Photography