🤖 AI Summary
Existing low-light image enhancement methods heavily rely on paired training data and often cause over-enhancement, degrading normally lit regions. To address these limitations, we propose a fully zero-shot (i.e., training-data-free) enhancement framework that requires neither real image pairs nor domain-specific priors. Our method comprises two core modules: illumination-reflectance decomposition and illumination-guided pixel-wise adaptive correction. Crucially, we introduce an iterative denoising mechanism based on downsampled noise pairs to stabilize enhancement without supervision. A densely connected network ensures robust decomposition, while the estimated illumination map dynamically modulates correction intensity to jointly preserve structural fidelity and suppress noise. Extensive experiments on four public benchmarks demonstrate state-of-the-art performance among 14 unsupervised methods, achieving 20.41 dB PSNR and 0.860 SSIM—significantly improving visual quality and structural integrity in complex low-light scenarios.
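The summary mentions an iterative denoising mechanism built on "downsampled noise pairs" but does not spell out the construction. A common way to obtain such a pair without supervision is Neighbor2Neighbor-style sub-sampling: from every 2x2 cell, pick two different pixels to form two half-resolution sub-images that share content but carry independent noise realizations. The sketch below is illustrative only; the function name and sampling details are assumptions, not the paper's exact procedure.

```python
import numpy as np

def downsample_noise_pair(img, seed=0):
    """Split an H x W x C image into two half-resolution sub-images by
    selecting, in every 2x2 cell, two *distinct* pixels -- one per sub-image.
    The pair shares scene content but has independent noise samples,
    which is what an unsupervised denoising loss can exploit.
    NOTE: illustrative sketch, not the paper's exact construction.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    h2, w2 = h // 2, w // 2
    # Group pixels into (h2, w2) cells of 4 pixels each: shape (h2, w2, 4, C).
    cells = (img[:h2 * 2, :w2 * 2]
             .reshape(h2, 2, w2, 2, -1)
             .transpose(0, 2, 1, 3, 4)
             .reshape(h2, w2, 4, -1))
    # For each cell, choose one pixel index, then a different one for the pair.
    choice = rng.integers(0, 4, size=(h2, w2))
    other = (choice + rng.integers(1, 4, size=(h2, w2))) % 4
    sub1 = np.take_along_axis(cells, choice[..., None, None], axis=2)[:, :, 0]
    sub2 = np.take_along_axis(cells, other[..., None, None], axis=2)[:, :, 0]
    return sub1, sub2
```

In a training loop, the two sub-images would serve as noisy input/target for each other, so the denoiser is refined iteratively on the single test image without any clean reference.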
📝 Abstract
Current methods for restoring underexposed images typically rely on supervised learning with paired underexposed and well-illuminated images. However, collecting such datasets is often impractical in real-world scenarios. Moreover, these methods can lead to over-enhancement, distorting well-illuminated regions. To address these issues, we propose IGDNet, a zero-shot enhancement method that operates solely on a single test image, without requiring guiding priors or training data. IGDNet exhibits strong generalization ability and effectively suppresses noise while restoring illumination. The framework comprises a decomposition module and a denoising module. The former separates the image into illumination and reflectance components via a densely connected network, while the latter enhances non-uniformly illuminated regions using an illumination-guided pixel-wise adaptive correction method. A noise pair is generated through downsampling and refined iteratively to produce the final result. Extensive experiments on four public datasets demonstrate that IGDNet significantly improves visual quality under complex lighting conditions. Quantitative results on metrics such as PSNR (20.41 dB) and SSIM (0.860) show that it outperforms 14 state-of-the-art unsupervised methods. The code will be released soon.
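The abstract's key idea for avoiding over-enhancement is that the estimated illumination map modulates the correction strength per pixel: dark regions are brightened aggressively while already well-lit regions are left nearly untouched. The exact correction formula is not given in the abstract; the sketch below assumes a gamma-style adjustment (with hypothetical names and parameters) purely to illustrate the principle.

```python
import numpy as np

def estimate_illumination(img):
    """Rough illumination map: per-pixel max over color channels
    (a common Retinex-style initialization, assumed here)."""
    return np.max(img, axis=-1, keepdims=True)

def illumination_guided_correction(img, strength=0.6):
    """Pixel-wise adaptive correction for an image in [0, 1].
    Low-illumination pixels get a gamma well below 1 (strong brightening);
    as illumination approaches 1, gamma approaches 1 (no change), which is
    how over-enhancement of well-lit regions is avoided.
    NOTE: illustrative sketch, not IGDNet's actual correction operator.
    """
    L = estimate_illumination(img)
    # Per-pixel exponent in (0, 1]: smaller where the scene is darker.
    gamma = 1.0 - strength * (1.0 - L)
    return np.clip(img ** gamma, 0.0, 1.0)
```

For example, a uniformly dark patch (intensity 0.1) gets lifted substantially, while a saturated patch passes through unchanged, so structure in well-exposed regions is preserved.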