🤖 AI Summary
Real-world low-light images suffer from compound degradations—including local overexposure, noise, low brightness, and non-uniform illumination—posing significant challenges for unsupervised enhancement methods, which lack reference images and struggle to model complex physical degradation processes.
Method: We propose the first zero-reference, physically interpretable framework jointly addressing denoising and enhancement. It (i) introduces a subgraph-pairing self-supervised training strategy grounded in physical imaging models and Retinex theory; (ii) employs DCT-based frequency-domain decomposition and implicit degradation representation in sRGB space to decouple illumination, reflectance, and noise; and (iii) designs a retinal-inspired decomposition network.
Results: Our method achieves state-of-the-art performance on multiple real-world low-light benchmarks, significantly outperforming existing unsupervised and weakly supervised approaches. It generalizes strongly, offers explicit physical interpretability, and its implementation is open source.
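The summary's "subgraph-pairing" training strategy builds supervision pairs from a single low-light image rather than from reference images. The paper's exact pairing rule is not spelled out here; the sketch below shows one common way to realize such pairs, Neighbor2Neighbor-style sub-sampling, where two sub-images are formed by drawing two different pixels from each non-overlapping 2x2 cell. The function name and the 2x2 cell size are illustrative assumptions, not details from the paper.

```python
import numpy as np

def neighbor_subsample_pair(img, rng=None):
    """Form a self-supervised training pair from one image.

    Each non-overlapping 2x2 cell contributes one pixel to each
    sub-image, with the two pixels chosen to be distinct. This is an
    illustrative stand-in for the paper's sub-image pairing strategy.
    """
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    h2, w2 = h // 2, w // 2
    # Group pixels into (h2, w2) cells of 4 pixels each (channels last).
    cells = img[: 2 * h2, : 2 * w2].reshape(h2, 2, w2, 2, -1)
    cells = cells.transpose(0, 2, 1, 3, 4).reshape(h2, w2, 4, -1)
    # Random permutation of the 4 pixel slots in every cell.
    idx = np.argsort(rng.random((h2, w2, 4)), axis=-1)
    # First two slots of each permutation -> two distinct pixels per cell.
    sub1 = np.take_along_axis(cells, idx[..., :1, None], axis=2)[:, :, 0]
    sub2 = np.take_along_axis(cells, idx[..., 1:2, None], axis=2)[:, :, 0]
    return sub1, sub2
```

Because neighboring pixels share scene content but carry independent noise realizations, such pairs can drive a denoising loss without any clean reference.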
📝 Abstract
Real-world low-light images often suffer from complex degradations such as local overexposure, low brightness, noise, and uneven illumination. Supervised methods tend to overfit to specific scenarios, while unsupervised methods, though better at generalization, struggle to model these degradations due to the lack of reference images. To address this issue, we propose an interpretable, zero-reference joint denoising and low-light enhancement framework tailored for real-world scenarios. Our method derives a training strategy based on paired sub-images with varying illumination and noise levels, grounded in physical imaging principles and Retinex theory. Additionally, we leverage the Discrete Cosine Transform (DCT) to perform frequency-domain decomposition in the sRGB space, and introduce an implicit-guided hybrid representation strategy that effectively separates intricate compounded degradations. In the backbone network design, we develop a retinal decomposition network guided by implicit degradation representation mechanisms. Extensive experiments demonstrate the superiority of our method. Code will be available at https://github.com/huaqlili/unsupervised-light-enhance-ICLR2025.
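The DCT-based decomposition described above can be sketched in a few lines: low-frequency DCT coefficients capture a smooth, illumination-like component, while the residual carries edges and noise. The sketch below is a minimal illustration of this split for a single channel; the function name and the coefficient cutoff `keep` are illustrative assumptions, not values from the paper, which uses a learned decomposition on top of this frequency separation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_frequency_split(img, keep=8):
    """Split a single-channel image into low- and high-frequency parts.

    A 2-D DCT is taken, only the top-left `keep` x `keep` block of
    coefficients (the lowest frequencies) is retained, and the inverse
    DCT yields a smooth, illumination-like component. The residual,
    defined so that low + high reconstructs the input exactly, holds
    fine structure and noise.
    """
    coef = dctn(img, norm="ortho")
    low_coef = np.zeros_like(coef)
    low_coef[:keep, :keep] = coef[:keep, :keep]
    low_img = idctn(low_coef, norm="ortho")
    high_img = img - low_img  # exact residual: low_img + high_img == img
    return low_img, high_img
```

In a Retinex-style reading, the low-frequency component approximates illumination and the residual approximates reflectance detail plus noise, which the paper's network then disentangles further.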