🤖 AI Summary
Retinex-based low-light image enhancement methods often amplify noise and introduce local color distortions. To address these issues, this paper proposes a dual-path error compensation framework: one path performs pixel-wise error estimation to correct luminance and chrominance deviations, while the other independently conducts structure-aware denoising. We introduce the HIS-Retinex loss function, which enforces luminance distribution consistency with real-world illumination priors. Moreover, we are the first to integrate the VMamba visual state-space model into the low-light enhancement backbone, achieving both effective long-range dependency modeling and computational efficiency. Extensive experiments on multiple benchmark datasets demonstrate that our method significantly outperforms state-of-the-art approaches in PSNR and SSIM, yielding more natural visual quality, superior texture fidelity, and robust noise suppression. The source code is publicly available.
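The HIS-Retinex loss is described above only as enforcing consistency between the enhanced image's luminance distribution and real-world illumination. As a rough illustration of that idea (not the paper's actual formulation), one could penalize the distance between the brightness histograms of the enhanced image and a well-lit reference; the luma weights, bin count, and L1 distance below are all assumptions for the sketch:

```python
import numpy as np

def luminance_histogram_loss(enhanced, reference, bins=32):
    """Illustrative luminance-distribution loss in the spirit of the
    HIS-Retinex loss: penalize the gap between the brightness histograms
    of the enhanced image and a well-lit reference. The exact
    formulation here is an assumption, not taken from the paper."""
    def luma(img):
        # Rec. 601 luma as a simple luminance proxy for RGB in [0, 1]
        return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

    # Normalized brightness distributions over a fixed [0, 1] range
    h_enh, _ = np.histogram(luma(enhanced), bins=bins, range=(0.0, 1.0), density=True)
    h_ref, _ = np.histogram(luma(reference), bins=bins, range=(0.0, 1.0), density=True)

    # Mean absolute difference between the two distributions
    return np.abs(h_enh - h_ref).mean()
```

In a training setup this term would be differentiable (e.g., a soft histogram) and combined with reconstruction losses; the hard `np.histogram` version is only for intuition.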
📝 Abstract
For low-light image enhancement, deep learning-based algorithms have demonstrated clear advantages over traditional methods. However, these methods, primarily based on Retinex theory, tend to overlook the noise and color distortions in input images, leading to significant noise amplification and local color distortions in the enhanced results. To address these issues, we propose the Dual-Path Error Compensation (DPEC) method, which improves image quality under low-light conditions by preserving local texture details and restoring global image brightness without amplifying noise. DPEC incorporates precise pixel-level error estimation to capture subtle differences and an independent denoising mechanism to prevent noise amplification. We introduce the HIS-Retinex loss to guide DPEC's training, ensuring that the brightness distribution of enhanced images closely aligns with real-world conditions. To balance computational speed and resource efficiency while giving DPEC a comprehensive understanding of the global context, we integrate the VMamba architecture into its backbone. Comprehensive quantitative and qualitative experimental results demonstrate that our algorithm significantly outperforms state-of-the-art methods in low-light image enhancement. The code is publicly available online at https://github.com/wangshuang233/DPEC.
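The dual-path design described above (pixel-wise error compensation in one path, independent denoising in the other, so the brightness gain never amplifies noise) can be sketched with simple stand-in operations. The gamma curve and box blur below replace DPEC's learned networks purely for illustration; they are assumptions, not the paper's method:

```python
import numpy as np

def enhance_dual_path(img, gamma=0.5, blur_size=3):
    """Illustrative dual-path enhancement for an RGB image in [0, 1].
    Path 1 estimates a pixel-wise brightness correction; path 2
    independently denoises. The learned networks of DPEC are replaced
    by simple stand-ins (gamma curve, box blur) for illustration only."""
    # Path 1: global brightness restoration (stand-in: gamma correction)
    bright = np.clip(img, 0.0, 1.0) ** gamma
    error = bright - img                      # pixel-wise error estimate

    # Path 2: structure-aware denoising (stand-in: box blur)
    pad = blur_size // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    h, w, _ = img.shape
    denoised = np.zeros_like(img)
    for dy in range(blur_size):
        for dx in range(blur_size):
            denoised += padded[dy:dy + h, dx:dx + w]
    denoised /= blur_size ** 2

    # Fuse: apply the brightness correction to the denoised image, so the
    # gain from path 1 does not amplify the noise suppressed by path 2
    return np.clip(denoised + error, 0.0, 1.0)
```

The key point the sketch captures is the ordering: because the error estimate is added to an already-denoised signal, raising brightness does not also raise the noise floor, which is the failure mode of single-path Retinex pipelines.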