🤖 AI Summary
This work addresses two critical limitations in low-light image enhancement (LLIE): color distortion in RGB space and the red and black noise artifacts introduced by HSV-based methods. To overcome these, we propose a novel HVI color space that decouples a polarized hue-saturation (HS) representation from a learnable intensity channel, enabling independent modeling of chromaticity and luminance. Based on HVI, we design the Color and Intensity Decoupling Network (CIDNet), which performs illumination-adaptive photometric mapping. The polar-coordinate transformation in HVI suppresses red artifacts, while the learnable intensity channel compresses dark regions to eliminate black noise. Extensive experiments demonstrate that our method achieves state-of-the-art performance across ten mainstream benchmarks, significantly improving detail recovery while effectively mitigating color bias and noise-related artifacts. The source code is publicly available.
📄 Abstract
Low-Light Image Enhancement (LLIE) is a crucial computer vision task that aims to restore detailed visual information from corrupted low-light images. Many existing LLIE methods operate in the standard RGB (sRGB) space, where the inherent high color sensitivity often produces color bias and brightness artifacts. While converting images to the Hue, Saturation and Value (HSV) color space helps resolve the brightness issue, it introduces significant red and black noise artifacts. To address this, we propose a new color space for LLIE, namely Horizontal/Vertical-Intensity (HVI), defined by polarized HS maps and a learnable intensity. The former enforces small distances between red coordinates to remove the red artifacts, while the latter compresses low-light regions to remove the black artifacts. To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is further introduced to learn an accurate photometric mapping function under different lighting conditions in the HVI space. Comprehensive benchmark and ablation experiments show that the proposed HVI color space with CIDNet outperforms state-of-the-art methods on 10 datasets. The code is available at https://github.com/Fediory/HVI-CIDNet.
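The core idea of the HVI construction can be illustrated numerically. Below is a minimal, hypothetical NumPy sketch, not the paper's implementation: it starts from standard HSV hue and saturation, maps them onto a polar (Horizontal/Vertical) plane so the two ends of the red hue meet, and uses a fixed sine-based collapse function with an assumed density parameter `k` in place of the learnable intensity mapping. The function name `rgb_to_hvi_sketch` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def rgb_to_hvi_sketch(rgb, k=1.0):
    """Illustrative RGB -> HVI-style transform (hypothetical simplification).

    rgb: float array in [0, 1] with shape (..., 3).
    k:   assumed density parameter of a fixed intensity-collapse function;
         in the paper the intensity channel is learnable.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i_max = rgb.max(axis=-1)          # intensity channel (HSV "value")
    delta = i_max - rgb.min(axis=-1)

    # Standard HSV hue in [0, 1).
    h = np.zeros_like(i_max)
    mask = delta > 1e-8
    r_is_max = mask & (i_max == r)
    g_is_max = mask & (i_max == g) & ~r_is_max
    b_is_max = mask & ~r_is_max & ~g_is_max
    h[r_is_max] = ((g - b)[r_is_max] / delta[r_is_max]) % 6
    h[g_is_max] = (b - r)[g_is_max] / delta[g_is_max] + 2
    h[b_is_max] = (r - g)[b_is_max] / delta[b_is_max] + 4
    h /= 6.0

    # Standard HSV saturation.
    s = np.where(i_max > 1e-8, delta / np.maximum(i_max, 1e-8), 0.0)

    # Polarized HS plane: the angular hue wrap-around vanishes, so both
    # ends of the red hue (h near 0 and h near 1) land at nearby
    # coordinates. A radius that shrinks with intensity compresses dark
    # pixels toward the origin, limiting noise amplification there.
    radius = np.sin(np.pi * np.minimum(i_max, 1.0) / 2.0) ** k
    hx = radius * s * np.cos(2.0 * np.pi * h)   # "Horizontal" channel
    vy = radius * s * np.sin(2.0 * np.pi * h)   # "Vertical" channel
    return np.stack([hx, vy, i_max], axis=-1)
```

Under this sketch, a pure red pixel and a red pixel just across the hue wrap-around map to nearly identical (H, V) coordinates, and very dark pixels collapse toward the origin of the HS plane, mirroring the two artifact-suppression properties the abstract attributes to HVI.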