HVI-CIDNet+: Beyond Extreme Darkness for Low-Light Image Enhancement

📅 2025-07-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing low-light image enhancement (LLIE) methods suffer from color distortion and brightness artifacts in sRGB space, while HSV-based approaches introduce severe red/black noise. To address this, we propose HVI, a novel color space that achieves physically grounded decoupling of luminance (I) from chrominance (H, V) for the first time. Based on HVI, we design HVI-CIDNet+, featuring: (i) a learnable HVI channel suppression module to eliminate red/black noise; (ii) a vision-language-model-driven Prior-guided Attention Block (PAB) to enhance semantic awareness and content recovery in extremely dark regions; and (iii) a cross-attention fusion mechanism with region-wise refinement. Evaluated across 10 benchmark datasets, HVI-CIDNet+ consistently outperforms state-of-the-art methods, delivering significant improvements in detail fidelity, color accuracy, and luminance consistency.

๐Ÿ“ Abstract
Low-Light Image Enhancement (LLIE) aims to restore vivid content and details from corrupted low-light images. However, existing standard RGB (sRGB) color space-based LLIE methods often produce color bias and brightness artifacts due to their inherently high color sensitivity. While the Hue, Saturation, Value (HSV) color space can decouple brightness and color, it introduces significant red and black noise artifacts. To address this problem, we propose a new color space for LLIE, namely Horizontal/Vertical-Intensity (HVI), defined by the HV color map and learnable intensity. The HV color map enforces small distances for the red coordinates to remove red noise artifacts, while the learnable intensity compresses the low-light regions to remove black noise artifacts. Additionally, we introduce the Color and Intensity Decoupling Network+ (HVI-CIDNet+), built upon the HVI color space, to restore damaged content and mitigate color distortion in extremely dark regions. Specifically, HVI-CIDNet+ leverages abundant contextual and degraded knowledge extracted from low-light images using pre-trained vision-language models, integrated via a novel Prior-guided Attention Block (PAB). Within the PAB, latent semantic priors promote content restoration, while degraded representations guide precise color correction, both particularly in extremely dark regions, through a carefully designed cross-attention fusion mechanism. Furthermore, we construct a Region Refinement Block that employs convolution for information-rich regions and self-attention for information-scarce regions, ensuring accurate brightness adjustments. Comprehensive benchmark experiments demonstrate that the proposed HVI-CIDNet+ outperforms state-of-the-art methods on 10 datasets.
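The abstract's two mechanisms can be illustrated with a toy transform: representing hue as a polar angle makes the two red endpoints of the HSV hue wheel (H ≈ 0 and H ≈ 1) land at nearby coordinates, and an intensity-dependent radius collapses dark pixels toward the origin. This is a minimal sketch of the idea only; the exponent `k` stands in for the paper's learnable intensity, and the exact parameterization is an illustrative assumption, not the paper's published formulation.

```python
import colorsys
import math

def hvi_sketch(r, g, b, k=1.0):
    """Toy HVI-style transform for one RGB pixel with channels in [0, 1].

    Returns (h_x, v_y, i): a planar HV chrominance pair plus intensity.
    Hue becomes an angle, so H=0 and H=1 (both red) map to nearby points,
    and the radius shrinks with intensity, so dark pixels collapse toward
    the origin. k > 1 compresses dark regions harder; the paper instead
    learns this intensity collapse (illustrative assumption here).
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    i = max(r, g, b)                 # intensity = max channel (equals v)
    radius = s * i ** k              # intensity-dependent collapse
    angle = 2.0 * math.pi * h        # hue as angle: red endpoints meet
    return radius * math.cos(angle), radius * math.sin(angle), i
```

For example, two saturated reds with hues 0.001 and 0.999 are almost maximally far apart on the HSV hue axis, yet their sketched HV coordinates nearly coincide, which is the "small distances for the red coordinates" property the abstract describes.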
Problem

Research questions and friction points this paper is trying to address.

Address color bias and brightness artifacts in low-light images
Reduce red and black noise artifacts in HSV color space
Enhance content restoration in extremely dark regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

New HVI color space reduces noise artifacts
Prior-guided Attention Block integrates contextual knowledge
Region Refinement Block optimizes brightness adjustments
Qingsen Yan
Northwestern Polytechnical University
Image processing, Image fusion, Continual learning

Kangbiao Shi
School of Computer Science, Northwestern Polytechnical University, Xi'an, China

Yixu Feng
Northwestern Polytechnical University
Artificial Intelligence, Computer Vision, Low-level Vision

Tao Hu
School of Computer Science, Northwestern Polytechnical University, Xi'an, China

Peng Wu
School of Computer Science, Northwestern Polytechnical University, Xi'an, China

Guansong Pang
Assistant Professor of Computer Science, Singapore Management University
Machine Learning, Data Mining, Computer Vision, Anomaly Detection, Open-world Learning

Yanning Zhang
Northwestern Polytechnical University
Computer Vision