LightQANet: Quantized and Adaptive Feature Learning for Low-Light Image Enhancement

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low-light image enhancement (LLIE) suffers from unreliable feature representations, leading to blurred textures, color distortion, and artifacts. To address this, we propose LightQANet, a quantized and adaptive feature learning framework. First, we design a Light Quantization Module (LQM) that discretizes continuous illumination variations into learnable illumination levels, enabling light-invariant feature extraction. Second, we introduce a Light-Aware Prompt Module (LAPM), which employs structured illumination factors to generate learnable prompts that dynamically guide feature optimization. Our approach preserves semantic consistency while significantly improving brightness restoration accuracy, texture fidelity, and color consistency. Extensive experiments demonstrate state-of-the-art performance on benchmark datasets including LOL and MIT-Adobe FiveK, achieving superior PSNR and SSIM scores as well as enhanced visual quality compared to existing methods.

📝 Abstract
Low-light image enhancement (LLIE) aims to improve illumination while preserving high-quality color and texture. However, existing methods often fail to extract reliable feature representations due to severely degraded pixel-level information under low-light conditions, resulting in poor texture restoration, color inconsistency, and artifacts. To address these challenges, we propose LightQANet, a novel framework that introduces quantized and adaptive feature learning for low-light enhancement, aiming to achieve consistent and robust image quality across diverse lighting conditions. From the static modeling perspective, we design a Light Quantization Module (LQM) to explicitly extract and quantify illumination-related factors from image features. By enforcing structured light factor learning, LQM enhances the extraction of light-invariant representations and mitigates feature inconsistency across varying illumination levels. From the dynamic adaptation perspective, we introduce a Light-Aware Prompt Module (LAPM), which encodes illumination priors into learnable prompts to dynamically guide the feature learning process. LAPM enables the model to flexibly adapt to complex and continuously changing lighting conditions, further improving image enhancement. Extensive experiments on multiple low-light datasets demonstrate that our method achieves state-of-the-art performance, delivering superior qualitative and quantitative results across various challenging lighting scenarios.
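The core idea behind the LQM described above — mapping continuous illumination onto a small set of learnable levels — can be illustrated with a minimal nearest-level quantization sketch. This is an illustrative assumption about the mechanism, not the paper's actual architecture; the function name `quantize_light`, the codebook shape, and the toy values are all hypothetical.

```python
import numpy as np

# Hypothetical sketch of the Light Quantization Module (LQM) idea:
# assign each continuous per-pixel light estimate to the nearest of
# K learnable illumination levels (a scalar "codebook"). Shapes and
# names are assumptions for illustration only.

def quantize_light(light_factor, levels):
    """Quantize a continuous illumination map onto learnable levels.

    light_factor: (H, W) array of continuous illumination estimates in [0, 1].
    levels: (K,) array of learnable illumination levels.
    Returns the discretized map and the chosen level index per pixel.
    """
    # distance from every pixel to every level: (H, W, K)
    dists = np.abs(light_factor[..., None] - levels[None, None, :])
    idx = dists.argmin(axis=-1)   # nearest level per pixel
    quantized = levels[idx]       # discretized illumination map
    return quantized, idx

# toy example: 4 illumination levels, a 2x2 synthetic light map
levels = np.array([0.1, 0.35, 0.6, 0.85])
light = np.array([[0.05, 0.4], [0.7, 0.9]])
q, idx = quantize_light(light, levels)
```

In a trained model the levels would be learned jointly with the feature extractor (e.g. with a straight-through gradient, as in vector quantization), so that features conditioned on the discretized map become more consistent across illumination changes.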
Problem

Research questions and friction points this paper is trying to address.

Extracting reliable features from severely degraded low-light images
Achieving consistent texture restoration and color fidelity
Adapting to diverse and continuously changing lighting conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantized illumination extraction via Light Quantization Module
Dynamic adaptation using Light-Aware Prompt Module
Learning light-invariant representations for robust enhancement
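The second innovation — illumination priors driving learnable prompts — can be sketched as a soft blend over a bank of prompt vectors, weighted by how close the current light level is to each prompt's associated illumination. Everything here (the function name, the anchor-based softmax weighting, the shapes) is an assumed illustration of the general idea, not the paper's exact LAPM design.

```python
import numpy as np

# Hypothetical sketch of the Light-Aware Prompt Module (LAPM) idea:
# convert a scalar illumination prior into softmax weights over a
# bank of learnable prompt vectors, producing one condition vector
# that could modulate the enhancement features. All names and the
# weighting scheme are illustrative assumptions.

def light_aware_prompt(light_level, prompt_bank, anchors, temperature=0.1):
    """Blend learnable prompts according to an illumination prior.

    light_level: scalar illumination estimate in [0, 1].
    prompt_bank: (K, D) learnable prompt vectors.
    anchors: (K,) illumination value each prompt is associated with.
    """
    # closeness of the current light level to each prompt's anchor
    logits = -np.abs(light_level - anchors) / temperature
    w = np.exp(logits - logits.max())
    w /= w.sum()              # softmax weights over the K prompts
    return w @ prompt_bank    # (D,) blended prompt vector

rng = np.random.default_rng(0)
bank = rng.standard_normal((4, 8))        # 4 prompts of dimension 8
anchors = np.array([0.1, 0.35, 0.6, 0.85])
prompt = light_aware_prompt(0.12, bank, anchors)
```

Because the weights vary smoothly with the illumination prior, this kind of prompting adapts continuously to changing lighting rather than switching between fixed modes.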
Xu Wu
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China and College of Computing and Data Science, Nanyang Technological University, Singapore
Zhihui Lai
Shenzhen University
Xianxu Hou
Xi'an Jiaotong-Liverpool University
Deep Learning, Computer Vision
Jie Zhou
School of Mathematics and Statistics, Changsha University of Science and Technology, Changsha 410114, China, and also with the School of Artificial Intelligence, Shenzhen University, Shenzhen 518060, China
Ya-nan Zhang
School of Computer Science, Sichuan Normal University, Chengdu 610065, China
Linlin Shen
Shenzhen University
Deep Learning, Computer Vision, Facial Analysis/Recognition, Medical Image Analysis