🤖 AI Summary
Low-light image enhancement (LLIE) suffers from unreliable feature representations, leading to blurred textures, color distortion, and artifacts. To address this, we propose LightQANet, a framework that combines quantized and adaptive feature learning. First, we design a Light Quantization Module (LQM) that discretizes continuous illumination variations into learnable light levels, enabling light-invariant feature extraction. Second, we introduce a Light-Aware Prompt Module (LAPM), which encodes structured illumination priors into learnable prompts that dynamically guide feature optimization. Our approach preserves semantic consistency while substantially improving brightness restoration, texture fidelity, and color consistency. Extensive experiments on benchmark datasets including LOL and MIT-Adobe FiveK demonstrate state-of-the-art performance, with superior PSNR and SSIM scores and enhanced visual quality compared to existing methods.
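To make the light-quantization idea concrete, here is a minimal PyTorch sketch of how discretizing features into learnable light levels could work. It assumes a VQ-style codebook with a straight-through gradient estimator; the class and parameter names (`LightQuantizer`, `num_levels`) are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class LightQuantizer(nn.Module):
    """Snap continuous features to a learnable codebook of light levels."""
    def __init__(self, num_levels: int = 8, dim: int = 64):
        super().__init__()
        # One learnable embedding per discrete illumination level (assumed design).
        self.levels = nn.Embedding(num_levels, dim)

    def forward(self, feat: torch.Tensor):
        # feat: (B, C, H, W) feature map; flatten to (B*H*W, C).
        b, c, h, w = feat.shape
        flat = feat.permute(0, 2, 3, 1).reshape(-1, c)
        # Squared L2 distance from each vector to every light-level embedding.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.levels.weight.t()
                + self.levels.weight.pow(2).sum(1))
        idx = dist.argmin(dim=1)  # index of the nearest light level
        quant = self.levels(idx).reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Straight-through estimator: copy gradients past the non-differentiable argmin.
        quant = feat + (quant - feat).detach()
        return quant, idx.reshape(b, h, w)
```

The straight-through trick is the standard VQ-VAE device for keeping a discrete snap trainable end-to-end, which is why it is a plausible stand-in here: downstream layers see illumination-quantized, and hence more light-invariant, representations while the encoder still receives gradients.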
📝 Abstract
Low-light image enhancement (LLIE) aims to improve illumination while preserving high-quality color and texture. However, existing methods often fail to extract reliable feature representations due to severely degraded pixel-level information under low-light conditions, resulting in poor texture restoration, color inconsistency, and artifacts. To address these challenges, we propose LightQANet, a novel framework that introduces quantized and adaptive feature learning for low-light enhancement, aiming to achieve consistent and robust image quality across diverse lighting conditions. From the static modeling perspective, we design a Light Quantization Module (LQM) to explicitly extract and quantify illumination-related factors from image features. By enforcing structured light factor learning, LQM enhances the extraction of light-invariant representations and mitigates feature inconsistency across varying illumination levels. From the dynamic adaptation perspective, we introduce a Light-Aware Prompt Module (LAPM), which encodes illumination priors into learnable prompts to dynamically guide the feature learning process. LAPM enables the model to adapt flexibly to complex and continuously changing lighting conditions, further improving enhancement quality. Extensive experiments on multiple low-light datasets demonstrate that our method achieves state-of-the-art performance, delivering superior qualitative and quantitative results across various challenging lighting scenarios.
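For intuition on how illumination priors might condition learnable prompts, below is a small, hypothetical sketch. It estimates a coarse brightness prior from the input, softly routes it over a bank of learnable prompt vectors, and applies FiLM-style channel modulation; none of these design choices (`LightAwarePrompt`, the mean-brightness prior, the FiLM modulation) are claimed to match the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LightAwarePrompt(nn.Module):
    """Blend learnable prompts by an illumination prior, then modulate features."""
    def __init__(self, num_prompts: int = 4, dim: int = 64):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim))
        # Map a scalar brightness prior to soft weights over the prompt bank (assumed design).
        self.router = nn.Sequential(nn.Linear(1, num_prompts), nn.Softmax(dim=-1))
        self.to_scale = nn.Linear(dim, dim)
        self.to_shift = nn.Linear(dim, dim)

    def forward(self, feat: torch.Tensor, img: torch.Tensor):
        # feat: (B, C, H, W) with C == dim; img: (B, 3, H, W) low-light input.
        lum = img.mean(dim=(1, 2, 3)).unsqueeze(1)   # (B, 1) coarse brightness prior
        weights = self.router(lum)                   # (B, num_prompts)
        prompt = weights @ self.prompts              # (B, dim) illumination-blended prompt
        scale = self.to_scale(prompt)[..., None, None]
        shift = self.to_shift(prompt)[..., None, None]
        return feat * (1 + scale) + shift            # FiLM-style channel modulation
```

Softly weighting a small prompt bank, rather than hard-selecting one prompt, keeps the conditioning differentiable and lets the module interpolate smoothly across continuously varying lighting, which matches the abstract's emphasis on adapting to continuously changing conditions.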