SAIGFormer: A Spatially-Adaptive Illumination-Guided Network for Low-Light Image Enhancement

📅 2025-07-21
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing Transformer-based methods for low-light enhancement often suffer from overexposure or insufficient brightness recovery under non-uniform illumination (e.g., backlighting, shadows). To address this, we propose SAIGFormer, a spatially adaptive illumination-guided network. Its core innovations are: (1) the first integration of dynamic integral image modeling into Transformers to enable precise, pixel-wise estimation of spatially varying illumination; and (2) illumination-guided multi-head self-attention (IG-MSA), which leverages illumination priors to modulate attention weights and emphasize brightness-sensitive regions. We further introduce the SAI²E module for end-to-end illumination-aware feature calibration. Extensive experiments on five standard low-light datasets and the cross-domain LOL-Blur benchmark demonstrate that SAIGFormer significantly outperforms state-of-the-art methods, particularly in non-uniform illumination scenarios, with superior detail fidelity, natural luminance distribution, and cross-domain generalization.
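The integral-image idea is straightforward to sketch. Below is a minimal PyTorch illustration, not the paper's implementation: it estimates a spatially varying illumination prior as a per-pixel local mean, read from a summed-area table in constant time per pixel. The paper's dynamic, learned window sizes are simplified here to a single fixed odd `window`, and both function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def integral_image(x: torch.Tensor) -> torch.Tensor:
    """Summed-area table over the spatial dims of an (N, C, H, W) tensor."""
    return x.cumsum(dim=-1).cumsum(dim=-2)

def local_mean_illumination(x: torch.Tensor, window: int = 15) -> torch.Tensor:
    """Per-pixel mean brightness over an odd-sized square window, read in
    O(1) per pixel from the integral image. The fixed `window` stands in
    for the learned, spatially varying windows the paper describes."""
    n, c, h, w = x.shape
    pad = window // 2
    # Zero-pad, with one extra row/column at the top-left so the
    # four-corner lookup below needs no boundary special-casing.
    xp = F.pad(x, (pad + 1, pad, pad + 1, pad))
    ii = integral_image(xp)
    # Box sum = br - tr - bl + tl, the classic integral-image identity.
    tl = ii[..., :h, :w]
    tr = ii[..., :h, window:window + w]
    bl = ii[..., window:window + h, :w]
    br = ii[..., window:window + h, window:window + w]
    return (br - tr - bl + tl) / float(window * window)
```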

πŸ“ Abstract
Recent Transformer-based low-light enhancement methods have made promising progress in recovering global illumination. However, they still struggle in non-uniform lighting scenarios, such as backlighting and shadows, manifesting as over-exposure or inadequate brightness restoration. To address this challenge, we present a Spatially-Adaptive Illumination-Guided Transformer (SAIGFormer) framework that enables accurate illumination restoration. Specifically, we propose a dynamic integral image representation to model the spatially-varying illumination, and further construct a novel Spatially-Adaptive Integral Illumination Estimator ($\text{SAI}^2\text{E}$). Moreover, we introduce an Illumination-Guided Multi-head Self-Attention (IG-MSA) mechanism, which leverages the illumination to calibrate the lightness-relevant features toward visually pleasing illumination enhancement. Extensive experiments on five standard low-light datasets and a cross-domain benchmark (LOL-Blur) demonstrate that our SAIGFormer significantly outperforms state-of-the-art methods in both quantitative and qualitative metrics. In particular, our method achieves superior performance in non-uniform illumination enhancement while exhibiting strong generalization capabilities across multiple datasets. Code is available at https://github.com/LHTcode/SAIGFormer.git.
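One plausible reading of the abstract's IG-MSA is standard multi-head self-attention whose value path is rescaled element-wise by the illumination prior before attention, so lightness-relevant features are emphasized. The PyTorch sketch below follows that reading under stated assumptions; the exact guidance rule and all layer shapes are guesses, not the released code.

```python
import torch
import torch.nn as nn

class IGMSA(nn.Module):
    """Minimal illumination-guided multi-head self-attention sketch: the
    illumination prior multiplicatively rescales the value features before
    attention (assumed guidance rule, not the paper's implementation)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, illum: torch.Tensor) -> torch.Tensor:
        # x, illum: (B, N, C) token features and per-token illumination prior
        b, n, c = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        v = v * illum  # illumination guidance on the value path
        split = lambda t: t.view(b, n, self.heads, c // self.heads).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) * (c // self.heads) ** -0.5
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, n, c)
        return self.proj(out)
```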
Problem

Research questions and friction points this paper is trying to address.

Enhancing low-light images under non-uniform lighting
Restoring accurate illumination in backlit and shadow scenarios
Improving brightness restoration without over-exposure artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic integral image representation for illumination modeling
Spatially-Adaptive Integral Illumination Estimator (SAI2E)
Illumination-Guided Multi-head Self-Attention (IG-MSA) mechanism (a toy wiring of these components is sketched below)
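Purely to illustrate how these pieces might connect, the snippet below feeds the integral-image prior from the first sketch into the IG-MSA module from the second. The wiring is hypothetical and far simpler than the actual SAIGFormer architecture.

```python
import torch

# Hypothetical end-to-end wiring of the two sketches above; it reuses
# local_mean_illumination() and IGMSA and is not the paper's network.
feat = torch.rand(1, 32, 64, 64)                  # (B, C, H, W) features
prior = local_mean_illumination(feat, window=15)  # spatially varying prior
tokens = feat.flatten(2).transpose(1, 2)          # (B, H*W, C)
guide = prior.flatten(2).transpose(1, 2)
msa = IGMSA(dim=32, heads=4)
out = msa(tokens, guide)                          # (B, H*W, C) enhanced tokens
```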
Hanting Li
Faculty of Data Science, City University of Macau, 999078, SAR Macao, China
Fei Zhou
HAUT
deep learning · target detection · image processing
Xin Sun
Faculty of Data Science, City University of Macau, 999078, SAR Macao, China
Yang Hua
School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, BT7 1NN Belfast, U.K.
Jungong Han
Chair Professor in Computer Vision, University of Sheffield, UK, FIAPR, FAAIA
Computer Vision · Video Analytics · Machine Learning
Liang-Jie Zhang
Distinguished Professor @ Shenzhen University (SZU), ACM DS & IEEE Fellow, ex-RSM @ IBM & ex-CTO @ Kingdee
Services Computing · AI · Blockchain & IoT · SOA & Cloud Computing · Digital Transformation