🤖 AI Summary
This work addresses the coupled degradation of low resolution and insufficient illumination in low-light images by proposing a decoupled super-resolution approach that separates the task into illumination estimation and texture restoration. The key innovation is the Illumination-Guided Modulation (IGM) block within a dual-stream architecture: a dedicated illumination stream predicts a spatially varying illumination map, which the IGM block uses to dynamically modulate texture-stream features. This design enables simultaneous enhancement of dark regions and preservation of fine details in brighter areas. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on both the OmniNormal5 and OmniNormal15 datasets, outperforming existing approaches in quantitative metrics and visual quality.
📝 Abstract
Low-light image super-resolution (LLSR) is a challenging task due to the coupled degradation of low resolution and poor illumination. To address this, we propose the Guided Texture and Feature Modulation Network (GTFMN), a novel framework that decouples the LLSR task into two sub-problems: illumination estimation and texture restoration. First, our network employs a dedicated Illumination Stream that predicts a spatially varying illumination map capturing the lighting distribution of the scene. This map is then used as an explicit guide within our novel Illumination-Guided Modulation (IGM) block to dynamically modulate features in the Texture Stream. This mechanism achieves spatially adaptive restoration, enabling the network to intensify enhancement in poorly lit regions while preserving details in well-exposed areas. Extensive experiments on the OmniNormal5 and OmniNormal15 datasets demonstrate that GTFMN outperforms competing methods in both quantitative metrics and visual quality.
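To make the modulation mechanism concrete, below is a minimal PyTorch sketch of an illumination-guided modulation block, assuming a FiLM/SFT-style scale-and-shift conditioning on a single-channel illumination map. The `IGMBlock` class, channel counts, and layer layout are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class IGMBlock(nn.Module):
    """Hypothetical sketch of an Illumination-Guided Modulation block.

    The illumination map is projected to per-pixel scale and shift
    parameters that modulate texture-stream features; GTFMN's actual
    block internals may differ.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Project the single-channel illumination map to modulation parameters.
        self.to_scale = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.to_shift = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        # Residual refinement of the modulated texture features.
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, texture_feat: torch.Tensor, illum_map: torch.Tensor) -> torch.Tensor:
        # Spatially varying modulation: dark regions can receive stronger
        # enhancement while well-lit regions stay closer to identity.
        scale = self.to_scale(illum_map)
        shift = self.to_shift(illum_map)
        modulated = texture_feat * (1 + scale) + shift
        return texture_feat + self.refine(modulated)


if __name__ == "__main__":
    block = IGMBlock(channels=64)
    feats = torch.randn(1, 64, 32, 32)   # texture-stream features
    illum = torch.rand(1, 1, 32, 32)     # predicted illumination map in [0, 1]
    out = block(feats, illum)
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Because the modulation parameters are predicted per pixel from the illumination map, the same block can apply strong correction in dark regions while leaving well-exposed regions largely unchanged, which is the spatially adaptive behavior the abstract describes.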