DST-Net: A Dual-Stream Transformer with Illumination-Independent Feature Guidance and Multi-Scale Spatial Convolution for Low-Light Image Enhancement

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low-light images often suffer from detail loss due to luminance attenuation and structural distortion, and existing methods struggle to simultaneously achieve global enhancement and fine-detail recovery. This work proposes a dual-stream Transformer network that decouples illumination-invariant feature priors to guide the enhancement process and introduces a multi-scale spatial fusion module to jointly restore high-frequency edges and cross-channel spatial correlations. The method leverages Difference-of-Gaussians (DoG), LAB color space, and VGG-16 features to extract texture priors, and incorporates cross-modal attention along with pseudo-3D/3D gradient convolutions. Evaluated on the LOL dataset, the approach achieves a PSNR of 25.64 dB, outperforming state-of-the-art methods in both perceptual quality and quantitative metrics, while also demonstrating strong cross-scene generalization on the LSRW dataset.
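To make the prior-extraction idea concrete, here is a minimal NumPy sketch of a Difference-of-Gaussians (DoG) response, one of the illumination-agnostic texture priors the summary mentions. This is an illustrative 2-D grayscale version, not the paper's implementation; the sigma values, kernel radius, and the separable-blur helper are our assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D Gaussian kernel, normalized to sum to 1
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # separable Gaussian blur: 1-D convolution along rows, then columns
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def dog_prior(gray, sigma=1.0, k=1.6):
    # Difference of Gaussians: a band-pass response that keeps edges and
    # texture while cancelling slowly varying illumination, which is why
    # it serves as an illumination-agnostic signal prior
    return blur(gray, sigma) - blur(gray, k * sigma)
```

Because the two blurs agree on smooth regions, the DoG response is near zero on flat illumination and peaks at edges, which is the property that makes it useful as a guidance signal.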

📝 Abstract
Low-light image enhancement aims to restore the visibility of images captured by visual sensors in dim environments by addressing their inherent signal degradations, such as luminance attenuation and structural corruption. Although numerous algorithms attempt to improve image quality, existing methods often cause a severe loss of intrinsic signal priors. To overcome these challenges, we propose a Dual-Stream Transformer Network (DST-Net) based on illumination-agnostic signal prior guidance and multi-scale spatial convolutions. First, to address the loss of critical signal features under low-light conditions, we design a feature extraction module. This module integrates Difference of Gaussians (DoG), LAB color space transformations, and VGG-16 for texture extraction, utilizing decoupled illumination-agnostic features as signal priors to continuously guide the enhancement process. Second, we construct a dual-stream interaction architecture. By employing a cross-modal attention mechanism, the network leverages the extracted priors to dynamically rectify the deteriorated signal representation of the enhanced image, ultimately achieving iterative enhancement through differentiable curve estimation. Furthermore, to overcome the inability of existing methods to preserve fine structures and textures, we propose a Multi-Scale Spatial Fusion Block (MSFB) featuring pseudo-3D and 3D gradient operator convolutions. This module integrates explicit gradient operators to recover high-frequency edges while capturing inter-channel spatial correlations via multi-scale spatial convolutions. Extensive evaluations and ablation studies demonstrate that DST-Net achieves superior performance in subjective visual quality and objective metrics. Specifically, our method achieves a PSNR of 25.64 dB on the LOL dataset. Subsequent validation on the LSRW dataset further confirms its robust cross-scene generalization.
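The abstract's dual-stream interaction, in which extracted priors rectify the degraded representation via cross-modal attention, can be sketched as standard scaled dot-product attention with queries from the prior stream and keys/values from the enhancement stream. This wiring is our reading of the abstract, not the paper's exact architecture; the projection matrices and token shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(prior_tokens, image_tokens, Wq, Wk, Wv):
    # Queries come from the illumination-agnostic prior stream; keys and
    # values come from the enhancement stream, so each prior token gathers
    # the image features it should rectify (assumed wiring, single head).
    Q = prior_tokens @ Wq          # (n_prior, d)
    K = image_tokens @ Wk          # (n_img, d)
    V = image_tokens @ Wv          # (n_img, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (n_prior, n_img)
    return attn @ V                # (n_prior, d)
```

Each output row is a convex combination of the value rows, so the prior stream can only reweight image-stream features, not invent new ones, which matches the "rectification" role described in the abstract.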
Problem

Research questions and friction points this paper is trying to address.

low-light image enhancement
signal prior loss
structural corruption
luminance attenuation
texture preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Illumination-independent feature guidance
Dual-stream Transformer
Multi-scale spatial convolution
Cross-modal attention
Signal prior
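The "multi-scale spatial convolution" and explicit gradient operators listed above can be illustrated with a fixed Sobel kernel embedded in a plain convolution. This is a 2-D toy version of the idea (the paper's MSFB uses pseudo-3D/3-D gradient operator convolutions); the kernels and the naive valid-mode convolution are illustrative, not the paper's code.

```python
import numpy as np

# Fixed Sobel kernels: an explicit, non-learned gradient operator of the
# kind embedded in convolutions to recover high-frequency edges.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    # naive valid-mode 2-D correlation, for illustration only
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def gradient_magnitude(img):
    # combine horizontal and vertical gradient responses into an edge map
    gx = conv2d(img, SOBEL_X)
    gy = conv2d(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

Because the operator is fixed rather than learned, its high-frequency response is guaranteed at every training step, which is the usual motivation for building explicit gradient kernels into a fusion block.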
Yicui Shi
College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing, 400044, China
Yuhan Chen
College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing, 400044, China
Xiangfei Huang
College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing, 400044, China
Zhenguo Wang
Shanghai Zhenhua Heavy Industries Co., Ltd., Shanghai, 200125, China
Wenxuan Yu
College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing, 400044, China