Image Super-resolution Reconstruction Network based on Enhanced Swin Transformer via Alternating Aggregation of Local-Global Features

📅 2023-12-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Swin Transformer-based image super-resolution methods suffer from insufficient local feature modeling and inadequate joint channel-spatial interaction. To address these limitations, this paper proposes enhanced Swin Transformer modules that alternately aggregate local and global features. The key contributions are: (1) a local-global alternating aggregation mechanism that jointly captures fine-grained local structure and long-range dependencies; (2) shift convolutions that explicitly model local spatial-channel coupling; and (3) a block-sparse global perception module combined with a low-parameter residual channel attention module to strengthen nonlinear representation capability. Experiments on five standard benchmark datasets show that the method outperforms state-of-the-art super-resolution models in PSNR and SSIM while maintaining a favorable balance between performance and parameter and computational cost.

📝 Abstract
The Swin Transformer image super-resolution reconstruction network relies solely on the long-range relationships of window attention and shifted-window attention to explore features. This mechanism has two limitations. On the one hand, it focuses only on global features while ignoring local features. On the other hand, it is concerned only with spatial feature interactions while ignoring channel features and channel interactions, limiting its non-linear mapping ability. To address these limitations, this paper proposes enhanced Swin Transformer modules via alternating aggregation of local-global features. In the local feature aggregation stage, we introduce a shift convolution to realize the interaction between local spatial information and channel information. In the global feature aggregation stage, we introduce a block-sparse global perception module, which first reorganizes the spatial information and then feeds the recombined information into a dense layer to implement global perception. After that, a multi-scale self-attention module and a low-parameter residual channel attention module are introduced to realize information aggregation at different scales. Finally, the proposed network is validated on five publicly available datasets. The experimental results show that the proposed network outperforms the other state-of-the-art super-resolution networks.
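The shift convolution mentioned in the abstract can be sketched roughly as follows: channel groups are spatially shifted in different directions, and a subsequent 1×1 convolution (plain channel mixing) couples the displaced local context with channel interaction. This is a minimal numpy sketch; the five-direction grouping, the wrap-around `np.roll` shifting, and the mixing weights are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def shift_conv(x, weight):
    """Sketch of a shift convolution.

    x:      (C, H, W) feature map.
    weight: (C_out, C) weights of a 1x1 convolution.
    Channels are split into five groups (left, right, up, down,
    identity); each group is spatially shifted, then a 1x1 conv
    mixes channels, realizing spatial-channel interaction.
    np.roll wraps around at borders, a simplification of the
    zero-padded shift a real implementation would likely use.
    """
    c, h, w = x.shape
    directions = [(0, -1), (0, 1), (-1, 0), (1, 0), (0, 0)]
    group = c // len(directions)
    shifted = np.zeros_like(x)
    for g, (dy, dx) in enumerate(directions):
        lo = g * group
        hi = c if g == len(directions) - 1 else lo + group
        shifted[lo:hi] = np.roll(x[lo:hi], shift=(dy, dx), axis=(1, 2))
    # 1x1 convolution == per-pixel linear map over channels.
    return np.einsum('oc,chw->ohw', weight, shifted)

# Usage: a toy 8-channel 4x4 feature map, mixed back to 8 channels.
x = np.random.randn(8, 4, 4)
w = np.random.randn(8, 8)
y = shift_conv(x, w)
print(y.shape)  # (8, 4, 4)
```

With an identity mixing matrix the output is just the shifted input, so the identity-direction channels pass through unchanged, which makes the shifting step easy to verify in isolation.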
Problem

Research questions and friction points this paper is trying to address.

Enhancing Swin Transformer for better image super-resolution
Addressing local feature neglect in global attention mechanisms
Improving spatial-channel interactions for superior nonlinear mapping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Alternately aggregates local and global features
Uses shift convolution for spatial-channel interactions
Introduces multi-scale self-attention and residual channel attention
Yuming Huang, Yingpin Chen, Changhui Wu, Hanrong Xie, Binhui Song, Hui Wang
Minnan Normal University, School of Physics and Engineering, Zhangzhou, China, 363000