🤖 AI Summary
Swin Transformer-based image super-resolution methods suffer from insufficient local feature modeling and inadequate joint channel-spatial interaction. To address these limitations, this paper proposes a Local-Global Alternating Aggregation Enhancement (LGAEE) architecture. The key contributions are: (1) a local-global alternating aggregation mechanism that jointly captures fine-grained local structures and long-range dependencies; (2) shift convolutions that explicitly model local spatial-channel coupling; and (3) a block-sparse global perception module combined with a low-parameter residual channel attention module to strengthen non-linear representation capability. Experiments on five standard benchmark datasets show that the method consistently outperforms state-of-the-art super-resolution models in PSNR and SSIM while remaining parameter- and computation-efficient, offering a favorable balance between performance and resource consumption.
📝 Abstract
Swin Transformer-based image super-resolution networks rely solely on the long-range relationships captured by window attention and shifted-window attention to extract features. This mechanism has two limitations. First, it focuses on global features while ignoring local ones. Second, it models only spatial interactions, neglecting channel features and channel-wise interactions, which limits the network's non-linear mapping ability. To address these limitations, this paper proposes enhanced Swin Transformer modules based on alternating aggregation of local and global features. In the local feature aggregation stage, we introduce a shift convolution to realize interaction between local spatial information and channel information. In the global feature aggregation stage, we introduce a block-sparse global perception module, which first reorganizes the spatial information and then feeds the reorganized features into a dense layer to implement global perception. A multi-scale self-attention module and a low-parameter residual channel attention module are then introduced to aggregate information at different scales. Finally, the proposed network is validated on five publicly available datasets; the experimental results show that it outperforms other state-of-the-art super-resolution networks.
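To make the local feature aggregation stage concrete, here is a minimal NumPy sketch of a shift-convolution-style operation: channel groups are shifted spatially in different directions and then mixed by a pointwise (1×1) projection, so that local spatial context and channel information interact. The five-way group split, zero padding, and function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def shift_conv(x, w):
    """Shift-convolution sketch (illustrative, not the paper's code).

    x: feature map of shape (C, H, W)
    w: pointwise mixing weights of shape (C_out, C)

    Four channel groups are shifted up/down/left/right by one pixel
    (zero padding fills vacated positions); the remainder group is
    left unshifted. A 1x1 projection then mixes channels, coupling
    the spatially shifted information across channels.
    """
    C, H, W = x.shape
    g = C // 5  # illustrative split: 4 shift directions + 1 identity group
    y = np.zeros_like(x)
    y[0*g:1*g, :-1, :] = x[0*g:1*g, 1:, :]    # shift up
    y[1*g:2*g, 1:, :]  = x[1*g:2*g, :-1, :]   # shift down
    y[2*g:3*g, :, :-1] = x[2*g:3*g, :, 1:]    # shift left
    y[3*g:4*g, :, 1:]  = x[3*g:4*g, :, :-1]   # shift right
    y[4*g:] = x[4*g:]                         # identity group
    # pointwise channel mixing: out[o, h, w] = sum_c w[o, c] * y[c, h, w]
    return np.einsum('oc,chw->ohw', w, y)
```

Because the shifts are pure memory moves and the only learned weights are the 1×1 projection, this style of operator gathers a cross-shaped local neighborhood at roughly the parameter cost of a pointwise convolution, which is why it suits a lightweight local-aggregation stage.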