From Local Windows to Adaptive Candidates via Individualized Exploration: Rethinking Attention for Image Super-Resolution

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Transformer-based image super-resolution methods typically employ fixed grouped attention mechanisms, which overlook the asymmetric nature of similarity among tokens and thereby limit modeling flexibility and efficiency. To address this limitation, this work proposes the Individualized Exploratory Transformer (IET), which introduces an Individualized Exploratory Attention (IEA) mechanism. IEA enables each token to adaptively select its own content-aware, independent attention candidates, facilitating precise and efficient information aggregation. By moving beyond the constraints of conventional fixed-window or grouped attention schemes, the proposed method achieves state-of-the-art performance on standard super-resolution benchmarks at comparable computational complexity to existing approaches.

📝 Abstract
Single Image Super-Resolution (SISR) is a fundamental computer vision task that aims to reconstruct a high-resolution (HR) image from a low-resolution (LR) input. Transformer-based methods have achieved remarkable performance by modeling long-range dependencies in degraded images. However, their feature-intensive attention computation incurs high computational cost. To improve efficiency, most existing approaches partition images into fixed groups and restrict attention within each group. Such group-wise attention overlooks the inherent asymmetry in token similarities, thereby failing to enable flexible and token-adaptive attention computation. To address this limitation, we propose the Individualized Exploratory Transformer (IET), which introduces a novel Individualized Exploratory Attention (IEA) mechanism that allows each token to adaptively select its own content-aware and independent attention candidates. This token-adaptive and asymmetric design enables more precise information aggregation while maintaining computational efficiency. Extensive experiments on standard SR benchmarks demonstrate that IET achieves state-of-the-art performance under comparable computational complexity.
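The paper does not specify the IEA mechanism in detail here, but the core idea the abstract describes, each token independently selecting its own top-scoring attention candidates rather than attending within a fixed group, can be sketched as follows. This is a minimal NumPy illustration under assumed semantics (per-token top-k similarity selection with softmax over the selected candidates only); the function and variable names are hypothetical, not the authors' implementation.

```python
import numpy as np

def topk_candidate_attention(x, k):
    """Toy sketch of token-adaptive attention (hypothetical, not the paper's IEA):
    each token attends only to its own k most similar tokens, so candidate
    sets are individualized and may be asymmetric (i selects j without j
    selecting i).

    x: (n, d) token features; k: number of candidates per token (k < n).
    Returns the aggregated features (n, d) and candidate indices (n, k)."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                          # pairwise similarity logits
    # Each token independently keeps its k highest-scoring candidates.
    idx = np.argpartition(-scores, k, axis=1)[:, :k]       # (n, k) candidate indices
    cand_scores = np.take_along_axis(scores, idx, axis=1)  # (n, k)
    # Softmax restricted to each token's own candidate set.
    w = np.exp(cand_scores - cand_scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    out = np.einsum('nk,nkd->nd', w, x[idx])               # aggregate candidate values
    return out, idx

rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 16))
out, cand = topk_candidate_attention(tokens, k=3)
```

Restricting the softmax to k candidates per token reduces the attention cost from O(n²) pairs to O(nk) aggregations (the score matrix here is still dense for clarity), which is the efficiency motivation the abstract points to.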
Problem

Research questions and friction points this paper is trying to address.

Image Super-Resolution
Attention Mechanism
Token Asymmetry
Computational Efficiency
Transformer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Individualized Exploratory Attention
Token-adaptive Attention
Asymmetric Attention
Image Super-Resolution
Efficient Transformer