Rank-Factorized Implicit Neural Bias: Scaling Super-Resolution Transformer with FlashAttention

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the incompatibility of existing super-resolution Transformers with efficient attention mechanisms like FlashAttention, which stems from their reliance on relative positional bias (RPB) that cannot be expressed in dot-product form. To overcome this limitation, the authors propose Rank-factorized Implicit Neural Bias (RIB), the first approach to model positional bias using low-rank implicit neural representations. By reformulating bias addition as a dot-product operation, RIB integrates seamlessly with FlashAttention, significantly improving model scalability. Combined with convolution-based local attention and a cyclic window mechanism, RIB supports larger window sizes and training patch dimensions. The method achieves a PSNR of 35.63 dB on Urban100×2 and delivers 2.1× and 2.9× speedups in training and inference, respectively, compared to the RPB-based PFT.

📝 Abstract
Recent Super-Resolution (SR) methods mainly adopt Transformers for their strong long-range modeling capability and exceptional representational capacity. However, most SR Transformers rely heavily on relative positional bias (RPB), which prevents them from leveraging hardware-efficient attention kernels such as FlashAttention. This limitation imposes a prohibitive computational burden during both training and inference, severely restricting attempts to scale SR Transformers by enlarging the training patch size or the self-attention window. Consequently, unlike other domains that actively exploit the inherent scalability of Transformers, SR Transformers remain heavily focused on effectively utilizing limited receptive fields. In this paper, we propose Rank-factorized Implicit Neural Bias (RIB), an alternative to RPB that enables FlashAttention in SR Transformers. Specifically, RIB approximates positional bias using low-rank implicit neural representations and concatenates them with pixel content tokens in a channel-wise manner, turning the element-wise bias addition in attention score computation into a dot-product operation. Further, we introduce a convolutional local attention and a cyclic window strategy to fully leverage the advantages of long-range interactions enabled by RIB and FlashAttention. We enlarge the window size up to 96×96 while jointly scaling the training patch size and the dataset size, maximizing the benefits of Transformers in the SR task. As a result, our network achieves 35.63 dB PSNR on Urban100×2, while reducing training and inference time by 2.1× and 2.9×, respectively, compared to the RPB-based SR Transformer (PFT).
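The core identity behind RIB can be illustrated with a minimal numpy sketch: if the positional bias matrix is approximated by a low-rank product B ≈ UVᵀ, then concatenating the factors channel-wise with the query and key tokens makes the bias addition part of a single dot product. All names and sizes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 16, 32, 4  # tokens, head dim, bias rank (illustrative sizes)

Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
# Low-rank factors approximating the positional bias: B ≈ U @ V.T
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))

# RPB-style scores: the element-wise bias addition is what blocks
# fused kernels such as FlashAttention.
scores_biased = Q @ K.T + U @ V.T

# RIB-style reformulation: channel-wise concatenation folds the bias
# into the dot product itself, so the n x n bias matrix never needs
# to be materialized inside the attention kernel.
Q_aug = np.concatenate([Q, U], axis=-1)  # (n, d + r)
K_aug = np.concatenate([K, V], axis=-1)  # (n, d + r)
scores_fused = Q_aug @ K_aug.T

assert np.allclose(scores_biased, scores_fused)
```

Because [Q, U] · [K, V]ᵀ = QKᵀ + UVᵀ holds exactly, a FlashAttention-style kernel that only sees augmented queries and keys computes the biased scores without any modification.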
Problem

Research questions and friction points this paper is trying to address.

Super-Resolution
Transformer
Relative Positional Bias
FlashAttention
Scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rank-factorized Implicit Neural Bias
FlashAttention
Super-Resolution Transformer
low-rank implicit representation
scalable attention
Dongheon Lee
Machine Intelligence Laboratory, University of Seoul, Korea
Seokju Yun
University of Seoul
representation learning · multi-modal learning · 3D/4D generation
Jaegyun Im
Machine Intelligence Laboratory, University of Seoul, Korea
Youngmin Ro
Assistant Professor, University of Seoul
deep learning · computer vision