SoLA-Vision: Fine-grained Layer-wise Linear Softmax Hybrid Attention

šŸ“… 2026-01-16
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
This work addresses the high computational complexity of standard Softmax self-attention in vision tasks, which scales quadratically with sequence length (O(N²)), and the limited modeling capacity of linear attention despite its linear complexity (O(N)). To overcome these limitations, the authors propose SoLA-Vision, a fine-grained inter-layer mixing strategy that departs from conventional fixed intra-block attention schemes. By selectively integrating a small number of global Softmax attention layers at critical positions within a predominantly linear attention architecture, SoLA-Vision substantially enhances model expressiveness while maintaining low computational overhead. Extensive experiments demonstrate that SoLA-Vision consistently outperforms purely linear and other hybrid attention approaches on ImageNet-1K image classification as well as dense prediction tasks, achieving an advantageous balance between accuracy and efficiency.
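The complexity contrast the summary describes can be made concrete with a minimal NumPy sketch. The softmax path materializes an N×N attention matrix (O(N²)), while the linear path compresses keys and values into a d×d state first (O(N·d²)). The ReLU-plus-epsilon feature map below is a common positivity trick from the linear-attention literature, not necessarily the one used in SoLA-Vision.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: builds an (N, N) score matrix -> O(N^2) in N.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    P = np.exp(scores)
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized attention: the (d, d) state K^T V is built once -> O(N * d^2).
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                 # (d, d) compressed state
    Z = Qf @ Kf.sum(axis=0)       # (N,) per-query normalizer
    return (Qf @ KV) / Z[:, None]

N, d = 16, 8
rng = np.random.default_rng(0)
Q, K, V = [rng.standard_normal((N, d)) for _ in range(3)]
out_softmax = softmax_attention(Q, K, V)
out_linear = linear_attention(Q, K, V)
```

Both paths produce convex combinations of the value rows; the difference is that the linear path never instantiates the N×N matrix, which is exactly what the compressed-state critique in the summary refers to.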

šŸ“ Abstract
Standard softmax self-attention excels in vision tasks but incurs quadratic complexity O(N^2), limiting high-resolution deployment. Linear attention reduces the cost to O(N), yet its compressed state representations can impair modeling capacity and accuracy. We present an analytical study that contrasts linear and softmax attention for visual representation learning from a layer-stacking perspective. We further conduct systematic experiments on layer-wise hybridization patterns of linear and softmax attention. Our results show that, compared with rigid intra-block hybrid designs, fine-grained layer-wise hybridization can match or surpass performance while requiring fewer softmax layers. Building on these findings, we propose SoLA-Vision (Softmax-Linear Attention Vision), a flexible layer-wise hybrid attention backbone that enables fine-grained control over how linear and softmax attention are integrated. By strategically inserting a small number of global softmax layers, SoLA-Vision achieves a strong trade-off between accuracy and computational cost. On ImageNet-1K, SoLA-Vision outperforms purely linear and other hybrid attention models. On dense prediction tasks, it consistently surpasses strong baselines by a considerable margin. Code will be released.
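The layer-wise hybridization idea in the abstract can be sketched with a toy cost model. The layout below places softmax layers at two hypothetical depths (the paper's actual placement is not specified here), and the per-layer costs are the usual asymptotic estimates: roughly N²·d for softmax and N·d² for linear attention.

```python
def hybrid_layout(depth, softmax_positions):
    # Fine-grained layer-wise mixing: mostly linear attention,
    # with global softmax layers at the given (hypothetical) depths.
    pos = set(softmax_positions)
    return ['softmax' if i in pos else 'linear' for i in range(depth)]

def attention_flops(layout, N, d):
    # Rough per-layer attention cost: softmax ~ N^2 * d, linear ~ N * d^2.
    return sum(N * N * d if kind == 'softmax' else N * d * d for kind in layout)

# A 12-layer backbone with softmax inserted at two deeper positions.
layout = hybrid_layout(12, [7, 11])
N, d = 4096, 64               # e.g. a 64x64 token grid, head dim 64
cost_hybrid = attention_flops(layout, N, d)
cost_full = attention_flops(['softmax'] * 12, N, d)
```

When N ≫ d (the high-resolution regime the abstract targets), the hybrid layout's cost is dominated by its two softmax layers, so keeping that count small is what drives the accuracy/cost trade-off.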
Problem

Research questions and friction points this paper is trying to address.

vision
self-attention
linear attention
softmax attention
computational complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

hybrid attention
linear attention
softmax attention
layer-wise design
vision transformer
Ruibang Li
State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), CASIA; School of Artificial Intelligence, University of Chinese Academy of Sciences; Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information
Guan Luo
Tsinghua University · 3D generation
Yiwei Zhang
State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), CASIA; School of Artificial Intelligence, University of Chinese Academy of Sciences; Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information
Jin Gao
State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), CASIA; School of Artificial Intelligence, University of Chinese Academy of Sciences; Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information
Bing Li
Professor, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences · Video Analysis · Color Constancy · Web Mining · Multimedia
Weiming Hu
Shanghai Jiao Tong University · Computer Architecture