AI Summary
This work addresses the high computational complexity of standard softmax self-attention in vision tasks, which scales quadratically with sequence length (O(N^2)), and the limited modeling capacity of linear attention despite its linear complexity (O(N)). To overcome these limitations, the authors propose SoLA-Vision, a fine-grained inter-layer mixing strategy that departs from conventional fixed intra-block hybrid schemes. By selectively inserting a small number of global softmax attention layers at critical positions within a predominantly linear attention architecture, SoLA-Vision substantially enhances model expressiveness while keeping computational overhead low. Extensive experiments show that SoLA-Vision consistently outperforms purely linear and other hybrid attention approaches on ImageNet-1K image classification as well as dense prediction tasks, striking a favorable balance between accuracy and efficiency.
Abstract
Standard softmax self-attention excels in vision tasks but incurs quadratic complexity O(N^2), limiting high-resolution deployment. Linear attention reduces the cost to O(N), yet its compressed state representations can impair modeling capacity and accuracy. We present an analytical study that contrasts linear and softmax attention for visual representation learning from a layer-stacking perspective. We further conduct systematic experiments on layer-wise hybridization patterns of linear and softmax attention. Our results show that, compared with rigid intra-block hybrid designs, fine-grained layer-wise hybridization can match or surpass performance while requiring fewer softmax layers. Building on these findings, we propose SoLA-Vision (Softmax-Linear Attention Vision), a flexible layer-wise hybrid attention backbone that enables fine-grained control over how linear and softmax attention are integrated. By strategically inserting a small number of global softmax layers, SoLA-Vision achieves a strong trade-off between accuracy and computational cost. On ImageNet-1K, SoLA-Vision outperforms purely linear and other hybrid attention models. On dense prediction tasks, it consistently surpasses strong baselines by a considerable margin. Code will be released.
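The complexity contrast the abstract describes can be made concrete with a minimal NumPy sketch. Softmax attention materializes an N×N score matrix (O(N²d)), while kernelized linear attention reorders the computation as φ(Q)(φ(K)ᵀV), replacing that matrix with a d×d state (O(Nd²)). The elu(x)+1 feature map used here is one common choice from the linear-attention literature, not necessarily the one used by SoLA-Vision.

```python
import numpy as np

# Illustrative sizes, not values from the paper.
N, d = 1024, 64  # sequence length, head dimension
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))

# Softmax attention: explicit N x N score matrix -> O(N^2 * d) time and O(N^2) memory.
scores = Q @ K.T / np.sqrt(d)
A = np.exp(scores - scores.max(axis=-1, keepdims=True))
out_softmax = (A / A.sum(axis=-1, keepdims=True)) @ V

def phi(x):
    # elu(x) + 1: a common positive feature map for linear attention (an assumption here).
    return np.where(x > 0, x + 1.0, np.exp(x))

# Linear attention: by associativity, phi(Q) @ (phi(K).T @ V) never forms an N x N matrix.
S = phi(K).T @ V            # (d, d) compressed key-value state
z = phi(K).sum(axis=0)      # (d,)  normalizer accumulated over keys
out_linear = (phi(Q) @ S) / (phi(Q) @ z)[:, None]

print(out_softmax.shape, out_linear.shape)  # both (N, d); same interface, different cost
```

The compressed (d, d) state is exactly the "compressed state representation" the abstract points to as the source of linear attention's reduced modeling capacity, which motivates re-inserting a few global softmax layers.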