Switch Attention: Towards Dynamic and Fine-grained Hybrid Transformers

📅 2026-03-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of conventional Transformer architectures, where full self-attention incurs high computational complexity and sliding window attention suffers from restricted receptive fields. Existing hybrid models rely on static routing mechanisms that fail to adapt to dynamic computational demands. To overcome these challenges, we propose Switch Attention (SwiAttn), a novel mechanism that dynamically selects, for each token at every Transformer layer, between full attention and sliding window attention paths, enabling the first token-level, fine-grained dynamic routing in attention computation. We optimize computational efficiency through an adaptive regularization objective and facilitate architectural transfer via continued pretraining. Extensive evaluation across 23 benchmark datasets with context lengths of 4K and 32K demonstrates significant improvements in both performance and efficiency for long-context modeling.
๐Ÿ“ Abstract
The attention mechanism is the core component of modern transformer architectures. However, the computation of standard full attention scales quadratically with the sequence length, making it a major bottleneck in long-context language modeling. Sliding window attention restricts the context length for better efficiency at the cost of a narrower receptive field. While existing efforts attempt to combine the benefits of both by building hybrid models, they often resort to static, heuristically designed alternating patterns that limit efficient allocation of computation across scenarios. In this paper, we propose Switch Attention (SwiAttn), a novel hybrid transformer that enables dynamic and fine-grained routing between full attention and sliding window attention. For each token at each transformer layer, SwiAttn dynamically routes the computation to either a full-attention branch for global information aggregation or a sliding-window branch for efficient local pattern matching. An adaptive regularization objective is designed to encourage the model towards efficiency. Moreover, we adopt continual pretraining to optimize the model, transferring the full-attention architecture to the hybrid one. Extensive experiments are conducted on twenty-three benchmark datasets across both regular (4K) and long (32K) context lengths, demonstrating the effectiveness of the proposed method.
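The token-level routing the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the module and router names, the soft gating rule, the shared projection weights between branches, and the window size are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwitchAttentionSketch(nn.Module):
    """Illustrative per-token routing between a causal full-attention branch
    and a causal sliding-window branch. Details are assumptions, not taken
    from the SwiAttn paper."""

    def __init__(self, d_model: int, n_heads: int, window: int = 4):
        super().__init__()
        # A single attention module reused for both branches (an assumption);
        # only the attention mask differs between them.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.window = window
        # Hypothetical router: a linear gate scoring {full, local} per token.
        self.router = nn.Linear(d_model, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, _ = x.shape
        # Causal mask for the full-attention branch (True = masked out).
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        # Sliding-window mask: token i also ignores tokens older than
        # i - window + 1, so each token sees at most `window` positions.
        idx = torch.arange(T)
        local = causal | (idx[None, :] < idx[:, None] - self.window + 1)
        full_out, _ = self.attn(x, x, x, attn_mask=causal)
        local_out, _ = self.attn(x, x, x, attn_mask=local)
        # Soft per-token routing weights; hard top-1 routing at inference
        # would be the efficiency-saving variant.
        gate = F.softmax(self.router(x), dim=-1)  # (B, T, 2)
        return gate[..., 0:1] * full_out + gate[..., 1:2] * local_out


torch.manual_seed(0)
layer = SwitchAttentionSketch(d_model=32, n_heads=4, window=4)
out = layer(torch.randn(2, 16, 32))
print(out.shape)  # torch.Size([2, 16, 32])
```

Note that this sketch computes both branches and mixes them, which is the differentiable training-time view; the efficiency gain the paper targets would come from executing only the selected branch per token at inference.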
Problem

Research questions and friction points this paper is trying to address.

attention mechanism
long-context modeling
computational efficiency
hybrid transformers
receptive field
Innovation

Methods, ideas, or system contributions that make the work stand out.

Switch Attention
Dynamic Routing
Hybrid Transformer
Efficient Attention
Continual Pretraining
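The "adaptive regularization objective" listed above is described only at a high level in the summary and abstract. For illustration, one plausible form is a budget penalty on how often the router picks the expensive full-attention branch; the function name, target budget, and gate layout below are assumptions:

```python
import torch


def efficiency_regularizer(gate: torch.Tensor,
                           target_local: float = 0.7) -> torch.Tensor:
    """Hypothetical efficiency regularizer: penalize deviation of the mean
    probability of routing to the cheap sliding-window branch from a target
    budget. The paper's actual objective may differ.

    gate: (batch, seq, 2) routing probabilities; index 1 = sliding-window.
    """
    local_frac = gate[..., 1].mean()
    return (local_frac - target_local) ** 2


torch.manual_seed(0)
gate = torch.softmax(torch.randn(2, 16, 2), dim=-1)
reg = efficiency_regularizer(gate)
```

Added to the language-modeling loss with a weighting coefficient, a term like this nudges the router toward the cheaper branch without forbidding full attention where it is needed.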
Yusheng Zhao
Peking University
LLM, Multimodal Learning, Transfer Learning, Spatio-temporal Forecasting, GNN
Hourun Li
State Key Laboratory for Multimedia Information Processing, School of Computer Science, PKU-Anker LLM Lab, Peking University
Bohan Wu
State Key Laboratory for Multimedia Information Processing, School of Computer Science, PKU-Anker LLM Lab, Peking University
Jingyang Yuan
Peking University
LLM, AI for Science
Meng Zhang
Huawei Noah's Ark Lab
machine learning, natural language processing
Yichun Yin
Noah's Ark Lab, Huawei
LLM
Lifeng Shang
Huawei Noah's Ark Lab
Machine Learning, Computer Vision, Pattern Recognition, Natural Language Processing
Ming Zhang
State Key Laboratory for Multimedia Information Processing, School of Computer Science, PKU-Anker LLM Lab, Peking University