AI Summary
This work addresses the limitations of conventional Transformer architectures, where full self-attention incurs high computational complexity and sliding window attention suffers from restricted receptive fields. Existing hybrid models rely on static routing mechanisms that fail to adapt to dynamic computational demands. To overcome these challenges, we propose Switch Attention (SwiAttn), a novel mechanism that dynamically selects, for each token at every Transformer layer, between full attention and sliding window attention paths, enabling the first token-level fine-grained dynamic routing in attention computation. We optimize computational efficiency through an adaptive regularization objective and facilitate architectural transfer via continued pretraining. Extensive evaluation across 23 benchmark datasets with context lengths of 4K and 32K demonstrates significant improvements in both performance and efficiency for long-context modeling.
Abstract
The attention mechanism is the core component of modern Transformer architectures. However, standard full attention scales quadratically with sequence length, making it a major bottleneck in long-context language modeling. Sliding window attention restricts the attended context for better efficiency, at the cost of a narrower receptive field. While existing efforts attempt to capture the benefits of both by building hybrid models, they often resort to static, heuristically designed alternating patterns that limit efficient allocation of computation across scenarios. In this paper, we propose Switch Attention (SwiAttn), a novel hybrid Transformer that enables dynamic, fine-grained routing between full attention and sliding window attention. For each token at each Transformer layer, SwiAttn dynamically routes the computation to either a full-attention branch for global information aggregation or a sliding-window branch for efficient local pattern matching. An adaptive regularization objective encourages the model towards efficient routing. Moreover, we adopt continued pretraining to transfer a full-attention model to the hybrid architecture. Extensive experiments on twenty-three benchmark datasets across both regular (4K) and long (32K) context lengths demonstrate the effectiveness of the proposed method.
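To make the routing idea concrete, the following is a minimal NumPy sketch of a single SwiAttn-style layer. It is not the authors' implementation: the linear gate (`w_router`), its hard threshold, and the specific causal masking are illustrative assumptions; the paper's actual router, training objective, and regularization are not reproduced here. The sketch only shows the core mechanism, with each token's output taken from either a (causal) full-attention branch or a sliding-window branch according to a per-token decision.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(q, k, v):
    # causal full attention: query i attends to all positions <= i
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)  # future positions
    scores = np.where(mask, -np.inf, scores)
    return softmax(scores) @ v

def sliding_window_attention(q, k, v, window):
    # causal sliding window: query i attends to the `window` most recent positions
    n, d = q.shape
    out = np.empty_like(v)
    for i in range(n):
        lo = max(0, i - window + 1)
        s = q[i] @ k[lo:i + 1].T / np.sqrt(d)
        out[i] = softmax(s) @ v[lo:i + 1]
    return out

def switch_attention(x, wq, wk, wv, w_router, window):
    # hypothetical per-token router: a linear gate with a hard threshold
    # decides, for each token, which branch supplies its output
    q, k, v = x @ wq, x @ wk, x @ wv
    route_full = (x @ w_router) > 0        # boolean decision per token
    full_out = full_attention(q, k, v)
    win_out = sliding_window_attention(q, k, v, window)
    return np.where(route_full[:, None], full_out, win_out)
```

With `window` equal to the sequence length, the sliding-window branch coincides with causal full attention, so the router's choice becomes irrelevant; efficiency gains come from tokens routed to a small window. A trainable version would replace the hard threshold with a differentiable relaxation (e.g., a straight-through or Gumbel-style estimator) so the routing decision can be learned end to end.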