AI Summary
Standard sliding window attention (SWA) in Transformers employs a fixed window size across all attention heads and layers, limiting its ability to capture multi-scale contextual information. Method: This paper proposes Multi-Scale Window Attention (MSWA), which assigns heterogeneous window sizes across attention heads and network layers, with window sizes increasing progressively with layer depth, enabling fine-grained local modeling in shallow layers and long-range dependency capture in deeper layers. Contribution/Results: MSWA introduces the first cross-head and cross-layer coordinated window-size design, integrating a hierarchical, progressive window-growth mechanism. While preserving O(L) linear time and memory complexity, MSWA significantly outperforms standard local attention on language modeling and commonsense reasoning benchmarks, achieving higher accuracy and faster convergence.
Abstract
Transformer-based LLMs have achieved exceptional performance across a wide range of NLP tasks. However, the standard self-attention mechanism suffers from quadratic time complexity and a cache size that grows linearly with sequence length. Sliding window attention (SWA) addresses this problem by restricting the attention range to a fixed-size local context window. Nevertheless, SWA employs a uniform window size for every head in every layer, making it inefficient at capturing context of varying scales. To mitigate this limitation, we propose Multi-Scale Window Attention (MSWA), which applies diverse window sizes across heads and layers in the Transformer. It not only allows different window sizes among heads within the same layer but also progressively increases the window-size allocation from shallow to deep layers, enabling the model to capture contextual information at different lengths and distances. Experimental results on language modeling and common-sense reasoning tasks substantiate that MSWA outperforms traditional local attention in both effectiveness and efficiency.
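To make the idea concrete, here is a minimal NumPy sketch of causal sliding-window attention with per-head window sizes. The allocation rule in `mswa_window_sizes` (doubling windows across heads, scaling the budget with depth) is a hypothetical illustration of the cross-head/cross-layer scheme described above, not the paper's exact formula, and all function names are our own.

```python
import numpy as np

def window_mask(seq_len, w):
    # Causal sliding-window mask: position i attends to positions [i-w+1, i].
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - w)

def mswa_window_sizes(n_layers, n_heads, base_window):
    # Hypothetical allocation: within a layer, heads get geometrically
    # increasing windows; across layers, the per-layer budget grows with depth.
    sizes = []
    for layer in range(n_layers):
        layer_base = base_window * (layer + 1)          # deeper layers see farther
        sizes.append([max(1, layer_base // 2 ** (n_heads - 1 - h))
                      for h in range(n_heads)])
    return sizes

def mswa_attention(q, k, v, head_windows):
    # q, k, v: arrays of shape (heads, seq, dim); head_windows: one window per head.
    h, n, d = q.shape
    out = np.empty_like(v)
    for hi in range(h):
        scores = q[hi] @ k[hi].T / np.sqrt(d)
        scores = np.where(window_mask(n, head_windows[hi]), scores, -np.inf)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[hi] = weights @ v[hi]
    return out
```

Because each position attends to at most `w` keys, the cost per head is O(L·w) rather than O(L²), which is how the linear-complexity claim above is realized when the windows are fixed constants.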