MSWA: Refining Local Attention with Multi-Scale Window Attention

๐Ÿ“… 2025-01-02
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Standard sliding window attention (SWA) in Transformers employs a fixed window size across all attention heads and layers, limiting its ability to capture multi-scale contextual information. Method: This paper proposes Multi-Scale Window Attention (MSWA), which dynamically assigns heterogeneous window sizes across attention heads and network layers, with window sizes increasing progressively with layer depthโ€”enabling fine-grained local modeling in shallow layers and long-range dependency capture in deeper layers. Contribution/Results: MSWA introduces the first cross-head and cross-layer coordinated window-size design, integrating a hierarchical, progressive window growth mechanism. While preserving O(L) linear time and memory complexity, MSWA significantly outperforms standard local attention on language modeling and commonsense reasoning benchmarks, achieving higher accuracy and faster convergence.

๐Ÿ“ Abstract
Transformer-based LLMs have achieved exceptional performance across a wide range of NLP tasks. However, the standard self-attention mechanism suffers from quadratic time complexity and a cache size that grows linearly with sequence length. Sliding window attention (SWA) solves this problem by restricting the attention range to a fixed-size local context window. Nevertheless, SWA employs a uniform window size for each head in each layer, making it inefficient in capturing context of varying scales. To mitigate this limitation, we propose Multi-Scale Window Attention (MSWA), which applies diverse window sizes across heads and layers in the Transformer. It not only allows for different window sizes among heads within the same layer but also progressively increases window size allocation from shallow to deep layers, thus enabling the model to capture contextual information with different lengths and distances. Experimental results on language modeling and common-sense reasoning tasks substantiate that MSWA outperforms traditional local attention in both effectiveness and efficiency.
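The core idea, per-head causal window masks whose sizes vary across heads (and grow with layer depth), can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names (`mswa_mask`, `mswa_layer`) and the per-head window list are assumptions for exposition, and the paper's actual window-allocation scheme across heads and layers is not reproduced here.

```python
import numpy as np

def mswa_mask(seq_len, window):
    # Causal sliding-window mask: query position i attends to keys in
    # [i - window + 1, i]. Returns a (seq_len, seq_len) boolean matrix.
    idx = np.arange(seq_len)
    rel = idx[:, None] - idx[None, :]  # query-to-key distance (>= 0 is causal)
    return (rel >= 0) & (rel < window)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mswa_layer(q, k, v, windows):
    """One multi-scale window attention layer (illustrative sketch).

    q, k, v: arrays of shape (heads, seq_len, dim).
    windows: one window size per head, e.g. smaller windows for some
    heads and larger ones for others within the same layer.
    """
    heads, seq_len, dim = q.shape
    out = np.empty_like(q)
    for h, w in enumerate(windows):
        scores = q[h] @ k[h].T / np.sqrt(dim)
        # Mask out keys outside head h's local window before softmax.
        scores = np.where(mswa_mask(seq_len, w), scores, -np.inf)
        out[h] = softmax(scores) @ v[h]
    return out
```

A deeper layer would simply be called with larger entries in `windows` (for example, doubling a base size with depth), which realizes the shallow-to-deep window growth described in the abstract while keeping each head's cost linear in sequence length.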
Problem

Research questions and friction points this paper is trying to address.

Efficient Attention Mechanism
Transformer Models
Language Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Scale Window Attention
Efficient Text Processing
Enhanced Language Understanding
๐Ÿ”Ž Similar Papers
No similar papers found.
Yixing Xu
AMD
machine learning, deep learning
Shivank Nag
PhD Student, The University of Illinois Urbana-Champaign
Deep Learning, In-Silico Proteomics, Molecular Dynamics
Dong Li
Advanced Micro Devices, Inc., Beijing, China
Lu Tian
Advanced Micro Devices, Inc., Beijing, China
E. Barsoum
Advanced Micro Devices, Inc., Beijing, China