Efficient Attention Mechanisms for Large Language Models: A Survey

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
The quadratic time and memory complexity of Transformer self-attention severely hinders efficient long-context modeling. This paper presents a systematic survey of efficient attention mechanisms for large language models, proposing the first unified taxonomy encompassing both linearization paradigms (e.g., kernel-based approximations and fast-weight dynamics) and sparsification paradigms (e.g., fixed patterns, block-wise routing, and clustering-driven selection). It integrates algorithmic design with hardware-aware optimization, clarifying integration pathways for purely efficient attention and for hybrid architectures in large-scale pretraining. Furthermore, it establishes a comprehensive reference framework spanning theoretical analysis, algorithmic implementation, and engineering deployment, delivering a systematic design paradigm and practical guidelines for scalable long-context language models.

📝 Abstract
Transformer-based architectures have become the prevailing backbone of large language models. However, the quadratic time and memory complexity of self-attention remains a fundamental obstacle to efficient long-context modeling. To address this limitation, recent research has introduced two principal categories of efficient attention mechanisms. Linear attention methods achieve linear complexity through kernel approximations, recurrent formulations, or fast-weight dynamics, thereby enabling scalable inference with reduced computational overhead. Sparse attention techniques, in contrast, limit attention computation to selected subsets of tokens based on fixed patterns, block-wise routing, or clustering strategies, enhancing efficiency while preserving contextual coverage. This survey provides a systematic and comprehensive overview of these developments, integrating both algorithmic innovations and hardware-level considerations. In addition, we analyze the incorporation of efficient attention into large-scale pre-trained language models, including both architectures built entirely on efficient attention and hybrid designs that combine local and global components. By aligning theoretical foundations with practical deployment strategies, this work aims to serve as a foundational reference for advancing the design of scalable and efficient language models.
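The kernel-approximation idea behind linear attention can be sketched briefly: replacing the softmax with a nonnegative feature map phi lets the key-value product be aggregated once, so cost grows linearly with sequence length rather than quadratically. A minimal NumPy sketch follows; the ReLU-based feature map is an illustrative assumption, not one prescribed by this survey or any particular method it covers.

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized attention sketch: softmax(QK^T)V is approximated by
    phi(Q) [phi(K)^T V] / normalizer, computable in O(n * d * d_v)."""
    Qp, Kp = phi(Q), phi(K)          # feature-mapped queries/keys, (n, d)
    KV = Kp.T @ V                    # (d, d_v), aggregated once over tokens
    Z = Qp @ Kp.sum(axis=0)          # (n,) per-query normalizer
    return (Qp @ KV) / Z[:, None]    # (n, d_v), linear in sequence length n

# Toy usage: 5 tokens, head dim 4, value dim 3
rng = np.random.default_rng(0)
out = linear_attention(rng.standard_normal((5, 4)),
                       rng.standard_normal((5, 4)),
                       rng.standard_normal((5, 3)))
```

The key design point is that `KV` and the normalizer depend only on the keys and values, so in a causal/recurrent formulation they can be maintained as a running state, which is what enables constant-memory autoregressive inference.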
Problem

Research questions and friction points this paper is trying to address.

Address quadratic complexity of self-attention in Transformers
Survey linear and sparse efficient attention mechanisms
Analyze integration of efficient attention in large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear attention methods reduce complexity via kernel approximations.
Sparse attention techniques limit computation to token subsets.
Hybrid designs combine local and global attention components.
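The fixed-pattern flavor of sparse attention listed above can be illustrated with a sliding-window variant, where each query attends only to nearby keys, reducing cost from O(n^2) to O(n * w). This is a hedged sketch of the general idea; the window size and per-query softmax here are illustrative choices, not specifics taken from the survey.

```python
import numpy as np

def local_window_attention(Q, K, V, window=2):
    """Fixed-pattern sparse attention sketch: query i attends only to
    keys within +/- `window` positions, i.e., O(n * window) total work."""
    n, d = Q.shape
    out = np.zeros((n, V.shape[1]))
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = Q[i] @ K[lo:hi].T / np.sqrt(d)   # scaled dot-product, local only
        w = np.exp(scores - scores.max())          # numerically stable softmax
        w /= w.sum()
        out[i] = w @ V[lo:hi]                      # convex combination of local values
    return out
```

Hybrid designs of the kind mentioned above typically interleave such local patterns with a small number of global tokens or full-attention layers to restore long-range information flow.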
Yutao Sun
Tsinghua University
Natural Language Processing, Machine Learning
Zhenyu Li
Tsinghua University
Yike Zhang
Tsinghua University
Tengyu Pan
Tsinghua University
Bowen Dong
Tsinghua University
Yuyi Guo
Tsinghua University
Jianyong Wang
Tsinghua University