Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention

📅 2025-02-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Standard attention incurs O(n²) computational cost, which becomes prohibitive for long-context modeling. To address this, the authors propose NSA, a Natively trainable Sparse Attention mechanism built on dynamic hierarchical sparsity: coarse-grained compression of the global context coupled with fine-grained selection of the most relevant tokens. An arithmetic-intensity-balanced, hardware-aligned kernel design turns the theoretical sparsity into real speedups on modern accelerators, and because the mechanism is differentiable end to end, sparsity patterns are learned during pretraining rather than imposed on a dense-pretrained model afterward. On 64k-length sequences, NSA yields substantial speedups across the forward pass, backward pass, and autoregressive decoding, while models pretrained with NSA match or exceed full-attention baselines on general language understanding, long-context question answering, and reasoning benchmarks, achieving both high efficiency and strong representational capacity.
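To make the hierarchical design concrete, the sketch below combines the paper's three branches (compressed, selected, and sliding-window keys/values) through per-branch gates. This is a minimal single-query sketch under assumed shapes, with random stand-in gate values in place of the paper's learned gate network; it is not the fused Triton kernel described in the paper.

```python
import torch
import torch.nn.functional as F

def attend(q, K, V):
    # Standard scaled dot-product attention for a single query vector.
    # q: (d,), K and V: (m, d) -> output: (d,)
    scores = K @ q / K.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ V

def nsa_combine(q, branches, gates):
    # branches: {"cmp": (K, V), "slc": (K, V), "win": (K, V)}, one remapped
    # key/value set per branch; gates: matching scalars in [0, 1] that a
    # learned gate (e.g. an MLP + sigmoid over q) would produce.
    return sum(g * attend(q, *branches[name]) for name, g in gates.items())

d = 16
q = torch.randn(d)
branches = {name: (torch.randn(8, d), torch.randn(8, d))
            for name in ("cmp", "slc", "win")}
gates = {name: torch.rand(()) for name in branches}  # stand-in gate values
out = nsa_combine(q, branches, gates)                # (d,) combined output
```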

📝 Abstract
Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses significant challenges. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively trainable Sparse Attention mechanism that integrates algorithmic innovations with hardware-aligned optimizations to achieve efficient long-context modeling. NSA employs a dynamic hierarchical sparse strategy, combining coarse-grained token compression with fine-grained token selection to preserve both global context awareness and local precision. Our approach advances sparse attention design with two key innovations: (1) we achieve substantial speedups through arithmetic-intensity-balanced algorithm design, with implementation optimizations for modern hardware; (2) we enable end-to-end training, reducing pretraining computation without sacrificing model performance. As shown in Figure 1 of the paper, experiments show that the model pretrained with NSA maintains or exceeds Full Attention models across general benchmarks, long-context tasks, and instruction-based reasoning. Meanwhile, NSA achieves substantial speedups over Full Attention on 64k-length sequences across decoding, forward propagation, and backward propagation, validating its efficiency throughout the model lifecycle.
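The coarse-to-fine interplay of compression and selection can be illustrated with a toy sketch: block summaries score regions of the KV cache, and only the top-scoring blocks are attended at full token granularity. Mean pooling stands in for the paper's learned compression MLP, and all names, shapes, and defaults here are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def sparse_select_attention(q, K, V, block_size=64, top_k=4):
    # q: (d,), K and V: (n, d); n is assumed divisible by block_size.
    n, d = K.shape
    # Coarse stage: summarize each KV block (mean pooling as a stand-in
    # for NSA's learned compression) and score blocks against the query.
    K_cmp = K.view(n // block_size, block_size, d).mean(dim=1)
    block_scores = K_cmp @ q / d ** 0.5
    keep = block_scores.topk(min(top_k, len(block_scores))).indices
    # Fine stage: expand the winning blocks back to token indices and run
    # exact attention over only those tokens.
    idx = (keep[:, None] * block_size + torch.arange(block_size)).flatten()
    weights = F.softmax(K[idx] @ q / d ** 0.5, dim=-1)
    return weights @ V[idx]

q = torch.randn(32)
K, V = torch.randn(512, 32), torch.randn(512, 32)
out = sparse_select_attention(q, K, V)  # attends to 4 of 8 KV blocks
```

In the paper's fused kernel, this block selection is additionally shared across the query heads of a GQA group, so only the chosen KV blocks are ever loaded from memory.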
Problem

Research questions and friction points this paper is trying to address.

Efficient long-context modeling
Hardware-aligned sparse attention
End-to-end trainable mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware-aligned sparse attention
Dynamic hierarchical sparse strategy
End-to-end trainable design
👥 Authors
Jingyang Yuan
Peking University
LLM · AI for Science
Huazuo Gao
DeepSeek-AI
Damai Dai
DeepSeek-AI
Junyu Luo
Key Laboratory for Multimedia Information Processing, Peking University; PKU-Anker LLM Lab
Liang Zhao
DeepSeek-AI
Zhengyan Zhang
Tsinghua University
Natural Language Processing · Large Language Models
Zhenda Xie
DeepSeek-AI
Y. X. Wei
DeepSeek-AI
Lean Wang
Peking University
Large Language Models
Zhiping Xiao
Postdoc, University of Washington
CSE · DM · ML
Yuqing Wang
DeepSeek-AI
Chong Ruan
DeepSeek-AI
Ming Zhang
Key Laboratory for Multimedia Information Processing, Peking University; PKU-Anker LLM Lab
Wenfeng Liang
DeepSeek-AI
Wangding Zeng
DeepSeek-AI