Stem: Rethinking Causal Information Flow in Sparse Attention

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the quadratic complexity of self-attention, which hinders the scalability of large language models in long-context scenarios—particularly during the prefill phase. Existing sparse attention methods often overlook the cumulative influence of early tokens on subsequent information flow within the causal structure. To overcome this limitation, we propose Stem, a plug-and-play sparse attention module that incorporates a position-aware top-k token selection strategy based on positional decay and introduces an output-aware importance scoring mechanism to prioritize tokens with significant impact on the final output. This design aligns with the information accumulation property inherent in causal attention and remains fully compatible with standard Transformer architectures. Experiments demonstrate that Stem substantially reduces computational overhead and prefill latency while achieving higher accuracy than current sparse attention approaches.

📝 Abstract
The quadratic computational complexity of self-attention remains a fundamental bottleneck for scaling Large Language Models (LLMs) to long contexts, particularly during the pre-filling phase. In this paper, we rethink the causal attention mechanism from the perspective of information flow. Due to causal constraints, tokens at initial positions participate in the aggregation of every subsequent token. However, existing sparse methods typically apply a uniform top-k selection across all token positions within a layer, ignoring the cumulative dependency of token information inherent in causal architectures. To address this, we propose Stem, a novel, plug-and-play sparsity module aligned with information flow. First, Stem employs the Token Position-Decay strategy, applying position-dependent top-k within each layer to retain initial tokens for recursive dependencies. Second, to preserve information-rich tokens, Stem utilizes the Output-Aware Metric. It prioritizes high-impact tokens based on approximate output magnitude. Extensive evaluations demonstrate that Stem achieves superior accuracy with reduced computation and pre-filling latency.
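The abstract describes two ideas: a position-dependent top-k budget that always retains initial tokens (since causal masking makes them inputs to every later position), and an output-aware score that ranks the remaining keys by their approximate contribution to the output. The sketch below illustrates that selection pattern in NumPy; the function name, the fixed budgets `k` and `n_init`, and the specific score (attention weight times value-vector norm) are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

def stem_like_selection(scores, values, k=8, n_init=4):
    """Illustrative sparse-selection sketch (assumed details, not Stem's
    exact algorithm): for each query row, always keep the first n_init
    keys, then fill the remaining budget using an output-aware score
    that approximates |attention weight| * ||value vector||."""
    T = scores.shape[0]
    keep = np.zeros((T, T), dtype=bool)
    v_norm = np.linalg.norm(values, axis=-1)   # per-key value magnitude
    for t in range(T):
        n_kept = min(n_init, t + 1)
        keep[t, :n_kept] = True                # retain initial tokens
        # softmax over the causal prefix [0..t]
        w = np.exp(scores[t, :t + 1] - scores[t, :t + 1].max())
        w /= w.sum()
        impact = w * v_norm[:t + 1]            # output-aware metric
        impact[:n_kept] = -np.inf              # already kept
        budget = max(0, min(k, t + 1) - n_kept)
        if budget > 0:
            top = np.argsort(impact)[-budget:] # highest-impact keys
            keep[t, top] = True
    return keep
```

Each query row ends up attending to exactly `min(k, t + 1)` keys: the initial "sink" tokens plus the highest-scoring remainder, so early tokens stay in the information flow while the overall cost stays linear in the budget rather than quadratic in context length.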
Problem

Research questions and friction points this paper is trying to address.

causal attention
sparse attention
information flow
computational complexity
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Attention
Causal Information Flow
Token Position-Decay
Output-Aware Metric
Plug-and-Play Sparsity
Authors
Lin Niu (Tencent)
Xin Luo (University of Science and Technology of China)
Linchuan Xie (Tencent)
Yifu Sun (Tencent)
Guanghua Yu (Tencent)
Jianchen Zhu (Tencent)
S Kevin Zhou (University of Science and Technology of China)