🤖 AI Summary
This work addresses the quadratic complexity of self-attention, which hinders the scalability of large language models in long-context scenarios, particularly during the prefill phase. Existing sparse attention methods often overlook the cumulative influence of early tokens on subsequent information flow within the causal structure. To overcome this limitation, we propose Stem, a plug-and-play sparse attention module that incorporates a position-aware top-k token selection strategy based on positional decay, together with an output-aware importance scoring mechanism that prioritizes tokens with significant impact on the final output. This design aligns with the information-accumulation property inherent in causal attention and remains fully compatible with standard Transformer architectures. Experiments demonstrate that Stem substantially reduces computational overhead and prefill latency while achieving higher accuracy than current sparse attention approaches.
📝 Abstract
The quadratic computational complexity of self-attention remains a fundamental bottleneck for scaling Large Language Models (LLMs) to long contexts, particularly during the pre-filling phase. In this paper, we rethink the causal attention mechanism from the perspective of information flow. Due to causal constraints, tokens at initial positions participate in the aggregation of every subsequent token. However, existing sparse methods typically apply a uniform top-k selection across all token positions within a layer, ignoring the cumulative dependency of token information inherent in causal architectures. To address this, we propose Stem, a novel, plug-and-play sparsity module aligned with information flow. First, Stem employs a Token Position-Decay strategy that applies a position-dependent top-k budget within each layer, retaining initial tokens to preserve their recursive dependencies. Second, to preserve information-rich tokens, Stem utilizes an Output-Aware Metric, which prioritizes high-impact tokens based on their approximate output magnitude. Extensive evaluations demonstrate that Stem achieves superior accuracy with reduced computation and pre-filling latency.
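To make the two ideas concrete, the following is a minimal NumPy sketch of one plausible reading of the mechanism described above, not the paper's actual implementation. It assumes two things the abstract only outlines: that the positional strategy amounts to always retaining the first `n_init` tokens for every query (since they feed every later aggregation), and that the output-aware importance of a key is approximated as `|attention logit| * ||value vector||`, a proxy for the magnitude of that token's contribution to the output. The function name and parameters (`sparse_causal_attention`, `k_top`, `n_init`) are illustrative, not from the paper.

```python
import numpy as np

def sparse_causal_attention(Q, K, V, k_top=8, n_init=4):
    """Hypothetical sketch of position-aware, output-aware sparse attention.

    For each query position i (under the causal mask):
      1. Always retain the first n_init "initial" tokens, reflecting their
         cumulative role in the information flow of causal attention.
      2. Fill the remaining budget with the top-k_top keys ranked by an
         output-aware score |logit| * ||v||, an approximation of each
         token's contribution magnitude to the attention output.
    """
    T, d = Q.shape
    out = np.zeros_like(V)
    for i in range(T):
        logits = Q[i] @ K[: i + 1].T / np.sqrt(d)      # causal: keys 0..i
        keep = set(range(min(n_init, i + 1)))          # positional retention
        score = np.abs(logits) * np.linalg.norm(V[: i + 1], axis=1)
        rest = [j for j in np.argsort(-score) if j not in keep]
        keep |= set(rest[:k_top])                      # output-aware top-k
        idx = np.array(sorted(keep))
        w = np.exp(logits[idx] - logits[idx].max())    # softmax over kept keys
        w /= w.sum()
        out[i] = w @ V[idx]
    return out
```

When `n_init + k_top` covers all positions, the selection keeps every key and the result coincides with dense causal attention, which makes the sketch easy to sanity-check against a dense reference.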