FlashPrefill: Instantaneous Pattern Discovery and Thresholding for Ultra-Fast Long-Context Prefilling

📅 2026-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the severe computational bottleneck that large language models face during the prefilling stage of long-context processing, which stems from the quadratic complexity of attention. The authors propose a dynamic sparse attention method that instantly identifies vertical, slash (diagonal), and block-wise sparsity patterns in attention maps. A dynamic thresholding mechanism eliminates the need to sort or accumulate attention scores and suppresses the long-tail distribution of attention weights, enabling stable acceleration across context lengths. Combining fast block search with threshold-based pruning, the method achieves a 27.78× speedup on 256K-token sequences while retaining a 1.71× speedup even on short 4K-token contexts, significantly outperforming existing sparse attention approaches.
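The key efficiency claim is that a threshold comparison replaces the sorting or cumulative-sum passes used by top-k/top-p block selection. A minimal sketch of such threshold-based pruning is shown below, under the assumption (not stated in the paper) that a block is kept when its post-softmax weight would be at least a fraction `alpha` of the row's maximum weight; the function name and threshold rule are illustrative only.

```python
import numpy as np

def threshold_prune_blocks(block_scores: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Keep attention blocks whose coarse score clears a dynamic threshold.

    block_scores: (num_query_blocks, num_key_blocks) pre-softmax estimates,
    e.g. from mean-pooled Q @ K^T. Returns a boolean keep-mask.
    Hypothetical rule: keep block j if exp(s_j - s_max) >= alpha, i.e. its
    softmax weight is at least alpha times the row's maximum weight.
    """
    # Per-query-row maximum; blocks far below it form the long tail.
    row_max = block_scores.max(axis=-1, keepdims=True)
    # A single O(n) comparison per row -- no sorting and no cumulative
    # accumulation, unlike top-k / top-p selection.
    return block_scores >= row_max + np.log(alpha)
```

In log space the comparison `s_j >= s_max + log(alpha)` is exactly the relative-weight test, so the threshold adapts to each query row without normalizing the scores first.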

📝 Abstract
Long-context modeling is a pivotal capability for Large Language Models, yet the quadratic complexity of attention remains a critical bottleneck, particularly during the compute-intensive prefilling phase. While various sparse attention mechanisms have been explored, they typically suffer from either significant search latency or insufficient sparsity. In this paper, we propose FlashPrefill, a framework enabling ultra-fast prefilling via instantaneous pattern discovery and thresholding. FlashPrefill leverages a fast block-searching technique to simultaneously locate dynamic vertical, slash, and block-sparse attention patterns. Crucially, it introduces a dynamic thresholding mechanism that bypasses the prohibitive overhead of sorting or accumulating attention scores while effectively eliminating the long-tail distribution to enhance sparsity. Extensive evaluations demonstrate that FlashPrefill achieves a substantial leap in efficiency, delivering an unprecedented 27.78x speedup on 256K sequences. Notably, unlike existing methods that incur efficiency degradation on shorter contexts, FlashPrefill maintains a 1.71x speedup even at a 4K context length, demonstrating its robustness and practical utility across varying sequence scales.
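The abstract's "fast block-searching technique" locates vertical and slash lines in the attention map. One common way to do this (the sketch below is an assumption in the spirit of the method, not the paper's algorithm; all names and parameters are illustrative) is to probe attention for a small set of queries, then rank key columns by total mass (vertical lines) and diagonal offsets by total mass (slash lines):

```python
import numpy as np

def find_vertical_slash(attn_sample: np.ndarray, k_vert: int = 4, k_slash: int = 4):
    """Find dominant vertical (column) and slash (diagonal) lines in a
    sampled attention map.

    attn_sample: (q, k) attention weights for a small probe set of queries.
    Returns (top key-column indices, top diagonal offsets j - i).
    """
    q, k = attn_sample.shape
    # Vertical lines: key columns that attract mass from many queries.
    col_mass = attn_sample.sum(axis=0)
    vert = np.argsort(col_mass)[-k_vert:][::-1]
    # Slash lines: fixed offsets j - i with consistently high mass.
    offsets = np.arange(k)[None, :] - np.arange(q)[:, None]      # (q, k)
    diag_mass = np.bincount((offsets + q - 1).ravel(),           # shift to >= 0
                            weights=attn_sample.ravel())
    slash = np.argsort(diag_mass)[-k_slash:][::-1] - (q - 1)
    return vert, slash
```

The returned column indices and diagonal offsets would then define the sparse block layout for the full-length prefill pass, with the thresholding step pruning any remaining low-mass blocks.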
Problem

Research questions and friction points this paper is trying to address.

long-context modeling
attention complexity
prefilling bottleneck
sparse attention
quadratic complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

FlashPrefill
sparse attention
dynamic thresholding
long-context prefilling
pattern discovery