🤖 AI Summary
Transformer attention mechanisms suffer from $O(n^2)$ computational and memory complexity, severely limiting their scalability to long sequences.
Method: We propose a graph-structured sparse attention paradigm: tokens are modeled as graph nodes and attention connections as edges, reframing long-context modeling as a sparse graph computation problem. Our approach integrates graph algorithms, adjacency-driven dynamic scheduling, custom CUDA kernels, and sparse mask optimization.
Contribution/Results: We establish the first unified framework bridging attention and graph computation, and design a theoretically optimal workload model enabling strictly sparse (non-heuristic) attention with provable efficiency–accuracy trade-off guarantees. Evaluated on a single A100 GPU, our method supports sequences up to 160 million tokens—orders of magnitude longer than prior work. Compared to FlashAttention, it achieves significantly faster long-sequence inference and substantially reduced GPU memory consumption.
📝 Abstract
Transformers have demonstrated great success in numerous domains, including natural language processing and bioinformatics. This success stems from their use of the attention mechanism to represent and propagate pairwise interactions between individual tokens of sequential data. However, the primary limitation of this operation is its quadratic memory and time complexity with respect to the input's context length, i.e., the length of the sequence over which interactions must be captured. This significantly limits the sequence lengths on which these models can perform inference. Extensive research has sought to reduce the number of pairwise interactions to sub-quadratic in the context length by introducing sparsity into the attention mechanism through the development of sparse attention masks. However, efficient implementations that achieve "true sparsity" are lacking. In this work, we address this issue by proposing a graph-computing view of attention in which tokens are treated as the nodes of a graph and the attention mask determines its edges. Using this view, we develop graph-processing algorithms to implement the attention mechanism. Both theoretically and empirically, we demonstrate that our algorithms perform only the needed computations, i.e., they are work-optimal. We also conduct extensive experiments with popular attention masks to explore the impact of sparsity on execution time and achievable context length. Our experiments demonstrate significant speedups over state-of-the-art attention implementations such as FlashAttention at large sequence lengths. We also demonstrate that our algorithms achieve extremely long sequence lengths of up to 160 million tokens on a single NVIDIA A100 GPU (SXM4 80GB).
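The graph view described in the abstract can be illustrated with a minimal NumPy sketch (an illustrative reconstruction, not the paper's actual CUDA implementation): each token is a node, the sparse mask supplies an edge list, and attention scores are computed only along those edges, so the work scales with the number of edges |E| rather than n².

```python
import numpy as np

def sparse_attention(Q, K, V, edges):
    """Attention restricted to the edges of a sparse mask graph.

    Q, K, V: (n, d) arrays. `edges` is a list of (i, j) pairs meaning
    query token i attends to key token j. Work is O(|E| * d) instead
    of the dense O(n^2 * d). (Hypothetical helper for illustration.)
    """
    n, d = Q.shape
    out = np.zeros_like(V)
    # Build an adjacency list: for each query node, its key neighbors.
    nbrs = {}
    for i, j in edges:
        nbrs.setdefault(i, []).append(j)
    for i, js in nbrs.items():
        # Scores over this node's neighbors only, scaled by sqrt(d).
        scores = Q[i] @ K[js].T / np.sqrt(d)
        # Numerically stable softmax restricted to the edge set.
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ V[js]
    return out
```

With the full edge set (all n² pairs) this reduces to ordinary dense attention; with a sparse mask such as a sliding window or block-diagonal pattern, only the retained edges are ever touched, which is the "true sparsity" the abstract refers to.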