Longer Attention Span: Increasing Transformer Context Length with Sparse Graph Processing Techniques

📅 2025-01-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Transformer attention mechanisms suffer from $O(n^2)$ computational and memory complexity, severely limiting their scalability to long sequences. Method: We propose a graph-structured sparse attention paradigm: tokens are modeled as graph nodes and attention connections as edges, reframing long-context modeling as a sparse graph computation problem. Our approach integrates graph algorithms, adjacency-driven dynamic scheduling, custom CUDA kernels, and sparse mask optimization. Contribution/Results: We establish the first unified framework bridging attention and graph computation, and design a theoretically optimal workload model enabling strictly sparse (non-heuristic) attention with provable efficiency–accuracy trade-off guarantees. Evaluated on a single A100 GPU, our method supports sequences up to 160 million tokens—orders of magnitude longer than prior work. Compared to FlashAttention, it achieves significantly faster long-sequence inference and substantially reduced GPU memory consumption.
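The summary's core reframing is that a sparse attention mask is just the adjacency structure of a graph over tokens. As a minimal sketch (not the paper's implementation), a sliding-window mask can be stored as an explicit edge list, whose size grows as O(n·w) rather than the O(n²) entries of a dense mask; the function name and window convention here are illustrative assumptions:

```python
import numpy as np

def sliding_window_edges(n, w):
    """Edge list for a sliding-window attention mask (illustrative sketch).

    Tokens are graph nodes; an edge (s, t) means token t attends to token s.
    Token t attends to tokens max(0, t - w + 1)..t, so the graph has
    O(n * w) edges instead of the O(n^2) entries of a dense mask.
    """
    edges = [(s, t)
             for t in range(n)
             for s in range(max(0, t - w + 1), t + 1)]
    return np.array(edges)

edges = sliding_window_edges(8, 3)
print(len(edges))  # 21 edges, versus 64 entries in a dense 8x8 mask
```

Only these listed pairs ever need to be scored, which is what "strictly sparse (non-heuristic)" attention refers to: work proportional to the number of edges actually present.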

📝 Abstract
Transformers have demonstrated great success in numerous domains including natural language processing and bioinformatics. This success stems from the use of the attention mechanism by these models in order to represent and propagate pairwise interactions between individual tokens of sequential data. However, the primary limitation of this operation is its quadratic memory and time complexity in relation to the input's context length, i.e., the length of the sequence over which interactions need to be captured. This significantly limits the length of sequences that can be processed by these models. Extensive research has been conducted to reduce the number of pairwise interactions to sub-quadratic in the context length by introducing sparsity into the attention mechanism through the development of sparse attention masks. However, efficient implementations that achieve "true sparsity" are lacking. In this work, we address this issue by proposing a graph computing view of attention where tokens are perceived as nodes of the graph and the attention mask determines the edges of the graph. Using this view, we develop graph processing algorithms to implement the attention mechanism. Both theoretically and empirically, we demonstrate that our algorithms only perform the needed computations, i.e., they are work optimal. We also perform extensive experimentation using popular attention masks to explore the impact of sparsity on execution time and achievable context length. Our experiments demonstrate significant speedups in execution times compared to state-of-the-art attention implementations such as FlashAttention for large sequence lengths. We also demonstrate that our algorithms achieve extremely long sequence lengths of up to 160 million tokens on a single NVIDIA A100 GPU (SXM4 80GB).
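The abstract's graph view, where tokens are nodes and the mask supplies the edges, can be sketched in a few lines of NumPy. This is a toy reference showing the semantics only, not the paper's work-optimal GPU kernels; the function name, the (src, dst) edge convention, and the use of `np.add.at`/`np.maximum.at` for segment reductions are my own illustrative choices:

```python
import numpy as np

def sparse_attention(Q, K, V, edges):
    """Attention restricted to graph edges (illustrative sketch).

    Tokens are nodes; an edge (src, dst) means token dst attends to token src.
    Only the listed pairs are scored, so the work is O(|E|) in the number of
    edges rather than O(n^2) in the sequence length.
    """
    n, d = Q.shape
    src = edges[:, 0]  # keys being attended to
    dst = edges[:, 1]  # querying tokens
    # Score only the edges present in the mask.
    scores = np.einsum("ed,ed->e", Q[dst], K[src]) / np.sqrt(d)
    # Segment softmax over each destination node's incoming edges.
    m = np.full(n, -np.inf)
    np.maximum.at(m, dst, scores)          # per-row max, for numerical stability
    w = np.exp(scores - m[dst])
    z = np.zeros(n)
    np.add.at(z, dst, w)                   # per-row normalizer
    out = np.zeros_like(V)
    np.add.at(out, dst, (w / z[dst])[:, None] * V[src])
    return out
```

With a causal mask expressed as an edge list, this matches the usual dense masked softmax attention exactly, which is the sense in which a sparse mask and a token graph are the same object.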
Problem

Research questions and friction points this paper is trying to address.

Extend transformer context length
Optimize sparse attention mechanisms
Achieve efficient, work-optimal computations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph computing view of attention
Work optimal graph processing algorithms
Achieves extremely long sequence lengths
Nathaniel Tomczak
Computer and Data Sciences, Case Western Reserve University, Cleveland, OH, U.S.A.
Sanmukh Kuppannagari
Case Western Reserve University
Parallel Computing · AI Acceleration · Combinatorial Optimization · AI4Science