DistrAttention: An Efficient and Flexible Self-Attention Mechanism on Modern GPUs

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Transformer self-attention is hindered by its O(n²) computational complexity, limiting scalability to long sequences. Existing optimization methods often compromise full-context coverage or architectural flexibility. This paper introduces DistrAttention: the first method to combine locality-sensitive hashing (LSH)-based clustering with a tunable block-wise computation framework—operating on grouped embedding dimensions—to achieve efficient approximation while preserving complete contextual information. We further propose a lightweight sampling-and-fusion strategy and jointly optimize block size to align with modern GPU memory hierarchies. DistrAttention is fully compatible with FlashAttention-2. Experiments demonstrate a 37% speedup in self-attention computation over FlashAttention-2, superior accuracy–latency trade-offs on Vision Transformers, and the lowest inference latency on Llama3-1B with only a 1% accuracy drop.

📝 Abstract
The Transformer architecture has revolutionized deep learning, delivering state-of-the-art performance in areas such as natural language processing, computer vision, and time series prediction. However, its core component, self-attention, has quadratic time complexity in the input sequence length, which hinders the scalability of Transformers. Existing approaches to optimizing self-attention either discard full-contextual information or lack flexibility. In this work, we design DistrAttention, an efficient and flexible self-attention mechanism that retains the full context. DistrAttention achieves this by grouping data on the embedding dimensionality, usually referred to as $d$. We realize DistrAttention with a lightweight sampling-and-fusion method that exploits locality-sensitive hashing to group similar data. A block-wise grouping framework is further designed to limit the errors introduced by locality-sensitive hashing. By optimizing the selection of block sizes, DistrAttention can be easily integrated with FlashAttention-2, achieving high performance on modern GPUs. We evaluate DistrAttention with extensive experiments. The results show that our method is 37% faster than FlashAttention-2 at computing self-attention. In ViT inference, DistrAttention is the fastest and the most accurate among approximate self-attention mechanisms. In Llama3-1B, DistrAttention still achieves the lowest inference time with only a 1% accuracy loss.
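The abstract's core idea, grouping along the embedding dimension $d$ via locality-sensitive hashing and fusing similar dimensions before the score computation, can be pictured with a small sketch. Everything here is an illustrative assumption, not the paper's implementation: the sign-pattern hash, the sum/mean fusion rule, and all function names are hypothetical.

```python
import numpy as np

def lsh_group_dims(K, n_bits=4, seed=0):
    # Hash each embedding dimension (a column of K) with random hyperplane
    # projections over the token axis; columns that share a sign pattern
    # are assumed similar and land in the same group.
    n, d = K.shape
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n, n_bits))
    signs = (K.T @ planes) > 0                 # (d, n_bits) sign pattern per dim
    codes = signs @ (1 << np.arange(n_bits))   # pack bits into a bucket id
    buckets = {}
    for dim, code in enumerate(codes):
        buckets.setdefault(int(code), []).append(dim)
    return list(buckets.values())

def approx_attention(Q, K, V, groups):
    # Fuse grouped dimensions: Q columns by sum, K columns by mean, so that
    # sum_j q_j * k_j within a group is approximated by (sum_j q_j) * k_mean.
    # Attention then runs on the reduced d' < d representation.
    d = Q.shape[1]
    Qr = np.stack([Q[:, g].sum(axis=1) for g in groups], axis=1)
    Kr = np.stack([K[:, g].mean(axis=1) for g in groups], axis=1)
    scores = Qr @ Kr.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V
```

When the grouped columns of K are exactly equal, the fused product recovers the exact scores; with merely similar columns the error is bounded by how tight the LSH buckets are, which is what the paper's block-wise framework is said to control.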
Problem

Research questions and friction points this paper is trying to address.

Reduces quadratic complexity of self-attention in Transformers
Maintains full-context information while optimizing self-attention
Enhances GPU performance with efficient block-wise grouping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Groups data on embedding dimensionality for efficiency
Uses locality-sensitive hashing for lightweight sampling
Integrates with FlashAttention-2 for GPU performance
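The block-wise framework and FlashAttention-2 compatibility mentioned above can be sketched as follows. This is a rough NumPy model, not the paper's GPU kernel: keys and values are processed in blocks (whose size would be tuned to the GPU memory hierarchy), an optional per-block grouping hook can shrink $d$, and the softmax is accumulated online across blocks in the FlashAttention style. The `group_fn` hook and all names are illustrative assumptions.

```python
import numpy as np

def blockwise_attention(Q, K, V, block_size=64, group_fn=None):
    # Process K/V in blocks with an online (streaming) softmax, as in
    # FlashAttention; group_fn, if given, maps (Q, K_block) to reduced-d
    # versions before the score computation.
    n, d = Q.shape
    out = np.zeros((n, V.shape[1]))
    row_max = np.full(n, -np.inf)   # running max of scores per query row
    row_sum = np.zeros(n)           # running softmax denominator per row
    for start in range(0, K.shape[0], block_size):
        Kb, Vb = K[start:start + block_size], V[start:start + block_size]
        Qb, Kb = (Q, Kb) if group_fn is None else group_fn(Q, Kb)
        s = Qb @ Kb.T / np.sqrt(d)               # scores for this block
        new_max = np.maximum(row_max, s.max(axis=1))
        scale = np.exp(row_max - new_max)        # rescale old accumulators
        p = np.exp(s - new_max[:, None])
        row_sum = row_sum * scale + p.sum(axis=1)
        out = out * scale[:, None] + p @ Vb
        row_max = new_max
    return out / row_sum[:, None]
```

With `group_fn=None` this reproduces exact attention block by block; plugging in an LSH-based grouping per block is one way to read how the paper limits approximation error while keeping the FlashAttention-2 tiling structure.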