🤖 AI Summary
To address the high computational cost of attention over long contexts, this paper proposes a semantic hashing-based sparse attention mechanism. It formulates key-token identification as a learnable hash recommendation task in Hamming space, mapping keys and queries to binary hash codes via differentiable projections and leveraging bitwise operations for efficient retrieval of highly relevant tokens. This work is the first to deeply integrate semantic similarity modeling with hash-based retrieval, enabling large-scale token pruning without significant quality degradation. Evaluated on Llama-3.1-8B with LongBench, it achieves a 32× sparsity ratio with only a 0.6-point average performance drop. Inference speed improves by 3–6× over LightLLM and 2.5–4.5× over gpt-fast on an NVIDIA L4 GPU, while maintaining balanced efficiency, accuracy, and memory footprint (32 bits/token).
📄 Abstract
Utilizing longer contexts is increasingly essential to power better AI systems. However, the cost of attending to long contexts is high due to the softmax computation involved. While scaled dot-product attention (SDPA) exhibits token sparsity, with only a few pivotal tokens significantly contributing to attention, leveraging this sparsity effectively remains an open challenge. Previous methods either suffer from model degradation or require considerable additional resources. We propose HashAttention, a principled approach that casts pivotal-token identification as a recommendation problem. Given a query, HashAttention encodes keys and queries in Hamming space, capturing the required semantic similarity using learned mapping functions. HashAttention efficiently identifies pivotal tokens for a given query in this Hamming space using bitwise operations, and only these pivotal tokens are used for attention computation, significantly improving overall attention efficiency. HashAttention can reduce the number of tokens used by a factor of $1/32\times$ for the Llama-3.1-8B model with LongBench, keeping average quality loss within 0.6 points, while using only 32 bits per token of auxiliary memory. At $32\times$ sparsity, HashAttention is $3{-}6\times$ faster than LightLLM and $2.5{-}4.5\times$ faster than gpt-fast on an NVIDIA L4 GPU.
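To make the retrieval step concrete, here is a minimal NumPy sketch of Hamming-space pivotal-token selection. It is not the paper's implementation: a fixed random sign projection stands in for HashAttention's learned mapping functions, and the dimensions (`d`, `n_keys`, `top_k`) are illustrative. It does reflect the stated budget of 32 bits of auxiliary memory per token, with similarity computed via bitwise XOR and popcount.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_keys, n_bits, top_k = 64, 1024, 32, 32

# Stand-in for the learned hash mapping: a random projection plus sign.
proj = rng.standard_normal((d, n_bits))

keys = rng.standard_normal((n_keys, d))
query = rng.standard_normal(d)

# Binary codes in Hamming space.
key_bits = keys @ proj > 0       # (n_keys, 32) boolean
query_bits = query @ proj > 0    # (32,)

# Pack each code into 4 bytes: 32 bits/token of auxiliary memory.
key_codes = np.packbits(key_bits, axis=1)   # (n_keys, 4) uint8
query_code = np.packbits(query_bits)        # (4,) uint8

# Hamming distance = popcount(key XOR query), via a 256-entry byte table.
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)
dists = POPCOUNT[key_codes ^ query_code].sum(axis=1)

# Attend only over the top-k closest (most semantically similar) tokens.
pivotal = np.argsort(dists)[:top_k]
```

Dense attention would then be computed over `keys[pivotal]` only, rather than all `n_keys` tokens; the byte-table popcount is one simple way to realize the bitwise retrieval on CPU.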