HATA: Trainable and Hardware-Efficient Hash-Aware Top-k Attention for Scalable Large Model Inference

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In LLM inference, attention computation remains a critical bottleneck, and existing Top-k sparsification methods struggle to balance efficiency and accuracy. This paper proposes a hash-aware, trainable Top-k attention mechanism: for the first time, it integrates a lightweight learnable hash function into the Top-k selection process, using binary hash codes to efficiently approximate the relative similarity ordering between queries and keys and thereby avoid costly absolute attention-score estimation. The method also jointly optimizes KV-cache access. Evaluated across multiple mainstream LLMs and downstream tasks, it matches the accuracy of the dense baseline while accelerating inference by up to 7.2x, outperforming prior Top-k approaches, and it generalizes well without architectural modification or model fine-tuning.

📝 Abstract
Large Language Models (LLMs) have emerged as a pivotal research area, yet the attention module remains a critical bottleneck in LLM inference, even with techniques like KVCache to mitigate redundant computations. While various top-$k$ attention mechanisms have been proposed to accelerate LLM inference by exploiting the inherent sparsity of attention, they often struggle to strike a balance between efficiency and accuracy. In this paper, we introduce HATA (Hash-Aware Top-$k$ Attention), a novel approach that systematically integrates low-overhead learning-to-hash techniques into the top-$k$ attention process. Different from existing top-$k$ attention methods, which are devoted to seeking an absolute estimation of the qk score, typically at great cost, HATA maps queries and keys into binary hash codes and acquires the relative qk score order at quite low cost, which is sufficient for realizing top-$k$ attention. Extensive experiments demonstrate that HATA achieves up to 7.2$\times$ speedup compared to vanilla full attention while maintaining model accuracy. In addition, HATA outperforms the state-of-the-art top-$k$ attention methods in both accuracy and efficiency across multiple mainstream LLM models and diverse tasks. HATA is open source at https://github.com/gpzlx1/HATA.
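The core idea in the abstract, that a cheap relative ordering of qk scores suffices for top-$k$ selection, can be sketched as follows. This is a minimal illustration, not the paper's implementation: a random sign projection stands in for HATA's learned hash function, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, bits, k = 64, 1024, 128, 32

# Keys already in the KV cache, plus one incoming query.
keys = rng.standard_normal((n, d))
query = rng.standard_normal(d)

# Stand-in for HATA's learned hash: a fixed random sign projection
# (the paper *trains* this mapping; here it is untrained).
proj = rng.standard_normal((d, bits))
key_codes = (keys @ proj) > 0      # (n, bits) binary codes, precomputable
q_code = (query @ proj) > 0        # (bits,) code for the query

# Hamming distance between codes gives a cheap *relative* ordering
# of qk similarity -- no exact attention scores needed for ranking.
hamming = (key_codes != q_code).sum(axis=1)
topk_idx = np.argpartition(hamming, k)[:k]   # k most similar keys

# Exact attention is then computed only over the selected keys.
scores = keys[topk_idx] @ query / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()
```

The efficiency argument is that comparing binary codes (XOR plus popcount over a few machine words) is far cheaper than a full inner product per key, and the codes for cached keys are computed once and reused across decoding steps.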
Problem

Research questions and friction points this paper is trying to address.

Addresses attention module bottleneck in LLM inference
Balances efficiency and accuracy in top-k attention
Integrates low-cost learning-to-hash for relative qk scoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates learning-to-hash into Top-k attention
Uses binary hash codes for low-cost scoring
Achieves speedup while maintaining model accuracy
Authors
Ping Gong (University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center; Huawei Technologies)
Jiawei Yi (University of Science and Technology of China)
Shengnan Wang (Huawei Technologies)
Juncheng Zhang (University of Science and Technology of China)
Zewen Jin (University of Science and Technology of China)
Ouxiang Zhou (University of Science and Technology of China)
Ruibo Liu (Google DeepMind)
Guanbin Xu (University of Science and Technology of China)
Youhui Bai (Huawei Technologies)
Bowen Ye (Peking University)
Kun Yuan (Peking University)
Tong Yang (Peking University)
Gong Zhang (Huawei Technologies)
Renhai Chen (Tianjin University)
Feng Wu (National University of Singapore)
Cheng Li (University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center)