🤖 AI Summary
Large language models (LLMs) suffer from quadratic computational and memory overhead in attention with respect to sequence length, driven by KV cache growth and high redundancy across attention heads. To address this, the authors propose Grouped-Head LatenT Attention (GTA), which compresses attention along two complementary paths: it reduces computational redundancy by sharing attention maps across grouped heads, and it reduces memory by storing values in a compact latent space decoded through a learnable nonlinear value decoder. Compared with Grouped-Query Attention, GTA cuts attention FLOPs by up to 62.5%, compresses the KV cache by up to 70%, and roughly doubles end-to-end inference throughput, while avoiding the extra overhead of Multi-Head Latent Attention. This substantially improves deployment efficiency in resource-constrained settings.
📝 Abstract
Attention mechanisms underpin the success of large language models (LLMs), yet their substantial computational and memory overhead poses challenges for optimizing efficiency and performance. A critical bottleneck arises as the KV cache and attention computations scale rapidly with text length, challenging deployment on hardware with limited computational and memory resources. We observe that attention mechanisms exhibit substantial redundancy: the KV cache can be significantly compressed, and attention maps across heads display high similarity, revealing that much of the computation and storage is unnecessary. Leveraging these insights, we propose **G**rouped-Head Laten**T** **A**ttention (GTA), a novel attention mechanism that reduces memory usage and computational complexity while maintaining performance. GTA comprises two components: (1) a shared attention map mechanism that reuses attention scores across multiple heads, decreasing the key cache size; and (2) a nonlinear value decoder with learned projections that compresses the value cache into a latent space, further cutting memory needs. GTA cuts attention computation FLOPs by up to *62.5%* versus Grouped-Query Attention and shrinks the KV cache by up to *70%*, all while avoiding the extra overhead of Multi-Head Latent Attention, improving LLM deployment efficiency. Consequently, GTA models achieve a *2x* increase in end-to-end inference speed, with prefill benefiting from the reduced computational cost and decoding benefiting from the smaller cache footprint.
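To make the two components concrete, here is a minimal NumPy sketch of the idea described above: attention maps computed once per head group and reused by every head in the group, and a value cache held in a low-dimensional latent space that a small nonlinear decoder expands into per-head values. All dimensions, the `tanh` nonlinearity, and the decoder weights `W1`/`W2` are illustrative assumptions, not the paper's actual architecture or sizes.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sizes only (assumed, not from the paper).
T = 8        # sequence length
H = 8        # total attention heads
G = 2        # head groups, each sharing one attention map
d = 16       # per-head dimension
d_lat = 4    # latent (compressed) value dimension

rng = np.random.default_rng(0)
Q = rng.standard_normal((G, T, d))       # one query/key pair per group
K = rng.standard_normal((G, T, d))       # smaller key cache: G << H entries
V_lat = rng.standard_normal((T, d_lat))  # the only value state cached

# Hypothetical nonlinear value decoder: shared expansion + per-head projection.
W1 = 0.1 * rng.standard_normal((d_lat, 2 * d_lat))
W2 = 0.1 * rng.standard_normal((H, 2 * d_lat, d))

# (1) Shared attention maps: computed once per group, reused by H/G heads,
# so score computation shrinks by a factor of H/G.
A = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d))  # shape (G, T, T)

# (2) Decode per-head values from the latent cache, then attend.
hidden = np.tanh(V_lat @ W1)             # shared nonlinear expansion, (T, 2*d_lat)
out = np.empty((H, T, d))
for h in range(H):
    out[h] = A[h // (H // G)] @ (hidden @ W2[h])  # head h reuses its group's map
```

In this sketch only `K` (per group, not per head) and `V_lat` (latent, `d_lat` wide instead of `H * d`) would need to be cached during decoding, which is where the KV cache savings come from; the decoder weights are model parameters, not cache.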