AI Summary
Large language models (LLMs) suffer from substantial KV cache memory overhead, and aggressive sub-bit quantization of the cache causes severe performance degradation. To address this, we propose AnTKV, an Anchor Token-aware Vector Quantization framework that introduces the Anchor Score (AnS), a metric quantifying token-level sensitivity to KV quantization error, and preserves full-precision representations for high-sensitivity tokens. Our method integrates Triton-accelerated online anchor selection with a FlashAttention-compatible design for efficient deployment. Evaluated on LLaMA-3-8B, AnTKV enables single-GPU inference with context lengths up to 840K tokens and achieves up to a 3.5x decoding throughput improvement. On Mistral-7B, it attains perplexities of 6.32 (1-bit) and 8.87 (0.375-bit), substantially outperforming prior quantization approaches while maintaining practical efficiency.
Abstract
Quantization has emerged as an effective and lightweight solution to reduce the memory footprint of the KV cache in Large Language Models (LLMs). Nevertheless, minimizing the performance degradation caused by ultra-low-bit KV cache quantization remains a significant challenge. We observe that quantizing the KV cache of different tokens has varying impacts on the quality of attention outputs. To systematically investigate this phenomenon, we perform forward error propagation analysis on attention and propose the Anchor Score (AnS), which quantifies the sensitivity of each token's KV cache to quantization-induced error. Our analysis reveals significant disparities in AnS across tokens, suggesting that preserving a small subset of high-AnS tokens at full precision (FP16) can greatly mitigate accuracy loss in aggressive quantization scenarios. Based on this insight, we introduce AnTKV, a novel framework that leverages Anchor Token-aware Vector Quantization to compress the KV cache. Furthermore, to support efficient deployment, we design and develop a Triton kernel that is fully compatible with FlashAttention, enabling fast online Anchor Token selection. AnTKV enables LLaMA-3-8B to handle context lengths up to 840K tokens on a single 80GB A100 GPU, while achieving up to 3.5x higher decoding throughput compared to the FP16 baseline. Our experimental results demonstrate that AnTKV matches or outperforms prior works such as KIVI, SKVQ, KVQuant, and CQ under 4-bit settings. More importantly, AnTKV achieves significantly lower perplexity under ultra-low-bit quantization on Mistral-7B, with only 6.32 at 1-bit and 8.87 at 0.375-bit, compared to the FP16 baseline of 4.73.
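The core idea of sensitivity-aware token selection can be illustrated with a toy sketch. The paper derives the Anchor Score from forward error propagation through attention; the noise-injection proxy below (perturbing one token's K/V entries and measuring the change in attention output) is an illustrative assumption, not the paper's actual formula, and `split_anchors` is a hypothetical helper showing the mixed-precision split.

```python
import numpy as np

def attention_output(q, K, V):
    # Standard scaled dot-product attention for a single query vector.
    scores = q @ K.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

def anchor_scores(q, K, V, noise_scale=0.05, trials=20, rng=None):
    # Illustrative proxy for the paper's Anchor Score (AnS): inject small
    # random noise into one token's K/V rows (mimicking quantization error)
    # and measure the average change in the attention output. High-scoring
    # tokens are the ones most sensitive to quantization.
    rng = np.random.default_rng(0) if rng is None else rng
    base = attention_output(q, K, V)
    scores = np.zeros(K.shape[0])
    for t in range(K.shape[0]):
        err = 0.0
        for _ in range(trials):
            Kp, Vp = K.copy(), V.copy()
            Kp[t] += rng.normal(0.0, noise_scale, K.shape[1])
            Vp[t] += rng.normal(0.0, noise_scale, V.shape[1])
            err += np.linalg.norm(attention_output(q, Kp, Vp) - base)
        scores[t] = err / trials
    return scores

def split_anchors(scores, keep_ratio=0.05):
    # Keep the top-k most sensitive tokens at FP16; the rest would be
    # compressed with vector quantization.
    k = max(1, int(len(scores) * keep_ratio))
    mask = np.zeros(len(scores), dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask  # True -> retain FP16, False -> vector-quantize
```

In the actual system this selection runs online inside a Triton kernel fused with FlashAttention; the NumPy version above only conveys the selection logic, not the kernel implementation.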