🤖 AI Summary
This work addresses the high computational cost of attention in large language model inference, where existing post-hoc sparsification methods select tokens heuristically after attention computation, introducing bias that impairs long-range reasoning. To overcome this, the authors propose Pre-hoc Sparsity (PrHS), a mechanism that selects critical key-value (KV) tokens before attention scoring using an information-theoretic criterion. They establish, for the first time, a theoretical bound linking KV-dropping quality to mutual-information loss, enabling unbiased and verifiable accuracy control. Three orthogonal pre-hoc selectors, operating along the time, depth, and layer axes, deliver substantial efficiency gains on LLaMA and Mistral architectures: retrieval cost falls by over 90% on GSM8K and CoQA, retrieval sparsity triples relative to HShare at matched or better accuracy, average performance degrades by less than 1% on LongBench, attention FLOPs shrink by about 15% versus prior sparse baselines, and attention-operator latency improves 9.9× with 2.8× higher throughput on NVIDIA A100 GPUs over the dense baseline.
📝 Abstract
A core bottleneck in large language model (LLM) inference is the cost of attending over the ever-growing key-value (KV) cache. Although near-oracle top-k KV selection can preserve the quality of dense attention while sharply reducing computation and bandwidth, existing sparse methods generally rely on posterior heuristics, i.e., selectors conditioned on observed attention or proxy scores. Such conditioning introduces posterior bias: it tends to distort true token importance and miss salient tokens, thereby impairing long-range reasoning. To tackle this problem, we propose Pre-hoc Sparsity (PrHS), which selects KV entries before attention scoring and provides explicit accuracy control. Let the attention mass of the discarded entries be δ (the dropped mass). Through a marginal-to-mutual-information analysis, we derive an upper bound on the mutual-information loss that depends only on δ. This relation explains the failure modes of posterior heuristics and enables verifiable guarantees by controlling the dropped mass in advance. Within PrHS, we instantiate three orthogonal pre-hoc selectors along the axes of time, depth, and layer. Extensive experiments on the LLaMA and Mistral model families validate PrHS. Across GSM8K and CoQA, PrHS reduces retrieval overhead by over 90%, achieving 3× higher retrieval sparsity than HShare at matched or better accuracy. It incurs under 1% average degradation on LongBench, lowers attention FLOPs by about 15% versus prior sparse baselines, and yields a 9.9× speedup in attention-operator latency and 2.8× higher throughput on NVIDIA A100-80GB GPUs over the dense baseline.
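To make the dropped-mass quantity concrete, here is a minimal NumPy sketch (not the paper's implementation). It computes dense attention weights for a single query, picks an oracle top-k index set for illustration (PrHS, by contrast, commits to the set *before* scoring), and reports δ, the attention mass falling outside the selection, alongside the resulting output error. The function name, toy dimensions, and oracle selector are all illustrative assumptions.

```python
import numpy as np

def dropped_mass(q, K, V, k):
    """Dense attention vs. top-k sparse attention for a single query.

    Returns delta (the attention mass of the discarded KV entries) and
    the L2 error between the dense and sparse attention outputs. The
    top-k set here is chosen from the dense weights themselves (an
    oracle selector, for illustration only); PrHS instead selects the
    set before attention scoring.
    """
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)            # (n,) raw attention logits
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                   # dense softmax weights
    S = np.argsort(alpha)[-k:]             # indices of the k largest weights
    delta = 1.0 - alpha[S].sum()           # dropped attention mass
    alpha_S = alpha[S] / alpha[S].sum()    # renormalize over the kept set
    err = np.linalg.norm(alpha @ V - alpha_S @ V[S])
    return delta, err

rng = np.random.default_rng(0)
n, d, k = 1024, 64, 64                     # toy sizes, not from the paper
q = rng.standard_normal(d)
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
delta, err = dropped_mass(q, K, V, k)
print(f"dropped mass delta = {delta:.4f}, output error = {err:.4f}")
```

Since the sparse output differs from the dense one only through the mass δ left outside the kept set, driving δ toward zero directly controls the approximation error; the abstract's mutual-information bound, which depends only on δ, formalizes this intuition and lets PrHS enforce it in advance rather than after scoring.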