🤖 AI Summary
To address GPU memory bottlenecks in long-context reasoning, particularly KV cache growth, this paper proposes a hybrid CPU-GPU attention mechanism: recent KV cache entries are retained on the GPU for dense attention, while fine-grained per-head sparse attention over the remaining entries is executed in parallel on the CPU. Outputs from the two devices are combined via log-sum-exp fusion. The method requires no model retraining and balances computational efficiency with accuracy. Key technical contributions include KV cache partitioning, cross-device sparse scheduling, fused output normalization, and lightweight CPU-optimized pruning. Experiments on commodity GPU hardware demonstrate substantial throughput improvements for long-sequence and high-batch inference, with accuracy close to full attention and overall performance exceeding mainstream sparse attention baselines.
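For intuition, here is a minimal NumPy sketch (not the paper's implementation) of how log-sum-exp fusion can merge two partial attention results: each KV partition returns a normalized partial output plus its log-sum-exp, and the fusion step reweights the two so the combined result equals full attention over the union of both partitions. The function names and the "recent vs. old" split below are illustrative assumptions.

```python
import numpy as np

def partial_attention(q, K, V):
    """Attention of a single query against one KV segment.

    Returns the normalized partial output together with the segment's
    log-sum-exp, which is what the fusion step needs.
    """
    scores = K @ q / np.sqrt(q.shape[-1])            # (n,)
    m = scores.max()                                 # numerical stabilizer
    w = np.exp(scores - m)                           # (n,)
    out = (w[:, None] * V).sum(axis=0) / w.sum()     # normalized partial output
    lse = m + np.log(w.sum())                        # log sum_j exp(score_j)
    return out, lse

def lse_fuse(out_a, lse_a, out_b, lse_b):
    """Merge two partial attention outputs as if they were computed jointly."""
    m = max(lse_a, lse_b)
    w_a, w_b = np.exp(lse_a - m), np.exp(lse_b - m)
    return (w_a * out_a + w_b * out_b) / (w_a + w_b)

# Toy check: fusing two KV partitions reproduces full attention.
rng = np.random.default_rng(0)
d, n = 64, 128
q = rng.standard_normal(d)
K, V = rng.standard_normal((n, d)), rng.standard_normal((n, d))

out_recent, lse_recent = partial_attention(q, K[96:], V[96:])  # "GPU" dense part
out_old, lse_old = partial_attention(q, K[:96], V[:96])        # "CPU" part (here: all old entries, no pruning)
fused = lse_fuse(out_recent, lse_recent, out_old, lse_old)

out_full, _ = partial_attention(q, K, V)
assert np.allclose(fused, out_full)
```

Because only a per-head output vector and a scalar log-sum-exp need to cross the PCIe bus per query, the fusion itself is cheap relative to transferring offloaded KV entries back to the GPU.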
📝 Abstract
Scaling inference for large language models (LLMs) is increasingly constrained by limited GPU memory, especially due to growing key-value (KV) caches required for long-context generation. While existing approaches offload KV caches to CPU memory or apply sparse attention to reduce GPU load, they often underutilize CPU compute resources and compromise accuracy. We present HGCA, a hybrid CPU-GPU attention mechanism that enables scalable, high-throughput LLM inference with near-full attention quality. HGCA performs dense attention on recently generated KV entries retained in GPU memory and parallel sparse attention on selected, salient KV entries in CPU memory. The attention outputs are efficiently merged using log-sum-exp fusion, minimizing PCIe transfer overhead. HGCA also introduces a fine-grained, per-head sparsification strategy optimized for CPU execution, preserving contextual relevance while reducing computation. Our implementation seamlessly integrates into existing LLM frameworks without requiring model retraining. Experiments across diverse models and workloads show that HGCA achieves superior scalability, supports longer sequences and larger batch sizes, and outperforms existing sparse attention baselines in both performance and accuracy -- all on commodity GPU hardware.
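As a rough illustration of the per-head sparsification idea, the sketch below selects, independently for each head, the top-k offloaded KV entries by query-key score and attends only to those, emitting the per-head output and log-sum-exp needed for fusion with the on-GPU dense part. The top-k-by-score heuristic and all names here are assumptions made for illustration, not HGCA's actual selection rule.

```python
import numpy as np

def per_head_sparse_attention(q, K, V, k_keep=32):
    """Illustrative per-head sparse attention over offloaded KV entries.

    q: (H, d) one query per head; K, V: (H, n, d) offloaded KV cache.
    Each head independently keeps its own k_keep highest-scoring entries
    (an assumed heuristic) and attends only to those, returning per-head
    normalized outputs and log-sum-exps for later cross-device fusion.
    """
    H, n, d = K.shape
    k_keep = min(k_keep, n)
    outs, lses = np.empty((H, d)), np.empty(H)
    for h in range(H):
        scores = K[h] @ q[h] / np.sqrt(d)                   # (n,)
        keep = np.argpartition(scores, -k_keep)[-k_keep:]   # per-head salient entries
        s = scores[keep]
        m = s.max()
        w = np.exp(s - m)
        outs[h] = (w[:, None] * V[h, keep]).sum(axis=0) / w.sum()
        lses[h] = m + np.log(w.sum())
    return outs, lses
```

Selecting entries per head rather than per layer lets each head retain the context positions it actually attends to, which is the property the abstract refers to as preserving contextual relevance while reducing CPU-side computation.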