HGCA: Hybrid GPU-CPU Attention for Long Context LLM Inference

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address GPU memory bottlenecks—particularly KV cache bloat—in long-context reasoning, this paper proposes a hybrid CPU-GPU attention mechanism: recent KV caches are retained on GPU for dense attention computation, while fine-grained per-head sparse attention is executed in parallel on CPU. Cross-device outputs are integrated via log-sum-exp fusion. The method requires no model retraining and balances computational efficiency with accuracy. Key technical contributions include KV cache partitioning, cross-device sparse scheduling, fused output normalization, and lightweight CPU-optimized pruning. Experiments on commodity GPU hardware demonstrate substantial throughput improvements for long-sequence and high-batch inference, achieving accuracy close to full attention and outperforming mainstream sparse attention baselines in overall performance.

📝 Abstract
Scaling inference for large language models (LLMs) is increasingly constrained by limited GPU memory, especially due to growing key-value (KV) caches required for long-context generation. While existing approaches offload KV caches to CPU memory or apply sparse attention to reduce GPU load, they often underutilize CPU compute resources and compromise accuracy. We present HGCA, a hybrid CPU-GPU attention mechanism that enables scalable, high-throughput LLM inference with near-full attention quality. HGCA performs dense attention on recently generated KV entries retained in GPU memory and parallel sparse attention on selected, salient KV entries in CPU memory. The attention outputs are efficiently merged using log-sum-exp fusion, minimizing PCIe transfer overhead. HGCA also introduces a fine-grained, per-head sparsification strategy optimized for CPU execution, preserving contextual relevance while reducing computation. Our implementation seamlessly integrates into existing LLM frameworks without requiring model retraining. Experiments across diverse models and workloads show that HGCA achieves superior scalability, supports longer sequences and larger batch sizes, and outperforms existing sparse attention baselines in both performance and accuracy -- all on commodity GPU hardware.
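The log-sum-exp fusion the abstract describes relies on a standard identity: if each device computes attention over its own KV partition and also returns the log-sum-exp of its scores, the two partial outputs can be merged into the exact full-attention result. A minimal NumPy sketch of this idea follows; the function names (`attention_with_lse`, `lse_merge`) are illustrative, not HGCA's actual API.

```python
import numpy as np

def attention_with_lse(q, K, V):
    # Scaled dot-product attention over one KV partition.
    # Besides the normalized output, return the log-sum-exp of the
    # scores -- the only extra scalar needed to merge partitions later.
    scores = q @ K.T / np.sqrt(q.shape[-1])
    m = scores.max()
    lse = m + np.log(np.exp(scores - m).sum())
    weights = np.exp(scores - lse)          # softmax over this partition
    return weights @ V, lse

def lse_merge(o_gpu, lse_gpu, o_cpu, lse_cpu):
    # Exact merge: reweight each partition's output by its share of the
    # total softmax mass, exp(lse_i - lse_total).
    lse_total = np.logaddexp(lse_gpu, lse_cpu)
    return (np.exp(lse_gpu - lse_total) * o_gpu
            + np.exp(lse_cpu - lse_total) * o_cpu)
```

Because only per-query outputs and one scalar per partition cross the PCIe bus, the merge is cheap; splitting the KV cache any way (here, "recent on GPU, rest on CPU") reproduces full attention bit-for-bit up to floating-point error.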
Problem

Research questions and friction points this paper is trying to address.

Scaling LLM inference constrained by GPU memory limits
Existing CPU offloading underutilizes compute and reduces accuracy
Need hybrid CPU-GPU attention for long-context high-throughput inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid CPU-GPU attention mechanism
Log-sum-exp fusion for efficient merging
Per-head sparsification optimized for CPU
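The per-head sparsification bullet above can be illustrated with a generic salience proxy: keep, independently for each head, the top-k keys by query-key score. This is only a sketch under that assumption; HGCA's actual CPU-optimized selection criterion and pruning machinery are not reproduced here, and `per_head_topk` is a hypothetical helper name.

```python
import numpy as np

def per_head_topk(q, K, k):
    # Select, independently per head, the k cached keys with the largest
    # scaled query-key scores for the current query.
    # q: (heads, d) current query; K: (heads, seq, d) CPU-resident keys.
    scores = np.einsum('hd,hsd->hs', q, K) / np.sqrt(q.shape[-1])
    return np.argsort(scores, axis=-1)[:, -k:]   # (heads, k) key indices
```

Selecting per head rather than per token lets each head attend to the part of the long context it actually finds salient, which is why fine-grained sparsification tends to track full-attention quality more closely than uniform pruning.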
Weishu Deng
The University of Texas at Arlington
Yujie Yang
The University of Texas at Arlington
Peiran Du
The University of Texas at Arlington
Lingfeng Xiang
The University of Texas at Arlington
Zhen Lin
The University of Texas at Arlington
Chen Zhong
The University of Texas at Arlington
Song Jiang
The University of Texas at Arlington
Hui Lu
Department of Computer Science and Engineering (CSE), The University of Texas at Arlington (UTA)
Cloud Computing, Virtualization, File and Storage Systems, Computer Networks, Computer Systems
Jia Rao
The University of Texas at Arlington
Cloud Computing, Distributed Systems, Machine Learning, Reinforcement Learning