Efficient LLM Inference with Activation Checkpointing and Hybrid Caching

📅 2025-01-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low computational efficiency of large language model (LLM) inference under GPU memory constraints, caused by frequent data migration between host and GPU memory, this paper proposes HybridServe. Its core innovation is a synergistic hybrid caching mechanism that jointly manages KV caches and activation checkpoints: cached activations allow recomputation to skip projection and FFN layers and, for the first time, accelerate KV cache reconstruction, breaking the conventional token-ID-only recomputation paradigm. HybridServe further integrates hierarchical cache management, heterogeneous memory scheduling, and a dynamic KV/activation ratio optimization algorithm. Experiments demonstrate that HybridServe achieves 2.19× higher throughput than the state-of-the-art offloading approach, significantly alleviating GPU compute idleness and PCIe bandwidth bottlenecks.
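The "dynamic KV/activation ratio" idea in the summary can be sketched as a simple time-balancing calculation. The following is a minimal, hypothetical cost model (function name and parameters are assumptions for illustration, not from the paper): recompute as many tokens' KV entries from activation checkpoints as can be hidden behind the weight-transfer time, and keep the remainder as raw KV cache.

```python
def max_recompute_fraction(n_tokens: int,
                           recompute_ms_per_token: float,
                           weight_load_ms: float) -> float:
    """Largest fraction of tokens whose KV entries can be rebuilt from
    activation checkpoints without exceeding the time already spent
    loading model weights over PCIe (hypothetical cost model)."""
    total_recompute_ms = n_tokens * recompute_ms_per_token
    if total_recompute_ms == 0:
        return 1.0
    return min(1.0, weight_load_ms / total_recompute_ms)

# Example: 1024 cached tokens, 0.05 ms to rebuild one token's KV entry,
# 40 ms spent transferring weights -> recompute up to ~78% of tokens
frac = max_recompute_fraction(1024, 0.05, 40.0)
```

The point of the sketch is that recomputation is "free" only while it overlaps the PCIe transfer; beyond that fraction, keeping raw KV entries is the cheaper option.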

📝 Abstract
Recent large language models (LLMs) with enormous model sizes use many GPUs to meet memory capacity requirements, incurring substantial costs for token generation. To provide cost-effective LLM inference with relaxed latency constraints, extensive research has focused on expanding GPU memory by leveraging the host memory. However, LLM inference engines that utilize the host memory often face underutilization of GPU compute units, as a considerable portion of inference time is spent loading the model onto the GPU via the host-GPU interconnect. To tackle these challenges of host memory offloading for LLMs, we introduce HybridServe, an LLM inference system with activation checkpointing based on activation caching. The activation cache stores activation checkpoints generated during intermediate inference stages, allowing fast recomputation of the KV cache while model parameters are transferred from host memory to the GPU. Unlike conventional methods that recompute the KV cache from scratch using token IDs, the activation cache allows bypassing projection and FFN operations. To balance activation recomputation against parameter loading overhead, this study proposes a KV-activation hybrid caching scheme that finds the best ratio of key-value to activation caches to adjust the recomputation time. Our system achieves 2.19x throughput improvement over the state-of-the-art prior work for offloading both model weights and KV cache.
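The abstract's claim that cached activations let recomputation bypass projection and FFN work can be illustrated with a toy single-layer sketch. This is a NumPy sketch with invented shapes and weight names, not the paper's actual architecture: if the post-FFN activation was checkpointed, rebuilding K and V takes only two projection matmuls instead of re-running the whole layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_tokens = 64, 8

# toy layer weights (stand-ins for illustration only)
W_ffn = rng.standard_normal((d_model, d_model))
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))

def ffn(x):
    """Stand-in for the feed-forward block preceding the K/V projections."""
    return np.maximum(x, 0) @ W_ffn

def recompute_from_scratch(hidden):
    # conventional path: re-run the FFN before projecting K and V
    h = ffn(hidden)
    return h @ W_k, h @ W_v

def recompute_from_checkpoint(cached_activation):
    # activation-cache path: the post-FFN activation was checkpointed,
    # so only the two K/V projection matmuls remain
    return cached_activation @ W_k, cached_activation @ W_v

hidden = rng.standard_normal((n_tokens, d_model))
checkpoint = ffn(hidden)  # what the activation cache would store
k_full, v_full = recompute_from_scratch(hidden)
k_fast, v_fast = recompute_from_checkpoint(checkpoint)
assert np.allclose(k_full, k_fast) and np.allclose(v_full, v_fast)
```

Both paths produce identical KV entries; the checkpoint path simply trades memory (storing the activation) for skipped compute, which is exactly the trade the hybrid caching scheme tunes.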
Problem

Research questions and friction points this paper is trying to address.

Mixed Storage Approach
GPU Computational Efficiency
Data Migration
Innovation

Methods, ideas, or system contributions that make the work stand out.

HybridServe
Activation Caching
Efficient Resource Utilization
Sanghyeon Lee
KAIST
Hongbeen Kim
KAIST
Soojin Hwang
KAIST
Guseul Heo
Ph.D. student
Minwoo Noh
KAIST
Jaehyuk Huh
KAIST
Computer Architecture · Operating Systems · System Security