🤖 AI Summary
To address throughput bottlenecks in online LLM inference under constrained GPU memory, caused by unbounded KV cache growth during autoregressive decoding, this paper proposes APEX, a dynamic heterogeneous scheduling framework that enables deep CPU-GPU parallelism in the decoding phase. Its core innovation is a bandwidth-aware scheduling strategy based on fine-grained performance prediction, integrated with hybrid memory management and a low-overhead runtime system to maximize compute-communication overlap without incurring additional latency. Compared to vLLM, APEX achieves 84–96% and 11–89% higher throughput on T4 and A10 GPUs, respectively. In long-output scenarios, it further outperforms the best existing hybrid scheduler by up to 49% and 37% on these platforms. The framework significantly enhances real-time decoding performance for edge and cost-sensitive deployments.
📝 Abstract
Deploying large language models (LLMs) for online inference is often constrained by limited GPU memory, particularly due to the growing KV cache during auto-regressive decoding. Hybrid GPU-CPU execution has emerged as a promising solution by offloading KV cache management and parts of attention computation to the CPU. However, a key bottleneck remains: existing schedulers fail to effectively overlap CPU-offloaded tasks with GPU execution during the latency-critical, bandwidth-bound decode phase. This particularly penalizes real-time, decode-heavy applications (e.g., chat, Chain-of-Thought reasoning), which are currently underserved by existing systems, especially under the memory pressure typical of edge or low-cost deployments. We present APEX, a novel, profiling-informed scheduling strategy that maximizes CPU-GPU parallelism during hybrid LLM inference. Unlike systems relying on static rules or purely heuristic approaches, APEX dynamically dispatches compute across heterogeneous resources by predicting execution times of CPU and GPU subtasks to maximize overlap while avoiding scheduling overheads. We evaluate APEX on diverse workloads and GPU architectures (NVIDIA T4, A10), using LLaMa-2-7B and LLaMa-3.1-8B models. Compared to GPU-only schedulers like vLLM, APEX improves throughput by 84–96% on T4 and 11–89% on A10 GPUs, while preserving latency. Against the best existing hybrid schedulers, it delivers up to 49% (T4) and 37% (A10) higher throughput in long-output settings. APEX significantly advances hybrid LLM inference efficiency on such memory-constrained hardware and provides a blueprint for scheduling in heterogeneous AI systems, filling a critical gap for efficient real-time LLM applications.
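
To make the scheduling idea concrete, the sketch below illustrates the kind of prediction-driven offload decision the abstract describes: estimate per-step CPU and GPU execution times from simple profiled cost models and offload CPU-resident attention only when it can be hidden behind GPU compute. All names, cost-model forms, and coefficients here are hypothetical placeholders for illustration; they are not APEX's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class BatchProfile:
    """Per-decode-step features used for execution-time prediction (hypothetical)."""
    gpu_tokens: int            # cached tokens whose attention runs on the GPU
    cpu_tokens: int            # cached tokens whose KV entries are offloaded to host memory
    kv_bytes_to_copy: int      # bytes that would cross PCIe if the KV cache were copied back


def predict_gpu_ms(p: BatchProfile, gpu_ms_per_token: float) -> float:
    """Illustrative linear cost model fit offline from profiling runs."""
    return p.gpu_tokens * gpu_ms_per_token


def predict_cpu_ms(p: BatchProfile, cpu_ms_per_token: float) -> float:
    return p.cpu_tokens * cpu_ms_per_token


def predict_copy_ms(p: BatchProfile, pcie_gb_per_s: float) -> float:
    # bytes / (GB/s) converted to milliseconds
    return p.kv_bytes_to_copy / (pcie_gb_per_s * 1e6)


def should_offload(p: BatchProfile,
                   gpu_ms_per_token: float = 0.02,
                   cpu_ms_per_token: float = 0.15,
                   pcie_gb_per_s: float = 12.0) -> bool:
    """Offload the CPU-resident attention subtask only if it is predicted to
    finish no later than the concurrent GPU subtask (so it is fully hidden),
    or if it still beats the alternative of copying the KV cache back to the GPU."""
    gpu_ms = predict_gpu_ms(p, gpu_ms_per_token)
    cpu_ms = predict_cpu_ms(p, cpu_ms_per_token)
    copy_back_ms = predict_copy_ms(p, pcie_gb_per_s) + p.cpu_tokens * gpu_ms_per_token
    return cpu_ms <= gpu_ms or cpu_ms < copy_back_ms


if __name__ == "__main__":
    step = BatchProfile(gpu_tokens=4096, cpu_tokens=1024, kv_bytes_to_copy=256 << 20)
    print("offload this step:", should_offload(step))
```

The key design point the example is meant to convey is that the decision is made per decode step from predicted subtask durations, rather than from a static offload ratio or heuristic threshold.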