Striking the Right Balance between Compute and Copy: Improving LLM Inferencing Under Speculative Decoding

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
In large language model (LLM) inference, KV cache updates (specifically memory allocation, copying, and strided in-place updates) become the dominant overhead as sequence length grows, leading to high GPU costs and underutilized CPU resources. To address this, we propose Balancing Memory and Compute (BMC), a novel KV cache mechanism. BMC's key contributions are: (1) a dynamic redundant-row allocation strategy that minimizes memory redundancy while preserving in-place update capability; (2) explicit reuse of redundant computation for speculative decoding, enabling joint optimization of memory operations and compute; and (3) lightweight analytical modeling for efficient adaptation across heterogeneous hardware. Experiments across multiple CPU and GPU platforms show that BMC achieves up to 3.2×, 1.36×, and 2.29× higher throughput than HuggingFace Transformers, vLLM, and DeepSpeed, respectively, significantly reducing per-token inference cost.
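
To make the allocation strategy concrete, here is a minimal PyTorch sketch of a BMC-style cache. It illustrates the idea as described in the summary, not the paper's implementation; the class and method names (RedundantKVCache, append) and the single-head layout are assumptions.

```python
import torch

class RedundantKVCache:
    """Illustrative sketch (not the paper's code): instead of reallocating
    and copying the KV cache on every generated token, allocate r extra
    zero rows once every r iterations and fill them in place."""

    def __init__(self, r: int, head_dim: int, dtype=torch.float32):
        self.r = r                      # redundant rows per (re)allocation
        self.head_dim = head_dim
        self.len = 0                    # number of valid rows
        self.keys = torch.zeros(r, head_dim, dtype=dtype)
        self.values = torch.zeros(r, head_dim, dtype=dtype)

    def append(self, k: torch.Tensor, v: torch.Tensor) -> None:
        if self.len == self.keys.shape[0]:
            # Amortized copy: grow by r rows once every r tokens.
            pad = torch.zeros(self.r, self.head_dim, dtype=self.keys.dtype)
            self.keys = torch.cat([self.keys, pad], dim=0)
            self.values = torch.cat([self.values, pad], dim=0)
        # In-place strided update for the other r - 1 iterations: no copy.
        self.keys[self.len] = k
        self.values[self.len] = v
        self.len += 1
```

Attention computed over the full zero-padded tensors touches the unused rows; that extra work is the redundant computation BMC trades for copy avoidance, and the padded positions must be masked out of the attention softmax so they do not perturb the output.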

📝 Abstract
With the skyrocketing costs of GPUs and their virtual instances in the cloud, there is a significant desire to use CPUs for large language model (LLM) inference. KV cache update, often implemented as allocation, copying, and in-place strided update for each generated token, incurs significant overhead. As the sequence length increases, the allocation and copy overheads dominate the performance. Alternative approaches may allocate large KV tensors upfront to enable in-place updates, but these matrices (with zero-padded rows) cause redundant computation. In this work, we propose a new KV cache allocation mechanism called Balancing Memory and Compute (BMC). BMC allocates, once every r iterations, KV tensors with r redundant rows, allowing in-place updates without copy overhead for those iterations, at the expense of a small amount of redundant computation. Second, we make an interesting observation that the extra rows allocated in the KV tensors, and the resulting redundant computation, can be repurposed for Speculative Decoding (SD), which improves token generation efficiency. Last, BMC represents a spectrum of design points with different values of r. To identify the best-performing design point(s), we derive a simple analytical model for BMC. The proposed BMC method achieves an average throughput acceleration of up to 3.2x over baseline HuggingFace (without SD). Importantly, when we apply BMC with SD, it yields an additional speedup of up to 1.39x over and above the speedup offered by SD alone. Further, BMC achieves a throughput acceleration of up to 1.36x and 2.29x over the state-of-the-art inference servers vLLM and DeepSpeed, respectively. Although the BMC technique is evaluated extensively across different classes of CPUs (desktop and server class), we also evaluate the scheme on GPUs and demonstrate that it works well there.
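
The abstract does not give the analytical model's closed form, so the following is a toy cost model under two stated assumptions: a reallocation every r tokens copies roughly the current sequence length in rows, and the zero-padded rows add on average (r - 1) / 2 rows of extra attention work per token. The function name and the constants are hypothetical, platform-dependent quantities, which is precisely why the paper calibrates its model per hardware.

```python
def bmc_cost_per_token(seq_len: int, r: int,
                       copy_cost: float, compute_cost: float) -> float:
    """Toy per-token cost model (an assumption, not the paper's model):
    - reallocating every r tokens copies ~seq_len rows once per r tokens,
      so the amortized copy cost is copy_cost * seq_len / r;
    - the zero-padded rows add, on average, (r - 1) / 2 redundant rows
      of attention compute per token."""
    amortized_copy = copy_cost * seq_len / r
    redundant_compute = compute_cost * (r - 1) / 2
    return amortized_copy + redundant_compute

# Sweep r to pick the best-performing design point for a given platform.
best_r = min(range(1, 257),
             key=lambda r: bmc_cost_per_token(seq_len=4096, r=r,
                                              copy_cost=1.0,
                                              compute_cost=0.5))
```

Under these illustrative constants the copy term falls as 1/r while the redundant-compute term grows linearly in r, so the sweep finds an interior optimum; on real hardware the two coefficients shift the best r, which is the spectrum of design points the abstract describes.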
Problem

Research questions and friction points this paper is trying to address.

Reducing KV cache allocation and copy overhead in LLM inference
Balancing memory operations with computational efficiency for token generation
Optimizing speculative decoding performance while minimizing redundant computations
Innovation

Methods, ideas, or system contributions that make the work stand out.

BMC allocates KV cache with redundant rows periodically
Repurposes redundant computation for speculative decoding efficiency (see the sketch after this list)
Analytical model identifies optimal design points for performance
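
As a rough illustration of the second point: the redundant rows can hold the K/V entries of draft tokens so the target model scores all of them in one batched pass, after which acceptance reduces to standard greedy speculative-decoding verification. The sketch below shows only that generic verification step; it is not the paper's exact SD integration, and the function name is hypothetical.

```python
import torch

def count_accepted_drafts(target_logits: torch.Tensor,
                          draft_tokens: torch.Tensor) -> int:
    """Greedy speculative-decoding acceptance (illustrative sketch).

    target_logits: [k, vocab] tensor of the target model's logits for each
        draft position, produced by one batched forward pass whose K/V
        entries landed in the cache's redundant rows.
    draft_tokens: [k] tensor of the draft model's proposed tokens.

    Returns the length of the longest draft prefix the target agrees with.
    """
    predicted = target_logits.argmax(dim=-1)          # target's greedy picks
    agree = (predicted == draft_tokens).to(torch.int64)
    return int(agree.cumprod(dim=0).sum().item())     # longest True prefix
```

Because the accepted tokens' K/V entries are already sitting in the preallocated rows, cache capacity and compute that would otherwise be wasted on zero padding do useful verification work instead, which is the sense in which BMC "repurposes" them.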
Authors

Arun Ramachandran, Advanced Micro Devices (AMD)
Ramaswamy Govindarajan, Indian Institute of Science (IISc)
Murali Annavaram, University of Southern California (USC)
Prakash Raghavendra, Advanced Micro Devices (AMD)
Hossein Entezari Zarch, University of Southern California (USC)
Lei Gao, University of Southern California (USC)
Chaoyi Jiang, University of Southern California (USC)