MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding

📅 2024-08-20
🏛️ arXiv.org
📈 Citations: 17
Influential: 1
🤖 AI Summary
Large language models (LLMs) serving long contexts (e.g., 32K tokens) face a fundamental trade-off between low latency and high throughput during batched inference. Speculative decoding (SD), while promising, is conventionally thought to lose its advantage at large batch sizes because KV-cache memory grows with both batch size and sequence length. Method: This work first analyzes how SD speedup scales with batch size and sequence length, identifying how the inference bottleneck shifts across regimes. It then introduces a drafting strategy that uses draft models with a sparse KV cache to alleviate the KV-cache memory bottleneck. Contribution/Results: The approach achieves up to 2x and 1.84x inference speedup on LLaMA-2-7B-32K and LLaMA-3.1-8B, respectively, for batch sizes of 32–256 on 8 NVIDIA A100 GPUs, without accuracy loss. It overturns the conventional view that SD fails under high-throughput settings and establishes a new paradigm for efficient long-context inference.
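The KV-cache pressure the summary describes can be sketched with simple arithmetic. The model shape below approximates LLaMA-2-7B (32 layers, 32 KV heads, head dimension 128, fp16); the exact figures are illustrative assumptions, not numbers from the paper:

```python
def kv_cache_bytes(batch, seq_len, n_layers=32, n_kv_heads=32,
                   head_dim=128, dtype_bytes=2):
    """Total KV-cache size in bytes: one K and one V tensor per layer,
    fp16 by default. Grows linearly in both batch size and sequence length,
    so their product dominates memory in the large-batch, long-context regime."""
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * seq_len * batch

# Batch 32 at a 32K context: 512 KiB per token, 16 GiB per sequence.
gib = kv_cache_bytes(batch=32, seq_len=32 * 1024) / 2**30
print(f"{gib:.0f} GiB")  # 512 GiB — far beyond a single 80 GB A100
```

This is why a draft model with a sparse KV cache helps: the draft's memory traffic stays small even as batch size and sequence length grow.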

📝 Abstract
Large Language Models (LLMs) have become more prevalent in long-context applications such as interactive chatbots, document analysis, and agent workflows, but it is challenging to serve long-context requests with low latency and high throughput. Speculative decoding (SD) is a widely used technique to reduce latency without sacrificing performance, but conventional wisdom suggests that its efficacy is limited to small batch sizes. In MagicDec, we show that, surprisingly, SD can achieve speedup even in high-throughput inference regimes for moderate-to-long sequences. More interestingly, our rigorous analysis shows that an intelligent drafting strategy can achieve better speedup with increasing batch size. MagicDec first identifies how the bottleneck shifts with increasing batch size and sequence length, and uses these insights to deploy speculative decoding more effectively for high-throughput inference. It then leverages draft models with a sparse KV cache to address the KV bottleneck, which scales with both sequence length and batch size. This finding underscores the broad applicability of speculative decoding in long-context serving, as it can enhance throughput and reduce latency without compromising accuracy. For moderate-to-long sequences, we demonstrate up to 2x speedup for LLaMA-2-7B-32K and 1.84x speedup for LLaMA-3.1-8B when serving batch sizes ranging from 32 to 256 on 8 NVIDIA A100 GPUs. The code is available at https://github.com/Infini-AI-Lab/MagicDec/.
Problem

Research questions and friction points this paper is trying to address.

Reducing latency and increasing throughput in long-context LLM applications
Optimizing speculative decoding for high batch sizes and long sequences
Enhancing KV cache efficiency to improve speedup in inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applies speculative decoding in the high-throughput serving regime
Leverages draft models with a sparse KV cache
Proposes an analytically grounded optimal drafting strategy
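To make the core SD mechanic concrete, here is a minimal sketch of draft verification. It uses simplified greedy verification (accept the longest draft prefix the target would itself produce, then append one target token); the paper's actual method and models differ, and the toy `target_model` below is purely hypothetical:

```python
def verify_draft(target_model, context, draft_tokens):
    """Greedy speculative-decoding verification (simplified).

    target_model(ctx) returns the target's greedy next token for ctx.
    Accepts the longest prefix of draft_tokens matching the target's own
    choices, then appends one target-produced token (correction or bonus),
    so every call yields at least one target-quality token."""
    accepted = []
    for tok in draft_tokens:
        target_tok = target_model(context + accepted)
        if target_tok != tok:
            accepted.append(target_tok)  # correction replaces the bad draft
            return accepted
        accepted.append(tok)             # draft token verified
    accepted.append(target_model(context + accepted))  # bonus token
    return accepted

# Toy deterministic "model": next token equals the current context length.
target = lambda ctx: len(ctx)
print(verify_draft(target, [0, 1, 2], [3, 4, 99]))  # [3, 4, 5]
```

In a real system the target scores all draft tokens in one batched forward pass, which is how SD amortizes the target model's cost per generated token.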
Jian Chen
Carnegie Mellon University
Vashisth Tiwari
Carnegie Mellon University
Ranajoy Sadhukhan
Carnegie Mellon University
Zhuoming Chen
PhD student, Carnegie Mellon University
Computer Systems, Machine Learning
Jinyuan Shi
Moffett AI
Ian En-Hsu Yen
Moffett AI
Beidi Chen
Carnegie Mellon University
Machine Learning