Dovetail: A CPU/GPU Heterogeneous Speculative Decoding for LLM inference

📅 2024-12-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the hardware heterogeneity of consumer-grade devices—characterized by relatively weak GPUs and powerful CPUs—this paper proposes a CPU/GPU-coordinated speculative decoding framework: a lightweight draft model runs on the GPU to generate candidate tokens, while the target model performs parallel verification on the CPU, substantially reducing cross-device communication overhead. Key contributions include: (1) the first CPU-GPU division-of-labor paradigm for speculative decoding; (2) a depth-enhanced lightweight draft model; (3) a Dynamic Gating Fusion (DGF) mechanism to improve efficiency in feature-embedding fusion; and (4) adaptive optimization of candidate token count to balance latency and throughput. On HumanEval, LLaMA2-Chat-7B achieves 5.86 tokens/s using only 3 GB VRAM—2.77× faster than CPU-only inference—and reaches 8 tokens/s with 7 GB VRAM, significantly enhancing on-device large language model inference efficiency.
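The division of labor described above builds on the standard speculative decoding loop: the draft model proposes a few candidate tokens, and the target model checks them in one parallel pass, accepting the longest agreeing prefix. A minimal greedy-variant sketch is below; the two toy model functions stand in for Dovetail's GPU-side draft model and CPU-side target model, and all names are illustrative, not from the paper.

```python
def draft_propose(prefix, k):
    """Toy draft model: predicts the next token as (last + 1) mod 10."""
    out = []
    last = prefix[-1]
    for _ in range(k):
        last = (last + 1) % 10
        out.append(last)
    return out

def target_next(prefix):
    """Toy target model: agrees with the draft except after token 7."""
    last = prefix[-1]
    return 0 if last == 7 else (last + 1) % 10

def speculative_step(prefix, k=4):
    """One draft-then-verify step; returns the tokens actually accepted.

    In Dovetail the k candidates are scored by the target model in a
    single batched forward pass on the CPU; this loop mimics that
    verification sequentially for clarity.
    """
    candidates = draft_propose(prefix, k)
    accepted = []
    ctx = list(prefix)
    for tok in candidates:
        expected = target_next(ctx)
        if tok != expected:
            accepted.append(expected)  # target's correction ends the step
            break
        accepted.append(tok)
        ctx.append(tok)
    return accepted
```

Because every accepted draft token saves one full target-model forward pass, keeping the candidate count k small (as Dovetail's adaptive optimization does) trades a lower acceptance ceiling for cheaper parallel verification on the CPU.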

📝 Abstract
Due to the high resource demands of Large Language Models (LLMs), achieving widespread deployment on consumer-grade devices presents significant challenges. Personal and consumer-grade devices, including servers configured prior to the era of large-scale models, generally have relatively weak GPUs and relatively strong CPUs. However, most current methods depend primarily on GPUs for computation. We therefore propose Dovetail, an approach that deploys the draft model on the GPU to generate draft tokens while the target model performs parallel verification on the CPU, thereby improving the utilization of all available hardware resources and consuming less inter-device communication bandwidth. Accordingly, we redesigned the draft model to better align with heterogeneous hardware characteristics, implementing several optimizations: reducing the number of draft tokens to mitigate latency in parallel verification, increasing the depth of the draft model to enhance its predictive capacity, and introducing DGF (Dynamic Gating Fusion) to improve the integration of features and token embeddings. On the HumanEval benchmark, Dovetail achieved an inference speed of 5.86 tokens per second for LLaMA2-Chat-7B using 3 GB of VRAM, an approximately 2.77x improvement over CPU-only inference. The inference speed increased further to 8 tokens per second when utilizing 7 GB of VRAM.
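The abstract names DGF (Dynamic Gating Fusion) as the mechanism for combining hidden features with token embeddings. The paper does not spell out its parameterization here, so the sketch below shows one common form of such a gate purely for illustration: a learned sigmoid gate decides, per dimension, how much of the feature versus the embedding flows into the fused vector. Every name and weight shape is an assumption.

```python
import math

def dynamic_gating_fusion(feature, embedding, w_f, w_e, bias):
    """Hypothetical gated-fusion sketch (not the paper's exact DGF):
    g = sigmoid(w_f * f + w_e * e + b), fused = g * f + (1 - g) * e,
    computed element-wise over equal-length vectors."""
    fused = []
    for f, e, wf, we, b in zip(feature, embedding, w_f, w_e, bias):
        g = 1.0 / (1.0 + math.exp(-(wf * f + we * e + b)))  # sigmoid gate
        fused.append(g * f + (1.0 - g) * e)
    return fused
```

With zero weights and bias the gate sits at 0.5 and the fusion reduces to a plain average of feature and embedding; training the weights lets the draft model shift that balance dynamically per dimension.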
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Resource-constrained Devices
CPU-GPU Synergy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dovetail Technology
Resource Optimization
Efficient Inference
Libo Zhang
National University of Defense Technology, Changsha, China
Zhaoning Zhang
National University of Defense Technology
Research areas: MLSys, Computer Vision, Distributed Computing
Baizhou Xu
National University of Defense Technology, Changsha, China
Songzhu Mei
National University of Defense Technology, Changsha, China
Dongsheng Li
National University of Defense Technology, Changsha, China