VOCABTRIM: Vocabulary Pruning for Efficient Speculative Decoding in LLMs

📅 2025-06-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the memory overhead and drafting inefficiency that large target-model vocabularies cause in speculative decoding (SpD), this paper proposes VocabTrim, a training-free, lightweight vocabulary-pruning method. Its core contribution is identifying the redundancy of full-vocabulary sampling during drafting: VocabTrim keeps only the most frequently sampled tokens of the target model’s vocabulary, reconstructs the drafter’s LM head over this compact sub-vocabulary, and remaps drafted token ids back to the full vocabulary for verification. It requires no modification to the target model and no additional training. While restricting the drafter’s vocabulary slightly lowers the acceptance rate, it substantially reduces drafting latency in memory-bound settings, yielding a 16% improvement in memory-bound speedup for Llama-3.2-3B-Instruct on Spec-Bench. The method is particularly attractive for resource-constrained deployments such as edge devices, where generation is typically memory-bound.

📝 Abstract
In this paper, we introduce a simple training-free technique to improve the performance of drafter-based speculative decoding (SpD) methods that incorporate a language modeling head (LM head) during the drafting process. Drafter-based speculative decoding leverages one or more smaller language models, a.k.a. drafters or draft models, to sample a draft sequence or tree consisting of multiple tokens, followed by verification by a base LLM, the target model, which accepts a subset as its valid generation. Since speculative decoding is usually considered to require a one-to-one mapping between the vocabularies of the target model and the draft model, it has been natural to share the vocabulary between them, or even to share the LM head, as in EAGLE or Medusa. We first identify that this draft-token sampling scheme inherently incurs unnecessary inference overhead in drafting, especially for target LLMs with very large vocabularies. We then propose a simple technique, VocabTrim, to mitigate the drafting overhead and improve generation speed in memory-bound environments. VocabTrim reconstructs the drafter LM head to contain only a limited set of tokens, selected as those most frequently sampled from the vocabulary of the target model. While limiting the vocabulary in drafting slightly degrades the acceptance rate, it significantly reduces drafting latency in memory-bound settings, which is often the case on edge devices, resulting in a higher memory-bound speedup (MBSU). We show that our method boosts the memory-bound speedup for Llama-3 models on Spec-Bench, specifically by 16% for Llama-3.2-3B-Instruct.
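The abstract's core operation, reconstructing the drafter LM head over the most frequently sampled tokens, can be sketched as follows. This is a minimal illustration with NumPy, not the paper's implementation: the function name, the toy weight matrix, and the token counts are all hypothetical, and in practice the counts would come from profiling the target model's sampled outputs on a corpus.

```python
import numpy as np

def trim_lm_head(lm_head_weight, token_counts, keep_k):
    """Keep only the keep_k most frequently sampled tokens (hypothetical sketch).

    lm_head_weight: (vocab_size, hidden_dim) drafter LM head weight matrix
    token_counts:   (vocab_size,) sampling frequencies from the target model
    Returns the trimmed head and a mapping from trimmed ids back to full ids.
    """
    # Indices of the keep_k most frequent tokens in the target vocabulary,
    # re-sorted into ascending order so id assignment is deterministic.
    kept_ids = np.sort(np.argsort(token_counts)[::-1][:keep_k])
    trimmed_head = lm_head_weight[kept_ids]  # (keep_k, hidden_dim)
    # kept_ids doubles as the remap table: drafted token i in the trimmed
    # vocabulary corresponds to token kept_ids[i] in the full vocabulary,
    # which is what the target model sees at verification time.
    return trimmed_head, kept_ids

# Toy example: vocabulary of 10 tokens, hidden dim 4, keep the top 3.
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 4))
counts = np.array([5, 0, 9, 1, 0, 7, 0, 0, 3, 0])
W_trim, mapping = trim_lm_head(W, counts, keep_k=3)
print(W_trim.shape)      # (3, 4)
print(mapping.tolist())  # [0, 2, 5]
```

During drafting, logits are computed only over the `keep_k` rows; before verification, each drafted id is mapped back through `mapping`, so the target model is untouched, which is why the technique is training-free.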
Problem

Research questions and friction points this paper is trying to address.

Reduces drafting overhead in speculative decoding for LLMs
Improves generation speed in memory-bound environments
Optimizes drafter vocabulary for faster token sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vocabulary pruning for efficient speculative decoding
Training-free technique to reduce drafting overhead
Reconstructs drafter LM head with frequent tokens
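Why trimming helps in a memory-bound regime can be seen with a back-of-envelope estimate: per-token drafting latency scales roughly with the bytes of weights read per forward pass, and the LM head's share of that traffic is proportional to vocabulary size. The numbers below are illustrative assumptions (fp16 weights, a Llama-3-style hidden size, and a guessed backbone traffic figure), not measurements from the paper.

```python
# Back-of-envelope memory-traffic model for a memory-bound drafter.
# All concrete numbers here are assumptions for illustration only.

def lm_head_bytes(vocab_size, hidden_dim, bytes_per_param=2):  # fp16
    """Bytes read to apply an LM head of shape (vocab_size, hidden_dim)."""
    return vocab_size * hidden_dim * bytes_per_param

hidden = 3072                         # assumed drafter hidden dimension
full_vocab, trimmed_vocab = 128_256, 32_000  # full vs trimmed vocabulary
backbone_bytes = 1.0e9                # assumed backbone weight traffic/token

full = backbone_bytes + lm_head_bytes(full_vocab, hidden)
trimmed = backbone_bytes + lm_head_bytes(trimmed_vocab, hidden)
head_ratio = lm_head_bytes(full_vocab, hidden) / lm_head_bytes(trimmed_vocab, hidden)
print(f"LM-head traffic reduced {head_ratio:.1f}x")        # 4.0x
print(f"estimated per-draft-token speedup: {full / trimmed:.2f}x")  # 1.49x
```

The end-to-end gain is smaller than the raw LM-head reduction because the backbone traffic is unchanged, and the realized MBSU further depends on how much the acceptance rate drops, which is the trade-off the paper evaluates on Spec-Bench.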
Authors

Raghavv Goel (Qualcomm AI Research): efficient LLMs, deep learning, reinforcement learning, control theory
Sudhanshu Agrawal (Qualcomm AI Research)
Mukul Gagrani (Qualcomm AI Research): efficient LLMs, reinforcement learning, combinatorial optimization, stochastic control
Junyoung Park (Qualcomm AI Research)
Yifan Zao (Qualcomm AI Research)
He Zhang (Qualcomm AI Research)
Tian Liu (Qualcomm AI Research)
Yiping Yang (Qualcomm AI Research)
Xin Yuan (Qualcomm AI Research)
Jiuyan Lu (Qualcomm AI Research)
Chris Lott (Qualcomm AI Research)
Mingu Lee (Qualcomm AI Research): AI, ML, LLMs, signal processing

Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.