SpecVLM: Enhancing Speculative Decoding of Video LLMs via Verifier-Guided Token Pruning

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high memory and computational overhead that dense video tokenization imposes on video large language models (Vid-LLMs), this paper proposes SpecVLM, a training-free, two-stage speculative decoding framework. The method builds on the observation that draft models are robust to video token pruning and introduces a verifier-guided, attention-aware pruning strategy: critical spatiotemporal tokens are first identified using attention heatmaps from the verifier (target) model, and the remaining redundant ones are then compressed via spatially uniform sampling. Because the verifier still checks every drafted token, the approach accelerates inference with zero degradation of the generated outputs. Experiments on four mainstream video understanding benchmarks demonstrate up to a 2.68× decoding speedup alongside substantial reductions in resource consumption, establishing an efficient paradigm for video-language reasoning without compromising generation fidelity.
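The two-stage pruning is easy to picture in code. Below is a minimal, hypothetical PyTorch sketch of the idea: Stage I keeps the tokens with the highest verifier attention mass, and Stage II fills the remaining budget with an evenly strided subset of the leftover positions. The tensor shapes, the `keep_ratio`/`stage1_frac` split, and the source of the attention scores are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of verifier-guided two-stage pruning; shapes, the
# keep_ratio / stage1_frac split, and the attention source are assumptions.
import torch

def two_stage_prune(video_tokens, verifier_attn, keep_ratio=0.1, stage1_frac=0.5):
    """video_tokens: (T, N, D) embeddings, T frames x N patches per frame.
    verifier_attn:  (T, N) attention mass each video token received from
                    the verifier (e.g. from the text query's attention map)."""
    T, N, D = video_tokens.shape
    flat = video_tokens.reshape(T * N, D)
    score = verifier_attn.reshape(T * N)

    n_keep = max(1, int(keep_ratio * T * N))   # e.g. keep 10% of tokens
    n_top = max(1, int(stage1_frac * n_keep))

    # Stage I: keep the most informative tokens by verifier attention.
    top_idx = torch.topk(score, n_top).indices

    # Stage II: cover the rest of the budget with a spatially uniform
    # (evenly strided) subset of the remaining positions.
    mask = torch.ones(T * N, dtype=torch.bool)
    mask[top_idx] = False
    rest_idx = mask.nonzero(as_tuple=True)[0]
    n_uni = n_keep - n_top
    if n_uni > 0:
        stride = max(1, rest_idx.numel() // n_uni)
        keep_idx = torch.cat([top_idx, rest_idx[::stride][:n_uni]])
    else:
        keep_idx = top_idx

    keep_idx = torch.sort(keep_idx).values     # restore original token order
    return flat[keep_idx], keep_idx
```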

📝 Abstract
Video large language models (Vid-LLMs) have shown strong capabilities in understanding video content. However, their reliance on dense video token representations introduces substantial memory and computational overhead in both prefilling and decoding. To mitigate the information loss of recent video token reduction methods and accelerate the decoding stage of Vid-LLMs losslessly, we introduce SpecVLM, a training-free speculative decoding (SD) framework tailored for Vid-LLMs that incorporates staged video token pruning. Building on our novel finding that the draft model's speculation exhibits low sensitivity to video token pruning, SpecVLM prunes up to 90% of video tokens, enabling efficient speculation without sacrificing accuracy. To achieve this, it performs a two-stage pruning process: Stage I selects highly informative tokens guided by attention signals from the verifier (target model), while Stage II prunes remaining redundant ones in a spatially uniform manner. Extensive experiments on four video understanding benchmarks demonstrate the effectiveness and robustness of SpecVLM, which achieves up to 2.68× decoding speedup for LLaVA-OneVision-72B and 2.11× speedup for Qwen2.5-VL-32B.
Problem

Research questions and friction points this paper is trying to address.

Accelerating video LLM decoding without information loss
Reducing the computational overhead of dense video token representations
Pruning redundant video tokens while preserving speculation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free speculative decoding for video LLMs (see the sketch after this list)
Two-stage token pruning with verifier guidance
Up to 90% video token reduction without accuracy loss
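As a concrete picture of the training-free SD loop referenced above, here is a minimal greedy speculative-decoding sketch in which the draft model proposes tokens from the pruned video context and the verifier checks them against the full context in a single pass. The model interfaces (`draft_lm`/`verify_lm` returning per-position logits), `gamma`, and `eos_id` are illustrative assumptions, not the paper's implementation.

```python
# Minimal greedy speculative decoding with a pruned draft context; the model
# interfaces, gamma, and eos_id are illustrative assumptions.
import torch

@torch.no_grad()
def speculative_decode(draft_lm, verify_lm, full_ids, pruned_ids,
                       max_new=64, gamma=4, eos_id=2):
    """draft_lm / verify_lm: callables mapping a 1-D LongTensor of ids to
    next-token logits at every position, shape (len, vocab).
    The draft sees only `pruned_ids`; the verifier sees `full_ids`."""
    generated: list[int] = []
    while len(generated) < max_new:
        # 1) Draft gamma tokens greedily on the pruned video context.
        proposal: list[int] = []
        for _ in range(gamma):
            ctx = torch.cat([pruned_ids,
                             torch.tensor(generated + proposal, dtype=torch.long)])
            proposal.append(int(draft_lm(ctx)[-1].argmax()))

        # 2) Verify all gamma proposals in one pass on the full context.
        ctx = torch.cat([full_ids,
                         torch.tensor(generated + proposal, dtype=torch.long)])
        logits = verify_lm(ctx)
        start = len(full_ids) + len(generated)
        targets = logits[start - 1 : start - 1 + gamma].argmax(dim=-1)

        # 3) Accept the longest prefix the verifier agrees with; on the first
        #    mismatch, take the verifier's token instead. Output therefore
        #    matches the verifier's own greedy decoding exactly (lossless).
        n_ok = 0
        for d, t in zip(proposal, targets.tolist()):
            if d != t:
                break
            n_ok += 1
        generated += proposal[:n_ok]
        if n_ok < gamma:
            generated.append(int(targets[n_ok]))
        if generated and generated[-1] == eos_id:
            break
    return generated[:max_new]
```

The speedup comes from step 1 running on a context with up to 90% of the video tokens removed, while step 2 amortizes one full-context verifier pass over up to `gamma` accepted tokens.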
Yicheng Ji
The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Jun Zhang
The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Heming Xia
Natural Language Processing Group, The Hong Kong Polytechnic University
Natural Language Processing · Large Language Models
Jinpeng Chen
City University of Hong Kong
Continual Learning · Multimodal Large Language Models
Lidan Shou
Professor of Computer Science, Zhejiang University
Database · Data & Knowledge Management · ML Systems
Gang Chen
The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Huan Li
The State Key Laboratory of Blockchain and Data Security, Zhejiang University