🤖 AI Summary
This work addresses the inefficiency of autoregressive decoding in video large language models, which stems from the excessive number of video tokens and leads to suboptimal hardware utilization. Existing pruning-based acceleration methods offer limited speedup and often incur significant information loss. To overcome these limitations, we propose the first training-free parallel speculative decoding framework that achieves substantial efficiency gains in long-video scenarios through a two-stage parallelization strategy and unbiased verification-guided pruning. The core innovation lies in vision-aligned speculative decoding, which eliminates positional bias inherent in attention-guided pruning and breaks the mutual waiting bottleneck between draft and target models. Experiments demonstrate 3.36× and 2.42× decoding speedups on LLaVA-OneVision-72B and Qwen2.5-VL-32B, respectively, with draft window extensions of 1.6–1.8× while maintaining high token acceptance rates.
📝 Abstract
Although current Video-LLMs achieve impressive performance on video understanding tasks, their autoregressive decoding efficiency remains constrained by the massive number of video tokens. Visual token pruning can partially ease this bottleneck, yet existing approaches still suffer from information loss and yield only modest decoding acceleration. In this paper, we propose ParallelVLM, a training-free draft-then-verify speculative decoding framework that overcomes both the mutual-waiting and limited-speedup problems between the draft and target models in long-video settings. ParallelVLM features two parallelized stages that maximize hardware utilization, and incorporates an Unbiased Verifier-Guided Pruning strategy that better aligns the draft and target models by eliminating the positional bias of attention-guided pruning. Extensive experiments demonstrate that ParallelVLM effectively expands the draft window by $1.6\sim1.8\times$ with high acceptance lengths, and achieves 3.36$\times$ decoding speedup on LLaVA-OneVision-72B and 2.42$\times$ on Qwen2.5-VL-32B across various video understanding benchmarks, compared with vanilla autoregressive decoding.
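For readers unfamiliar with the draft-then-verify paradigm that ParallelVLM builds on, the following is a minimal toy sketch of speculative decoding. All names and models here are illustrative stand-ins (`draft_model`, `target_model`, a four-token vocabulary); the paper's actual parallelization strategy and verifier-guided pruning are not reproduced.

```python
# Toy sketch of draft-then-verify speculative decoding.
# Assumptions: deterministic greedy "models" over a tiny vocabulary,
# so acceptance reduces to exact token match. Real systems compare
# distributions and batch the verification into one forward pass.
VOCAB = list("abcd")

def draft_model(prefix):
    # Small, fast model: cheaply proposes the next token.
    return VOCAB[sum(map(ord, prefix)) % len(VOCAB)]

def target_model(prefix):
    # Large model: defines the "correct" next token. It mostly agrees
    # with the draft here, so some proposals get accepted.
    if len(prefix) % 5 == 4:
        return VOCAB[(sum(map(ord, prefix)) + 1) % len(VOCAB)]
    return VOCAB[sum(map(ord, prefix)) % len(VOCAB)]

def speculative_decode(prompt, window=4, max_new=12):
    """Draft `window` tokens, then verify them against the target.

    Accepted tokens are kept; at the first mismatch we fall back to
    the target's token and start a new draft window from there.
    """
    out = prompt
    while len(out) - len(prompt) < max_new:
        # 1) Draft phase: propose a window of tokens autoregressively.
        drafts, ctx = [], out
        for _ in range(window):
            tok = draft_model(ctx)
            drafts.append(tok)
            ctx += tok
        # 2) Verify phase: the target checks each drafted token.
        for tok in drafts:
            expected = target_model(out)
            if tok == expected:
                out += tok        # accept the drafted token
            else:
                out += expected   # reject: keep the target's token
                break             # restart drafting from here
            if len(out) - len(prompt) >= max_new:
                break
    return out[len(prompt):]

print(speculative_decode("video:", window=4, max_new=12))
```

The speedup comes from the target model verifying a whole drafted window per step instead of generating one token at a time; longer windows with high acceptance rates, which is what ParallelVLM's pruning and parallelization target, amortize the target's cost over more output tokens.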