AI Summary
This work addresses the significant performance degradation commonly observed in video large language models when applying speculative decoding, primarily caused by attention dilution and negative visual gain. To overcome these limitations, the authors propose a novel paradigm that fully offloads visual computation to the target model through text-anchored windowed attention and a visual semantic snapshot mechanism. The draft model is trained using intermediate-layer visual states as bridges and incorporates a multi-token prediction strategy to mitigate the distribution shift between training and inference. By eliminating redundant raw visual inputs and leveraging the model's internalized visual semantics, the method achieves an average 2.82× speedup on long videos containing 25k visual tokens, substantially alleviating performance degradation and enabling real-time long-form video understanding.
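For context, the speedup comes from the standard speculative-decoding loop that this work builds on: a cheap draft model proposes several tokens, and the target model verifies them in one pass, accepting the longest matching prefix. The sketch below is a minimal, self-contained illustration of that loop (greedy-verification variant); `draft_next` and `target_next` are hypothetical toy stand-ins, not the paper's models.

```python
def draft_next(seq):
    # Toy draft model: predicts (last token + 1) mod 10.
    return (seq[-1] + 1) % 10

def target_next(seq):
    # Toy target model: agrees with the draft except after token 5.
    return 0 if seq[-1] == 5 else (seq[-1] + 1) % 10

def speculative_step(seq, k=4):
    """Draft k tokens, then verify them against the target model.

    Returns the sequence extended by every accepted draft token plus
    one correction (on mismatch) or bonus token from the target model.
    """
    drafted, ctx = [], list(seq)
    for _ in range(k):
        tok = draft_next(ctx)
        drafted.append(tok)
        ctx.append(tok)
    # Verification: in a real system this is ONE batched target forward
    # pass over all drafted positions; here we check them one by one.
    accepted = list(seq)
    for tok in drafted:
        expected = target_next(accepted)
        if tok == expected:
            accepted.append(tok)       # draft token accepted
        else:
            accepted.append(expected)  # mismatch: take the target's token
            return accepted
    # All k drafts accepted: the target's pass also yields one bonus token.
    accepted.append(target_next(accepted))
    return accepted

print(speculative_step([3]))  # three drafts accepted, then a correction
```

Each call thus emits between one and k+1 tokens per target forward pass, which is where the wall-clock speedup comes from as long as acceptance stays high.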
Abstract
Although speculative decoding is widely used to accelerate inference in Vision-Language Models (VLMs), it suffers severe performance collapse when applied to Video Large Language Models (Vid-LLMs): the draft model falls into the trap of attention dilution and negative visual gain due to key-value cache explosion and context-window mismatches. We observe a visual semantic internalization phenomenon in Vid-LLMs: critical visual semantics are implicitly encoded into text hidden states during deep-layer interactions, which renders raw visual inputs structurally redundant during deep inference. To address this, we propose the Sparrow framework, which first applies visually aware, text-anchored window attention with hidden-state reuse to offload visual computation entirely to the target model, and then leverages intermediate-layer visual-state bridging to train the draft model on semantically rich intermediate states, thereby filtering out low-level visual noise. Additionally, a multi-token prediction strategy is introduced to bridge the training-inference distribution shift. Experiments show that Sparrow achieves an average speedup of 2.82× even with 25k visual tokens, effectively resolving the performance degradation on long sequences and offering a practical solution for real-time long-video tasks.
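The text-anchored window attention idea can be made concrete with a small mask-construction sketch: the draft model never attends to raw visual positions, only to a causal window of text anchors (whose hidden states, reused from the target model, already carry the internalized visual semantics). The token layout, window size, and helper name below are illustrative assumptions, not Sparrow's actual configuration.

```python
def text_anchored_mask(is_visual, window=4):
    """Build a boolean attention mask for a draft model.

    is_visual: list of bools, one per position (True = raw visual token).
    Returns an n x n nested list; mask[q][k] == True means query position q
    may attend to key position k. Visual positions are never attendable,
    so the draft's cost is independent of the visual token count.
    """
    n = len(is_visual)
    text_pos = [i for i in range(n) if not is_visual[i]]
    mask = [[False] * n for _ in range(n)]
    for q in range(n):
        # Causal window: only the last `window` text anchors at or before q.
        for p in [p for p in text_pos if p <= q][-window:]:
            mask[q][p] = True
    return mask

# Example: 3 raw visual tokens followed by 6 text tokens.
m = text_anchored_mask([True] * 3 + [False] * 6, window=4)
```

In this toy mask, no query can ever reach columns 0-2 (the visual tokens), and the last query attends only to text positions 5-8, so attention over the drafted text is never diluted by a long visual prefix.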