🤖 AI Summary
This work addresses the computational inefficiency of Video Large Language Models (VLLMs) when processing high volumes of visual tokens, a challenge compounded by existing acceleration methods that inadequately exploit spatiotemporal correlations. To overcome this limitation, the authors propose FlashVID, a training-free inference acceleration framework that jointly models spatial and temporal redundancy without requiring any retraining. The approach introduces two key components, Attention- and Diversity-based Token Selection (ADTS) and Tree-based Spatiotemporal Token Merging (TSTM), enabling fine-grained, plug-and-play token compression. Remarkably, the method retains 99.1% of the original performance while keeping only 10% of the visual tokens, supports a tenfold increase in input frame count on Qwen2.5-VL, and achieves a relative performance gain of 8.6% within the same computational budget.
📝 Abstract
Although Video Large Language Models (VLLMs) have shown remarkable capabilities in video understanding, they must process high volumes of visual tokens, causing significant computational inefficiency. Existing VLLM acceleration frameworks usually compress spatial and temporal redundancy independently, overlooking spatiotemporal relationships and thereby leading to suboptimal spatiotemporal compression. Because of the dynamic nature of video, highly correlated visual features are likely to change in spatial position, scale, orientation, and other attributes over time. Building on this insight, we introduce FlashVID, a training-free inference acceleration framework for VLLMs. Specifically, FlashVID utilizes Attention- and Diversity-based Token Selection (ADTS) to select the most representative tokens for a basic video representation, then applies Tree-based Spatiotemporal Token Merging (TSTM) for fine-grained elimination of spatiotemporal redundancy. Extensive experiments on three representative VLLMs across five video understanding benchmarks demonstrate the effectiveness and generalization of our method. Notably, by retaining only 10% of visual tokens, FlashVID preserves 99.1% of the performance of LLaVA-OneVision. Consequently, FlashVID can serve as a training-free, plug-and-play module for extending long video inputs, enabling a 10x increase in video frame input to Qwen2.5-VL and yielding a relative improvement of 8.6% within the same computational budget. Code is available at https://github.com/Fanziyang-v/FlashVID.
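The abstract describes ADTS only at a high level: keep tokens that both receive high attention and add diversity to the kept set. The paper's exact formulation is not reproduced here, but a minimal greedy sketch of attention- plus diversity-based selection might look like the following (the function name, the `alpha` trade-off weight, and the use of cosine similarity as the diversity measure are all illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def select_tokens(features, attn_scores, keep_ratio=0.1, alpha=0.5):
    """Greedily keep `keep_ratio` of tokens, trading off attention vs. diversity.

    features:    (N, D) array of visual token embeddings.
    attn_scores: (N,) array, e.g. attention each token receives from the query.
    Returns sorted indices of the selected tokens.
    """
    n = features.shape[0]
    k = max(1, int(n * keep_ratio))
    # L2-normalize so dot products become cosine similarities.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    selected = [int(np.argmax(attn_scores))]  # seed with the top-attention token
    for _ in range(k - 1):
        # Penalize tokens similar to anything already selected (max similarity).
        sim = np.max(f @ f[selected].T, axis=1)
        score = alpha * attn_scores - (1.0 - alpha) * sim
        score[selected] = -np.inf  # never re-pick a selected token
        selected.append(int(np.argmax(score)))
    return np.array(sorted(selected))
```

Under this sketch, the retained 10% of tokens would then be the input to the merging stage (TSTM), which further collapses redundant tokens across frames.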