🤖 AI Summary
To address the quadratic computational complexity that excessively long visual token sequences impose on Video Large Language Models (VLLMs) in long-video processing, this paper proposes a training-free, dynamic token compression method. The approach leverages a previously overlooked query-conditioned keyframe prior encoded in VLLM self-attention to assess frame-level token importance, enabling semantic-aware, continuously adjustable compression that overcomes the limitations of conventional binary keyframe sampling. Compatible with mainstream acceleration frameworks such as VisionZip and FastV, the method achieves a 4.3× inference speedup on state-of-the-art models including LLaVA-OneVision and Qwen2.5-VL while preserving accuracy. It establishes a new Pareto-optimal trade-off between efficiency and fidelity and requires no model modification or fine-tuning, offering true plug-and-play deployment.
📝 Abstract
Recent advances in Video Large Language Models (VLLMs) have achieved remarkable video understanding capabilities, yet face critical efficiency bottlenecks due to quadratic computational growth with the lengthy visual token sequences of long videos. While existing keyframe sampling methods can improve temporal modeling efficiency, they introduce additional computational cost before feature encoding, and their binary frame-selection paradigm proves suboptimal. In this work, we therefore propose Dynamic Token compression via LLM-guided Keyframe prior (DyToK), a training-free paradigm that enables dynamic token compression by harnessing VLLMs' inherent attention mechanisms. Our analysis reveals that VLLM attention layers naturally encode query-conditioned keyframe priors, by which DyToK dynamically adjusts per-frame token retention ratios, prioritizing semantically rich frames while suppressing redundancies. Extensive experiments demonstrate that DyToK achieves state-of-the-art efficiency-accuracy trade-offs. DyToK is plug-and-play compatible with existing compression methods, such as VisionZip and FastV, attaining 4.3x faster inference while preserving accuracy across multiple VLLMs, such as LLaVA-OneVision and Qwen2.5-VL. Code is available at https://github.com/yu-lin-li/DyToK.
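The abstract's core idea, scoring frames by the attention mass that query tokens place on their visual tokens and then allocating a continuous (rather than binary) token budget per frame, can be sketched as follows. This is a minimal illustration under assumed shapes and names, not the authors' implementation: the softmax-based budget allocation and the function name `frame_budget_allocation` are hypothetical, standing in for whatever retention rule DyToK actually uses.

```python
import numpy as np


def frame_budget_allocation(attn, num_frames, tokens_per_frame,
                            total_budget, temperature=1.0):
    """Allocate a visual-token budget across frames from attention weights.

    attn: (num_query_tokens, num_frames * tokens_per_frame) array of
          attention weights from query text tokens to visual tokens,
          as could be read out of an LLM self-attention layer.
    Returns a list giving how many tokens to retain per frame
    (assumes total_budget >= num_frames so each frame keeps >= 1 token).
    """
    # Frame-level importance: total attention mass each frame's tokens receive.
    per_token = attn.mean(axis=0)                                   # (F*T,)
    per_frame = per_token.reshape(num_frames, tokens_per_frame).sum(axis=1)

    # Continuous, query-conditioned allocation instead of binary selection:
    # a softmax over frame scores (an assumption for this sketch).
    w = np.exp(per_frame / temperature)
    w /= w.sum()
    raw = w * total_budget

    # Round to integers, keeping at least 1 token per frame and an exact budget.
    keep = np.maximum(1, np.floor(raw)).astype(int)
    remainder = total_budget - keep.sum()
    if remainder > 0:
        # Give leftover tokens to frames with the largest fractional parts.
        for i in np.argsort(-(raw - np.floor(raw)))[:remainder]:
            keep[i] += 1
    elif remainder < 0:
        # Reclaim tokens from frames with the smallest fractional parts.
        for i in np.argsort(raw - np.floor(raw)):
            if remainder == 0:
                break
            if keep[i] > 1:
                keep[i] -= 1
                remainder += 1
    return keep.tolist()
```

Within each frame, the retained tokens could then be chosen by any existing token-compression criterion (e.g. VisionZip- or FastV-style scoring), which is what makes the frame-level allocation compose with those methods.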