Less Is More, but Where? Dynamic Token Compression via LLM-Guided Keyframe Prior

📅 2025-12-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the quadratic computational complexity that excessively long visual token sequences impose on Video Large Language Models (VLLMs) when processing long videos, this paper proposes a training-free, dynamic token compression method. The approach leverages a previously unrecognized, query-driven keyframe prior encoded in VLLM self-attention to assess frame-level token importance, enabling semantic-aware, continuously adjustable compression that overcomes the limitations of conventional binary keyframe sampling. Compatible with mainstream acceleration frameworks such as VisionZip and FastV, the method achieves a 4.3× inference speedup on state-of-the-art models including LLaVA-OneVision and Qwen2.5-VL with no accuracy degradation. It establishes a new Pareto-optimal efficiency-fidelity trade-off and requires no model modification or fine-tuning, offering true plug-and-play deployment.

📝 Abstract
Recent advances in Video Large Language Models (VLLMs) have achieved remarkable video understanding capabilities, yet face critical efficiency bottlenecks due to quadratic computational growth with the lengthy visual token sequences of long videos. While existing keyframe sampling methods can improve temporal modeling efficiency, they introduce additional computational cost before feature encoding, and their binary frame selection paradigm proves suboptimal. In this work, we therefore propose Dynamic Token compression via LLM-guided Keyframe prior (DyToK), a training-free paradigm that enables dynamic token compression by harnessing VLLMs' inherent attention mechanisms. Our analysis reveals that VLLM attention layers naturally encode query-conditioned keyframe priors, by which DyToK dynamically adjusts per-frame token retention ratios, prioritizing semantically rich frames while suppressing redundancies. Extensive experiments demonstrate that DyToK achieves state-of-the-art efficiency-accuracy trade-offs. DyToK is plug-and-play compatible with existing compression methods, such as VisionZip and FastV, attaining 4.3x faster inference while preserving accuracy across multiple VLLMs, such as LLaVA-OneVision and Qwen2.5-VL. Code is available at https://github.com/yu-lin-li/DyToK .
Problem

Research questions and friction points this paper is trying to address.

Reduces computational cost in video language models
Improves token compression without additional training
Enhances efficiency while maintaining model accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic token compression via LLM-guided keyframe prior
Training-free paradigm using VLLMs' inherent attention mechanisms
Adjusts per-frame token retention ratios to prioritize rich frames
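The core idea above, replacing binary keyframe selection with continuous per-frame retention ratios derived from attention scores, can be illustrated with a small sketch. This is not the paper's implementation; the scoring input and the proportional-allocation-with-floor scheme are assumptions for illustration only.

```python
import numpy as np

def allocate_token_budget(frame_scores, total_budget, min_per_frame=1):
    """Distribute a global visual-token budget across frames in proportion
    to attention-derived importance scores (hypothetical sketch)."""
    frame_scores = np.asarray(frame_scores, dtype=np.float64)
    # Softmax over frame scores -> importance weights summing to 1.
    w = np.exp(frame_scores - frame_scores.max())
    w /= w.sum()
    # Proportional allocation with a per-frame floor, so no frame is
    # dropped entirely (continuous rather than binary keyframe selection).
    raw = w * (total_budget - min_per_frame * len(w)) + min_per_frame
    budget = np.floor(raw).astype(int)
    # Hand out the rounding remainder to the highest-importance frames.
    leftover = total_budget - budget.sum()
    for i in np.argsort(-w)[:leftover]:
        budget[i] += 1
    return budget

# e.g. mean query->frame attention mass for four frames (made-up numbers)
scores = [0.1, 2.0, 0.3, 1.5]
print(allocate_token_budget(scores, total_budget=40))  # → [ 3 20  4 13]
```

Semantically rich frames (here, frames 1 and 3) retain many more tokens, while low-scoring frames are compressed aggressively but never removed outright; each frame's kept tokens could then be chosen by any token-level compressor such as VisionZip or FastV.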
Yulin Li
The Hong Kong University of Science and Technology
Optimization Theory, Robot Motion Planning & Control
Haokun Gui
The Hong Kong University of Science and Technology
Ziyang Fan
Harbin Institute of Technology (Shenzhen)
Junjie Wang
Harbin Institute of Technology (Shenzhen)
Bin Kang
University of Chinese Academy of Sciences
Bin Chen
Harbin Institute of Technology (Shenzhen), University of Chinese Academy of Sciences
Zhuotao Tian
Professor, Harbin Institute of Technology (Shenzhen)
Vision-language Model, Multi-modal Perception, Computer Vision