ForestPrune: High-ratio Visual Token Compression for Video Multimodal Large Language Models via Spatial-Temporal Forest Modeling

📅 2026-03-24
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the lack of efficient, high-ratio visual token compression methods in current video multimodal large language models, which struggle to balance performance and computational efficiency. The authors propose ForestPrune, a training-free token pruning approach that introduces, for the first time, a cross-frame spatiotemporal semantic constraint forest structure to enable global modeling of video content. By integrating semantic, spatial, and temporal constraints, and by evaluating token importance based on tree depth and node roles, ForestPrune reaches globally optimal pruning decisions. Experiments show that on LLaVA-OneVision, retaining only 10% of visual tokens preserves 95.8% of average accuracy, while on MLVU ForestPrune outperforms FrameFusion by 10.1% in accuracy and reduces pruning time by 81.4%.
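The summary above describes the method without pseudocode. As a rough illustration only, the following minimal sketch shows what a cross-frame spatiotemporal constraint forest could look like, assuming cosine similarity as the semantic constraint and a small grid window as the spatial constraint; the function name, `tau_sem`, and `spatial_radius` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def build_token_forest(tokens, grid_hw, tau_sem=0.8, spatial_radius=1):
    """Illustrative sketch of ForestPrune-style forest construction.

    tokens: (T, N, D) visual tokens for T frames with N tokens per frame.
    Each token in frame t is attached as a child of its most similar token
    in frame t-1, subject to a semantic threshold (tau_sem) and a spatial
    locality window (spatial_radius, in grid cells); tokens that satisfy
    neither constraint start new trees, i.e., become forest roots.
    """
    T, N, D = tokens.shape
    H, W = grid_hw
    ys = torch.arange(N) // W  # row of each token in the H x W grid
    xs = torch.arange(N) % W   # column of each token in the grid

    parent = torch.full((T, N), -1, dtype=torch.long)  # -1 marks a root
    feats = F.normalize(tokens, dim=-1)
    for t in range(1, T):
        # semantic constraint: cosine similarity to every token in frame t-1
        sim = feats[t] @ feats[t - 1].T  # (N, N)
        # spatial constraint: only allow links within a small grid window
        near = (ys[:, None] - ys[None, :]).abs() <= spatial_radius
        near &= (xs[:, None] - xs[None, :]).abs() <= spatial_radius
        sim = sim.masked_fill(~near, float("-inf"))
        best_sim, best_idx = sim.max(dim=1)
        # temporal constraint is implicit: parents come from the previous frame
        linked = best_sim >= tau_sem
        parent[t, linked] = best_idx[linked]
    return parent  # per-token parent pointers defining the forest
```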

📝 Abstract
Due to the great savings in computation and memory overhead, token compression has become a research hotspot for MLLMs and has achieved remarkable progress in image-language tasks. However, for video, existing methods still fall short of high-ratio token compression. We attribute this shortcoming to insufficient modeling of temporal and continuous video content, and propose a novel, training-free token pruning method for video MLLMs, termed ForestPrune, which achieves effective, high-ratio pruning via Spatial-Temporal Forest Modeling. In practice, ForestPrune constructs token forests across video frames based on semantic, spatial, and temporal constraints, enabling an overall comprehension of the video. Afterwards, ForestPrune evaluates the importance of token trees and nodes based on tree depth and node roles, thereby obtaining a globally optimal pruning decision. To validate ForestPrune, we apply it to two representative video MLLMs, namely LLaVA-Video and LLaVA-OneVision, and conduct extensive experiments on a suite of video benchmarks. The results not only demonstrate its great effectiveness for video MLLMs, e.g., retaining 95.8% average accuracy while reducing 90% of tokens for LLaVA-OneVision, but also its superior performance and efficiency over compared token compression methods, e.g., +10.1% accuracy on MLVU and -81.4% pruning time relative to FrameFusion on LLaVA-Video.
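To make the depth-and-role scoring from the abstract concrete, here is a hedged continuation of the sketch above: a hypothetical pass that derives each node's depth and root/leaf role from the parent pointers and keeps the top-scoring tokens at a 10% retention ratio, matching the paper's high-ratio setting. The depth penalty, `root_bonus`, and leaf weight are invented for illustration; the paper's actual importance measure may differ.

```python
import torch

def prune_by_forest(parent, keep_ratio=0.1, root_bonus=1.0):
    """Illustrative scoring and pruning over the forest sketched above.

    parent: (T, N) parent indices into the previous frame, -1 for roots.
    Deep nodes (long-tracked, temporally redundant content) score lower;
    roots (newly appearing content) and leaves (most recent occurrence
    of a tracked region) score higher. Returns a (T, N) keep mask.
    """
    T, N = parent.shape
    depth = torch.zeros(T, N)
    for t in range(1, T):
        linked = parent[t] >= 0
        # a child's depth is its parent's depth plus one; roots stay at 0
        depth[t, linked] = depth[t - 1, parent[t, linked]] + 1

    # a leaf is a node that no token in the next frame points back to
    is_leaf = torch.ones(T, N, dtype=torch.bool)
    for t in range(1, T):
        linked = parent[t] >= 0
        is_leaf[t - 1, parent[t, linked]] = False

    # heuristic score: penalize depth (redundancy), reward roots and leaves
    score = -depth + root_bonus * (parent == -1).float() + 0.5 * is_leaf.float()

    k = max(1, int(keep_ratio * T * N))
    keep = torch.zeros(T * N, dtype=torch.bool)
    keep[score.flatten().topk(k).indices] = True
    return keep.view(T, N)
```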
Problem

Research questions and friction points this paper is trying to address.

token compression
video multimodal large language models
spatial-temporal modeling
visual tokens
high-ratio compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

token compression
video multimodal LLMs
spatial-temporal modeling
training-free pruning
ForestPrune
Shaobo Ju
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China.
Baiyang Song
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China.
Tao Chen
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China.
Jiapeng Zhang
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China.
Qiong Wu
Xiamen University
Computer Vision · Person Re-Identification · Vision-Language
Chao Chang
National University of Defense Technology.
HuaiXi Wang
National University of Defense Technology.
Yiyi Zhou
Xiamen University
deep learning · language and vision
Rongrong Ji
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China.