FrameFusion: Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models

📅 2024-12-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Long-video, high-resolution inputs cause explosive growth in the number of visual tokens and severe inter-frame redundancy in Large Vision-Language Models (LVLMs). Method: We propose a collaborative compression framework that integrates similarity-driven token clustering and fusion with importance-weighted dynamic pruning. It leverages multi-layer similarity analysis to enable adaptive frame-level token aggregation and dynamic pruning, and is compatible with mainstream LVLM architectures (e.g., LLaVA-Video, MiniCPM-V). Contribution/Results: We first reveal a critical property of LVLMs: visual token similarity distributions condense, and their rankings stabilize, as network depth increases. This insight enables the first incorporation of similarity-based fusion into token reduction, overcoming the limitations of pure importance-based pruning. Experiments on multiple video understanding, question-answering, and retrieval benchmarks show a 70% reduction in visual tokens, 3.4-4.4x LLM inference speedup, and 1.6-1.9x end-to-end speedup, with average performance degradation under 3%.

📝 Abstract
The increasing demand to process long and high-resolution videos significantly burdens Large Vision-Language Models (LVLMs) due to the enormous number of visual tokens. Existing token reduction methods primarily focus on importance-based token pruning, which overlooks the redundancy caused by frame resemblance and repetitive visual elements. In this paper, we analyze the high vision token similarities in LVLMs. We reveal that token similarity distribution condenses as layers deepen while maintaining ranking consistency. Leveraging the unique properties of similarity over importance, we introduce FrameFusion, a novel approach that combines similarity-based merging with importance-based pruning for better token reduction in LVLMs. FrameFusion identifies and merges similar tokens before pruning, opening up a new perspective for token reduction. We evaluate FrameFusion on diverse LVLMs, including Llava-Video-{7B,32B,72B}, and MiniCPM-V-8B, on video understanding, question-answering, and retrieval benchmarks. Experiments show that FrameFusion reduces vision tokens by 70%, achieving 3.4-4.4x LLM speedups and 1.6-1.9x end-to-end speedups, with an average performance impact of less than 3%. Our code is available at https://github.com/thu-nics/FrameFusion.
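The merge-then-prune idea from the abstract can be sketched roughly as follows. This is a hedged PyTorch illustration, not the paper's implementation: the adjacent-pair merging rule, the use of token L2 norm as an importance proxy, and the function name `fuse_and_prune` are all assumptions made here for clarity (the actual method analyzes similarity across layers and scores importance differently).

```python
import torch
import torch.nn.functional as F

def fuse_and_prune(tokens, sim_threshold=0.9, keep_ratio=0.3):
    """Illustrative merge-then-prune over an (N, D) visual token sequence.

    Step 1 (similarity-based fusion): average adjacent token pairs whose
    cosine similarity exceeds `sim_threshold`.
    Step 2 (importance-based pruning): keep the top `keep_ratio` fraction
    of the fused tokens by an importance score (here, the L2 norm stands
    in for an attention-derived score).
    """
    # --- fusion: cosine similarity between neighboring tokens ---
    normed = F.normalize(tokens, dim=-1)
    sim = (normed[:-1] * normed[1:]).sum(-1)  # shape (N-1,)
    fused, i = [], 0
    while i < tokens.size(0):
        if i + 1 < tokens.size(0) and sim[i] > sim_threshold:
            fused.append((tokens[i] + tokens[i + 1]) / 2)  # merge the pair
            i += 2
        else:
            fused.append(tokens[i])  # keep as-is
            i += 1
    fused = torch.stack(fused)

    # --- pruning: keep the most "important" fused tokens, in order ---
    importance = fused.norm(dim=-1)  # proxy for attention importance
    k = max(1, int(fused.size(0) * keep_ratio))
    keep = importance.topk(k).indices.sort().values  # preserve sequence order
    return fused[keep]
```

Merging before pruning is the key ordering: redundant near-duplicate tokens (e.g., from similar frames) are collapsed first, so the importance budget is spent on distinct content rather than on many copies of the same region.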
Problem

Research questions and friction points this paper is trying to address.

Information Overload
Similarity and Redundancy
Efficiency in Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

FrameFusion
Information Similarity
Efficiency Enhancement