🤖 AI Summary
Extreme token compression (e.g., >99.9% reduction) in video large language models (video LLMs) severely distorts spatiotemporal modeling, degrading long-video understanding.
Method: We propose a dynamic video token representation framework: (1) formally define the novel task of *extreme short-token compression*; (2) decouple visual content from grid-level motion to construct a compact token backbone and a token dynamics graph; (3) introduce cross-dynamics attention to fuse motion semantics without increasing token count. Our method integrates token decoupling, object-level clustering, grid-motion representation, and adaptive/fixed-length compression subtasks.
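The fusion step above can be sketched in a few lines. The following is a minimal NumPy illustration of the cross-dynamics attention idea, where queries come from the compact token base and keys/values come from the token dynamics map, so the fused output keeps the base's token count. The sizes (K, M, d), the single-head form, and the residual connection are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K=8 object-level base tokens, M=64 flattened
# grid-motion tokens, embedding dimension d=32 (all assumed for illustration).
K, M, d = 8, 64, 32
token_base = rng.standard_normal((K, d))    # compact visual content
dynamics_map = rng.standard_normal((M, d))  # grid-level motion features

def cross_dynamics_attention(base, motion):
    """Fuse motion semantics into the token base via cross-attention:
    queries from the base, keys/values from the dynamics map, so the
    output length stays K regardless of how many motion tokens exist."""
    scores = base @ motion.T / np.sqrt(base.shape[1])            # (K, M)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)                # row softmax
    return base + weights @ motion                               # (K, d)

fused = cross_dynamics_attention(token_base, dynamics_map)
print(fused.shape)  # (8, 32): token count unchanged by the fusion
```

The key design point this sketch captures is that motion information only ever enters through the attention weights and values, so the sequence handed to the LLM never grows beyond the K base tokens.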
Results: The framework reduces theoretical complexity by compressing tokens to just 0.07% of the original number, while incurring only a 1.13% performance drop on downstream tasks and achieving substantial throughput gains—enabling efficient, high-fidelity long-video–language understanding.
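To make the 0.07% figure concrete, here is the arithmetic for a hypothetical long video; the frame count and per-frame patch grid are assumed for illustration and are not taken from the paper.

```python
# Illustrative arithmetic only: frame count and patch grid are assumptions.
frames = 512
tokens_per_frame = 24 * 24            # e.g., a 24x24 patch grid per frame
original = frames * tokens_per_frame  # 294,912 tokens before compression

ratio = 0.0007                        # 0.07% of the original token count
compressed = round(original * ratio)  # roughly 206 tokens fed to the LLM
print(original, compressed)           # 294912 206
```

At this scale the quadratic attention cost over the video tokens effectively vanishes, which is where the throughput gains come from.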
📝 Abstract
Token-based video representation has emerged as a promising approach for enabling large language models to interpret video content. However, existing token reduction techniques, such as token pruning and token merging, often disrupt essential spatiotemporal positional embeddings and fail to adequately balance computational efficiency against token count. Consequently, these methods still produce relatively long token sequences, limiting their applicability in scenarios that demand extreme token compression, such as video large language models. In this paper, we introduce the novel task of extreme short token reduction, which aims to represent extensive video sequences with a minimal number of tokens. To address this challenge, we propose Token Dynamics, a new video representation framework that dynamically reduces token count while preserving spatiotemporal coherence. Specifically, we disentangle video representations by separating visual embeddings from grid-level motion information, structuring them into (1) a concise token base, created by clustering tokens that describe object-level content, and (2) a token dynamics map, capturing detailed spatiotemporal motion patterns across grids. Furthermore, we introduce a cross-dynamics attention mechanism that integrates motion features into the token base without increasing token length, thereby maintaining compactness and spatiotemporal integrity. Experiments demonstrate a reduction of token count to merely 0.07% of the original, with only a minor performance drop of 1.13%. Additionally, we propose two novel subtasks within extreme token reduction, fixed-length and adaptive-length compression, both of which effectively represent long token sequences for video-language tasks. Our method offers significantly lower theoretical complexity, fewer tokens, and higher throughput, providing an efficient solution for video LLMs.