🤖 AI Summary
Video large language models (V-LLMs) achieve strong video understanding but suffer quadratic computational overhead as spatio-temporal token counts grow. To address this, the authors propose STTM, a training-free spatio-temporal token merging method. STTM combines top-down quadtree-based spatial partitioning, which yields multi-granularity spatial tokens per frame, with directed pairwise merging across the temporal dimension. Because the merging is query-agnostic, the KV cache can be reused across different questions about the same video, and no fine-tuning is required. Evaluated on six video question answering benchmarks, STTM outperforms existing token reduction methods: under a 50% token budget it achieves a 2× speed-up with only a 0.5% accuracy drop, and under a 30% budget a 3× speed-up with about a 2% drop.
📝 Abstract
Video large language models (LLMs) achieve strong video understanding by leveraging a large number of spatio-temporal tokens, but suffer from quadratic computational scaling with token count. To address this, we propose a training-free spatio-temporal token merging method, named STTM. Our key insight is to exploit local spatial and temporal redundancy in video data, which has been overlooked in prior work. STTM first transforms each frame into multi-granular spatial tokens using a coarse-to-fine search over a quadtree structure, then performs directed pairwise merging across the temporal dimension. This decomposed merging approach outperforms existing token reduction methods across six video QA benchmarks. Notably, STTM achieves a 2$\times$ speed-up with only a 0.5% accuracy drop under a 50% token budget, and a 3$\times$ speed-up with just a 2% drop under a 30% budget. Moreover, STTM is query-agnostic, allowing KV cache reuse across different questions for the same video. The project page is available at https://www.jshyun.me/projects/sttm.
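The two-stage idea in the abstract, coarse-to-fine quadtree merging within a frame followed by directed pairwise merging across frames, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the region-splitting test (`split_thresh` on token deviation from the region mean) and the cosine-similarity drop rule (`sim_thresh`) are assumed stand-ins for whatever criteria STTM actually uses.

```python
import numpy as np

def quadtree_tokens(feat, split_thresh):
    """Coarse-to-fine spatial merging over a quadtree (simplified sketch).

    feat: (H, W, C) grid of patch tokens for one frame.
    If every token in a region stays close to the region mean, the region
    collapses into a single coarse token; otherwise recurse into quadrants,
    producing multi-granularity tokens.
    """
    H, W, C = feat.shape
    flat = feat.reshape(-1, C)
    mean = flat.mean(axis=0)
    max_dev = np.linalg.norm(flat - mean, axis=1).max()
    if max_dev <= split_thresh or (H == 1 and W == 1):
        return [mean]  # region is redundant (or irreducible): keep one token
    hs, ws = max(H // 2, 1), max(W // 2, 1)
    tokens = []
    for i in range(0, H, hs):
        for j in range(0, W, ws):
            tokens += quadtree_tokens(feat[i:i + hs, j:j + ws], split_thresh)
    return tokens

def temporal_merge(frame_tokens, sim_thresh):
    """Directed pairwise temporal merging (simplified sketch).

    For each consecutive frame pair (t-1, t), tokens of frame t that are
    near-duplicates (high cosine similarity) of some token in frame t-1
    are dropped, i.e. merged backward into the earlier frame.
    """
    def normed(ts):
        a = np.stack(ts)
        return a / np.clip(np.linalg.norm(a, axis=1, keepdims=True), 1e-8, None)

    merged = [list(frame_tokens[0])]
    for prev, cur in zip(frame_tokens[:-1], frame_tokens[1:]):
        sim = normed(cur) @ normed(prev).T      # cosine similarity matrix
        keep = sim.max(axis=1) < sim_thresh     # drop near-duplicate tokens
        merged.append([t for t, k in zip(cur, keep) if k])
    return merged
```

Because neither stage depends on the question, the reduced token set (and hence the KV cache built from it) can be computed once per video and reused across queries, which is the query-agnostic property the abstract highlights.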