🤖 AI Summary
This work addresses the limitations of existing multimodal large language models (MLLMs) in long video understanding, which suffer from high computational overhead and suboptimal frame sampling strategies. The authors propose a training-free, efficient inference framework that innovatively integrates a multi-query reasoning mechanism with a clip-level slow-fast frame sampling strategy to adaptively fuse local details and global context. Without incurring any additional training cost, the method significantly enhances long video comprehension performance, achieving up to a 6.9% absolute improvement in average accuracy across three benchmarks—MLVU, LongVideoBench, and VideoMME—while matching or surpassing prior approaches at only 50% of the inference time.
📝 Abstract
Recent progress in multi-modal large language models (MLLMs) has significantly advanced video understanding. However, their performance on long-form videos remains limited by computational constraints and suboptimal frame selection. We present Think-Clip-Sample (TCS), a training-free framework that enhances long video understanding through two key components: (i) Multi-Query Reasoning, which generates multiple queries to capture complementary aspects of the question and video; and (ii) Clip-level Slow-Fast Sampling, which adaptively balances dense local details against sparse global context. Extensive experiments on MLVU, LongVideoBench, and VideoMME demonstrate that TCS consistently improves performance across different MLLMs, raising accuracy by up to 6.9%, and can achieve comparable accuracy with 50% less inference time, highlighting both the efficiency and the efficacy of TCS for long video understanding.
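The abstract describes clip-level slow-fast sampling only at a high level, and the paper's actual procedure is not given here. The sketch below is a hypothetical illustration of the general idea: a sparse "fast" pass of uniformly spaced frames supplies global context, while a dense "slow" pass concentrates the remaining frame budget inside the clip judged most relevant. The `clip_scores` input (e.g., per-clip relevance produced by the multi-query reasoning step), the `slow_frac` split, and all function names are assumptions for illustration, not the authors' implementation.

```python
def uniform_indices(start, end, k):
    """k evenly spaced integer frame indices in the half-open range [start, end)."""
    if k <= 0 or end <= start:
        return []
    step = (end - start) / k
    return [int(start + step * (i + 0.5)) for i in range(k)]

def slow_fast_sample(num_frames, clip_len, clip_scores, budget, slow_frac=0.5):
    """Hypothetical clip-level slow-fast sampler (illustrative, not the paper's code).

    - 'fast' pass: sparse uniform frames over the whole video (global context)
    - 'slow' pass: dense frames inside the highest-scoring clip (local detail)
    clip_scores is assumed to hold one relevance score per clip of clip_len frames.
    """
    n_slow = int(budget * slow_frac)
    n_fast = budget - n_slow
    # Fast pass: sparse, covers the entire video.
    fast = uniform_indices(0, num_frames, n_fast)
    # Slow pass: dense, restricted to the most relevant clip.
    best = max(range(len(clip_scores)), key=lambda i: clip_scores[i])
    start = best * clip_len
    end = min(start + clip_len, num_frames)
    slow = uniform_indices(start, end, n_slow)
    # Merge and deduplicate; the MLLM then sees this mixed-density frame set.
    return sorted(set(fast + slow))
```

Under this reading, the per-question "think" step decides *where* to spend the dense half of the budget, so the frame count (and hence inference cost) stays fixed while coverage adapts to the query.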