Test-Time Temporal Sampling for Efficient MLLM Video Understanding

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the quadratic computational complexity and inefficiency of self-attention in multimodal large language models (MLLMs) when processing long videos, this paper proposes T3S—a training-free, plug-and-play test-time inference framework. T3S turns the spatiotemporal redundancy of video into a computational advantage: it dynamically subsamples the input video into multiple short, complementary clips, which are then encoded and fused in parallel within a single forward pass. The method requires no architectural modification or model fine-tuning. Consequently, computational complexity is reduced from O(L²) to O(∑αᵢ²L²), where αᵢ denotes the length ratio of the i-th clip and ∑αᵢ² < 1. Evaluated on multiple long-video understanding benchmarks, T3S achieves up to a 3.1% average accuracy gain, reduces first-token latency by 2.04×, and maintains full compatibility with diverse pre-trained MLLMs at negligible integration cost.
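The complexity claim above can be checked with a few lines of arithmetic (an illustrative sketch, not code from the paper): splitting an L-token sequence into clips of length αᵢ·L shrinks the quadratic self-attention cost by the factor ∑αᵢ².

```python
def attention_cost_ratio(alphas):
    """Ratio of clip-wise to full-sequence self-attention cost.

    The full sequence costs O(L^2); clip i of length alpha_i * L costs
    O(alpha_i^2 * L^2), so the combined ratio is sum(alpha_i^2).
    """
    return sum(a * a for a in alphas)

# Four quarter-length clips together can still cover every frame once,
# yet cost only 25% of full-sequence attention:
print(attention_cost_ratio([0.25, 0.25, 0.25, 0.25]))  # -> 0.25

# Two half-length clips: attention is 2x cheaper, the same order as the
# reported ~2.04x first-token latency reduction.
print(attention_cost_ratio([0.5, 0.5]))  # -> 0.5
```

Note that the ratio depends only on the clip-length fractions αᵢ, not on L itself, which is why the saving grows in absolute terms as videos get longer.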

📝 Abstract
Processing long videos with multimodal large language models (MLLMs) poses a significant computational challenge, as the model's self-attention mechanism scales quadratically with the number of video tokens, resulting in high computational demand and slow inference speed. Current solutions, such as rule-based sub-sampling, learned frame selectors, or memory-based summarization, often introduce their own trade-offs: they compromise accuracy, necessitate additional training, or decrease inference speed. In this paper, we propose Test-Time Temporal Sampling (T3S), a training-free, plug-and-play inference wrapper that enables MLLMs to process long videos both efficiently and effectively. T3S exploits spatiotemporal redundancy by generating multiple short and diverse subsequences of video tokens at inference time, packing them within a single forward pass, and aggregating their predictions. This multi-subsequence formulation broadens visual coverage while reducing the computational cost of self-attention from $O(L^2)$ to $O(\sum_{i=1}^m \alpha_i^2 L^2)$, where $\sum_{i=1}^m \alpha_i^2 < 1$. Extensive experiments on long video understanding benchmarks demonstrate that T3S improves accuracy by up to 3.1% and reduces first-token latency by $2.04\times$, all with minimal integration effort. Our approach operates entirely at inference time, requires no model modifications or fine-tuning, and is compatible with a wide range of pretrained MLLMs. T3S turns video redundancy into a computational advantage, offering a scalable solution for long-video understanding. The code is available at https://github.com/kaibinwang3/T3S.
Problem

Research questions and friction points this paper is trying to address.

Addresses computational inefficiency in long video processing with MLLMs
Reduces quadratic self-attention cost while maintaining video understanding accuracy
Eliminates need for model retraining through test-time temporal sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-time temporal sampling for efficient video understanding
Generates diverse video subsequences within single forward pass
Reduces self-attention cost while improving accuracy
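The bullets above can be sketched as a minimal wrapper (a hypothetical illustration, not the paper's implementation: random frame sampling and majority voting stand in for T3S's actual subsequence generation and fusion within one forward pass):

```python
import random
from collections import Counter

def sample_subsequences(num_frames, m, alpha):
    """Draw m sorted random frame-index subsets, each alpha * num_frames long.

    A stand-in for the subsequence generator: each subset is a short,
    temporally ordered clip; together they broaden visual coverage.
    """
    k = max(1, int(alpha * num_frames))
    return [sorted(random.sample(range(num_frames), k)) for _ in range(m)]

def aggregate(predictions):
    """Fuse per-clip answers by majority vote (one plausible fusion rule)."""
    return Counter(predictions).most_common(1)[0][0]

# Example: 3 quarter-length clips from a 64-frame video.
clips = sample_subsequences(num_frames=64, m=3, alpha=0.25)
# Each MLLM call on a clip would yield one answer; fuse them:
answer = aggregate(["A", "B", "A"])
print(answer)  # -> A
```

In the actual method the clips are packed into a single forward pass rather than run separately, which is where the latency gain comes from; this sketch only shows the sampling and aggregation logic.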