🤖 AI Summary
Video language models (VideoLMs) still struggle with temporal understanding, particularly event ordering, duration estimation, and cross-frame relational reasoning. We find that temporal information is not encoded explicitly via positional embeddings but instead emerges implicitly from causal attention during inter-frame interactions, with cross-frame attention progressively synthesizing temporal cues along the causal path. Building on this insight, we propose two efficiency-oriented mechanisms: (1) a causal-structure-aware temporal exit that truncates redundant visual tokens early, and (2) a staged cross-modal attention strategy that decouples local motion modeling from global temporal reasoning. Attribution analysis and ablation studies validate both mechanisms, which improve temporal understanding accuracy and inference efficiency on two benchmarks, offering an interpretable basis for efficient video foundation model design.
📝 Abstract
Video language models (VideoLMs) have made significant progress in multimodal understanding. However, temporal understanding, which involves identifying event order, duration, and relationships across time, remains a core challenge. Prior work emphasizes positional encodings (PEs) as a key mechanism for encoding temporal structure. Surprisingly, we find that removing or modifying PEs in video inputs causes minimal degradation in temporal understanding performance. In contrast, reversing the frame sequence while preserving the original PEs causes a substantial drop. To explain this behavior, we conduct extensive analysis experiments to trace how temporal information is integrated within the model. We uncover a causal information pathway: temporal cues are progressively synthesized through inter-frame attention, aggregated in the final frame, and subsequently integrated into the query tokens. This pathway shows that temporal reasoning emerges from interactions among visual tokens under the constraints of causal attention, which implicitly encodes temporal structure. Based on these insights, we propose two efficiency-oriented strategies: staged cross-modal attention and a temporal exit mechanism for early token truncation. Experiments on two benchmarks validate the effectiveness of both approaches. To the best of our knowledge, this is the first work to systematically investigate video temporal understanding in VideoLMs, offering insights for future model improvement.
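The abstract's central claim, that causal masking rather than positional encodings makes the computation order-sensitive, can be illustrated with a toy single-head self-attention in NumPy. This is a minimal sketch on random "frame" embeddings, not the paper's model: without PEs, bidirectional attention is permutation-equivariant, so a pooled representation is unchanged by reversing the frame order, while the causal mask alone breaks that symmetry.

```python
import numpy as np

def attention(x, causal=True):
    """Single-head self-attention over frame embeddings x of shape (T, d).
    Deliberately uses no positional encodings: order sensitivity, if any,
    must come from the causal mask alone."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                     # query-key similarities
    if causal:
        mask = np.tril(np.ones((T, T), dtype=bool))   # each frame sees only its past
        scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # row-wise softmax
    return w @ x                                      # (T, d) contextualized frames

rng = np.random.default_rng(0)
frames = rng.normal(size=(6, 8))                      # 6 toy "frames", dim 8

# Bidirectional attention without PEs is permutation-equivariant:
# mean-pooling forward vs. reversed frames gives the same vector.
bi_fwd = attention(frames, causal=False).mean(axis=0)
bi_rev = attention(frames[::-1], causal=False).mean(axis=0)
print(np.allclose(bi_fwd, bi_rev))                    # True

# The causal mask breaks that symmetry: reversing the frames changes
# the pooled representation even with no positional encodings at all.
ca_fwd = attention(frames, causal=True).mean(axis=0)
ca_rev = attention(frames[::-1], causal=True).mean(axis=0)
print(np.allclose(ca_fwd, ca_rev))                    # False
```

This mirrors the abstract's ablation at a conceptual level only: the real experiments remove or modify PEs and reverse frames in a full VideoLM, whereas the sketch isolates the masking mechanism itself.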