🤖 AI Summary
Video-language models (Video-LLMs) suffer from logical inconsistency when answering temporally rephrased questions, which severely undermines their reliability and practical utility. This work identifies, through an interpretability lens, a root cause: cross-modal attention heads struggle to distinguish video tokens at different timestamps. To address this, the authors propose Temporally Conditioned Attention Sharpening (TCAS), a training-time intervention that enhances the temporal discriminability of cross-modal attention without modifying the model architecture. TCAS introduces a temporal-aware attention divergence objective that explicitly encourages attention heads to attend differentially across sequential video frames, conditioned on the linguistic query. Experiments demonstrate that TCAS significantly improves logical consistency on temporal rephrasing tasks and also yields gains on general video temporal grounding benchmarks. These results underscore the critical role of temporal consistency in robust video-language understanding.
📝 Abstract
Large language models (LLMs) often generate self-contradictory outputs, which severely impacts their reliability and hinders their adoption in practical applications. In video-language models (Video-LLMs), this phenomenon has recently drawn the attention of researchers: these models fail to provide logically consistent responses to rephrased questions based on their grounding outputs. However, the underlying causes of this phenomenon remain underexplored. In this work, we adopt an interpretability-driven approach to analyze, statistically summarize, and intervene on the potential factors behind the phenomenon. We find that a primary source of the inconsistency lies in the inability of cross-modal attention heads to effectively distinguish video tokens across different timestamps. To address this, we propose an attention enhancement method called Temporally Conditioned Attention Sharpening (TCAS), which constructs an enhancement objective based on attention distinctions to strengthen the model's temporal resolution capability, thereby improving the logical consistency of its temporal understanding. Experimental results demonstrate that our method significantly enhances the temporal logic consistency of Video-LLMs. Further interpretability analyses reveal that our method indeed improves the temporal discriminability of attention heads, validating our conclusions. Additionally, our method achieves performance improvements on general video temporal grounding tasks, highlighting that temporal logic consistency is a bottleneck in temporal understanding. By enhancing consistency, our method drives significant progress in video temporal understanding.
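The abstract does not specify the exact form of the attention-distinction objective. As a rough illustration of the stated idea only (heads whose attention is near-uniform across timestamps lack temporal discriminability, so such heads should be penalized), here is a minimal entropy-based sketch. All names, shapes, and the entropy formulation are hypothetical, not the paper's actual TCAS loss:

```python
import numpy as np

def temporal_sharpening_loss(attn, eps=1e-8):
    """Hypothetical temporal attention-divergence penalty (illustrative only).

    attn: array of shape (num_heads, num_queries, num_frames) holding
    cross-modal attention weights from text query tokens over temporally
    ordered video-frame tokens (each query row sums to 1).

    Returns a scalar that is LOW when a head concentrates attention on
    specific frames (temporally discriminative) and HIGH when all frames
    receive near-identical attention (temporally indistinct).
    """
    # Average attention mass each frame receives, per head: (num_heads, num_frames)
    frame_profile = attn.mean(axis=1)
    # Renormalize to a distribution over frames for each head
    p = frame_profile / (frame_profile.sum(axis=-1, keepdims=True) + eps)
    # Entropy over frames: a uniform (timestamp-agnostic) profile maximizes
    # entropy, so minimizing this term sharpens temporal preferences.
    entropy = -(p * np.log(p + eps)).sum(axis=-1)
    return float(entropy.mean())
```

In a training setup, a term like this would be added to the standard objective with a small weight, leaving the model architecture unchanged, which matches the abstract's description of TCAS as an objective-level (not architectural) intervention.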