Harnessing Synthetic Preference Data for Enhancing Temporal Understanding of Video-LLMs

📅 2025-10-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Video-LLMs exhibit limited capability on fine-grained temporal understanding tasks, e.g., temporal logical reasoning, primarily due to insufficient visual complexity and imprecise temporal annotations in existing fine-tuning datasets, which cause overreliance on linguistic priors rather than faithful modeling of video dynamics. To address this, we propose TimeWarp: the first synthetic preference data generation framework explicitly designed for temporal understanding. It leverages controllable video editing transformations and temporal perturbations to synthesize large-scale, high-fidelity preference pairs, enabling contrastive learning over temporal logic and frame-level spatiotemporal alignment. Our method substantially enhances model sensitivity to temporal causality, ordering fidelity, and dynamic evolution. Evaluated across seven authoritative temporal reasoning benchmarks, TimeWarp delivers consistent and significant absolute performance gains, demonstrating both the efficacy and generalizability of synthetic preference data for advancing video temporal understanding.
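The summary does not spell out TimeWarp's concrete editing transformations, but the core idea of perturbing temporal order to obtain rejected samples can be sketched. The following is a minimal illustration, assuming frame-order perturbations (reversal, shuffling, and segment swapping) as hypothetical stand-ins for the paper's transformations, and a generic prompt/chosen/rejected preference-pair layout:

```python
import random

def temporal_perturbations(frames, seed=0):
    """Illustrative temporal edits that break the true event order.

    `frames` is any ordered sequence (e.g., frame indices or embeddings).
    Returns perturbed copies to serve as rejected samples.
    """
    rng = random.Random(seed)
    reversed_clip = list(reversed(frames))                   # invert temporal order
    shuffled_clip = list(frames)
    rng.shuffle(shuffled_clip)                               # destroy ordering entirely
    mid = len(frames) // 2
    swapped_clip = list(frames[mid:]) + list(frames[:mid])   # swap two event segments
    return [reversed_clip, shuffled_clip, swapped_clip]

def build_preference_pairs(frames, prompt):
    """Pair the faithful clip (chosen) with each perturbed clip (rejected)."""
    chosen = list(frames)
    return [
        {"prompt": prompt, "chosen": chosen, "rejected": rejected}
        for rejected in temporal_perturbations(frames)
    ]

# One 8-frame clip yields three preference pairs sharing the same chosen clip.
pairs = build_preference_pairs(list(range(8)), "Describe the order of events.")
```

A preference-optimization objective (e.g., DPO-style training) would then push the model to score the faithful temporal ordering above each perturbed one; the function and field names here are assumptions for illustration, not the paper's API.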

📝 Abstract
While Video Large Language Models (Video-LLMs) have demonstrated remarkable performance across general video understanding benchmarks, particularly in video captioning and descriptive tasks, they consistently underperform on tasks that require fine-grained temporal understanding. This limitation arises due to the lack of visual complexity and temporal nuance in current fine-tuning datasets, leading these models to rely heavily on language-based reasoning rather than truly understanding video dynamics. In this work, we propose TimeWarp, a systematic method to create a targeted synthetic temporal dataset to fine-tune the model's responses to encourage it to focus on the given input video. We introduce a large-scale preference dataset, created using TimeWarp, that captures intricate temporal dynamics often overlooked, grounding the model's responses to visual and temporal information. We demonstrate that when our method is applied to existing models, it significantly improves performance on temporal understanding benchmarks, highlighting the effectiveness of our proposed datasets in advancing temporal understanding in Video-LLMs, resulting in an absolute improvement in performance across seven benchmarks. Code is available at https://github.com/sameepv21/timewarp.
Problem

Research questions and friction points this paper is trying to address.

Video-LLMs lack fine-grained temporal understanding capabilities
Models rely on language reasoning over video dynamics
Current datasets lack visual complexity and temporal nuance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates synthetic preference data for temporal understanding
Fine-tunes Video-LLMs to focus on visual-temporal information
Improves model performance on temporal benchmarks significantly