TEMPO: Temporal Preference Optimization of Video LLMs via Difficulty Scheduling and Pre-SFT Alignment

📅 2025-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current video large language models (Video LLMs) exhibit limited temporal reasoning capability due to weak temporal alignment in training data and reliance on next-token prediction. To address this, we propose a novel training framework grounded in Direct Preference Optimization (DPO), featuring two key innovations: (i) a “difficulty-scheduled” curriculum learning strategy and (ii) “pre-instruction fine-tuning alignment”, which injects fine-grained temporal understanding prior to supervised fine-tuning (SFT). We further design an automated pipeline for temporal preference data generation, incorporating video-specific perturbations and temporal-rich sample selection. With only a small amount of self-generated DPO data, our method achieves significant improvements in temporal reasoning across multiple benchmarks. Empirical results confirm both the cross-architecture transferability of DPO data and the critical role of difficulty scheduling in optimization.

📝 Abstract
Video Large Language Models (Video LLMs) have achieved significant success by leveraging a two-stage paradigm: pretraining on large-scale video-text data for vision-language alignment, followed by supervised fine-tuning (SFT) for task-specific capabilities. However, existing approaches struggle with temporal reasoning due to weak temporal correspondence in the data and reliance on the next-token prediction paradigm during training. To address these limitations, we propose TEMPO (TEMporal Preference Optimization), a systematic framework that enhances Video LLMs' temporal reasoning capabilities through Direct Preference Optimization (DPO). To facilitate this, we introduce an automated preference data generation pipeline that systematically constructs preference pairs by selecting videos rich in temporal information, designing video-specific perturbation strategies, and finally evaluating model responses on clean and perturbed video inputs. Our temporal alignment features two key innovations: curriculum learning, which progressively increases perturbation difficulty to improve model robustness and adaptability; and “Pre-SFT Alignment”, applying preference optimization before instruction tuning to prioritize fine-grained temporal comprehension. Extensive experiments demonstrate that our approach consistently improves Video LLM performance across multiple benchmarks with a relatively small set of self-generated DPO data. We further analyze the transferability of DPO data across architectures and the role of difficulty scheduling in optimization. Our findings highlight TEMPO as a scalable and efficient complement to SFT-based methods, paving the way for developing reliable Video LLMs.
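The alignment objective described above is standard DPO applied to temporal preference pairs (a preferred response on the clean video vs. a dispreferred response induced by a perturbed video). A minimal sketch of the per-pair DPO loss follows; the function and argument names are illustrative assumptions, not from the paper.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair (hedged sketch).

    logp_*     : summed log-probability of the response under the policy
    ref_logp_* : the same quantity under the frozen reference model
    beta       : strength of the implicit KL constraint to the reference
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin: minimized as the policy widens
    # the preference gap beyond what the reference model already has.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With equal log-probabilities on both sides the margin is zero and the loss is log 2; it shrinks as the policy assigns relatively more probability to the chosen (clean-video) response.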
Problem

Research questions and friction points this paper is trying to address.

Enhancing Video LLMs' temporal reasoning capabilities
Addressing weak temporal correspondence in training data
Improving model robustness via curriculum difficulty scheduling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Direct Preference Optimization (DPO) for temporal reasoning
Automated preference data generation with temporal-rich videos
Curriculum learning and Pre-SFT Alignment for robustness
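The difficulty-scheduled curriculum can be sketched as ordering self-generated preference pairs by perturbation difficulty and training on them in stages, easiest first. The scoring field and stage split below are assumptions for illustration; the paper's exact schedule may differ.

```python
def curriculum_stages(pairs, num_stages=3):
    """Split preference pairs into training stages by difficulty (sketch).

    pairs: list of dicts, each with a numeric 'difficulty' field, e.g.
           derived from the strength of the video-specific perturbation.
    Returns num_stages lists; earlier stages contain the easier pairs.
    """
    ordered = sorted(pairs, key=lambda p: p["difficulty"])
    stages = [[] for _ in range(num_stages)]
    for i, pair in enumerate(ordered):
        # Assign pairs to stages in order, so stage 0 holds the easiest
        # perturbations and the final stage holds the hardest ones.
        stages[min(i * num_stages // len(ordered), num_stages - 1)].append(pair)
    return stages
```

Training then runs DPO over stage 0 first and moves to harder stages, which is the progressive-difficulty behavior the Innovation bullets describe.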
👥 Authors

Shicheng Li
National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University

Lei Li
The University of Hong Kong

Kun Ouyang
National University of Singapore
Human Mobility, Machine Learning

Shuhuai Ren
Peking University
Deep Learning, Natural Language Processing

Yuanxin Liu
Peking University
Natural Language Processing

Yuanxing Zhang
Kuaishou Technology
Recommender System, Large Language Model, Video Understanding

Fuzheng Zhang
Kuaishou Technology

Lingpeng Kong
Google DeepMind, The University of Hong Kong
Natural Language Processing, Machine Learning

Qi Liu
The University of Hong Kong

Xu Sun
National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University