🤖 AI Summary
Current video large language models (Video LLMs) exhibit limited temporal reasoning capability due to weak temporal alignment in training data and reliance on next-token prediction. To address this, we propose a novel training framework grounded in Direct Preference Optimization (DPO), featuring two key innovations: (i) a “difficulty-scheduled” curriculum learning strategy and (ii) “pre-instruction fine-tuning alignment”, which injects fine-grained temporal understanding prior to supervised fine-tuning (SFT). We further design an automated pipeline for temporal preference data generation, incorporating video-specific perturbations and temporal-rich sample selection. With only a small amount of self-generated DPO data, our method achieves significant improvements in temporal reasoning across multiple benchmarks. Empirical results confirm both the cross-architecture transferability of DPO data and the critical role of difficulty scheduling in optimization.
📝 Abstract
Video Large Language Models (Video LLMs) have achieved significant success by leveraging a two-stage paradigm: pretraining on large-scale video-text data for vision-language alignment, followed by supervised fine-tuning (SFT) for task-specific capabilities. However, existing approaches struggle with temporal reasoning due to weak temporal correspondence in the data and reliance on the next-token prediction paradigm during training. To address these limitations, we propose TEMPO (TEMporal Preference Optimization), a systematic framework that enhances Video LLMs' temporal reasoning capabilities through Direct Preference Optimization (DPO). To facilitate this, we introduce an automated preference data generation pipeline that systematically constructs preference pairs by selecting videos that are rich in temporal information, designing video-specific perturbation strategies, and finally evaluating model responses on clean and perturbed video inputs. Our temporal alignment features two key innovations: curriculum learning, which progressively increases perturbation difficulty to improve model robustness and adaptability; and “Pre-SFT Alignment”, applying preference optimization before instruction tuning to prioritize fine-grained temporal comprehension. Extensive experiments demonstrate that our approach consistently improves Video LLM performance across multiple benchmarks with a relatively small set of self-generated DPO data. We further analyze the transferability of DPO data across architectures and the role of difficulty scheduling in optimization. Our findings highlight TEMPO as a scalable and efficient complement to SFT-based methods, paving the way for developing reliable Video LLMs.
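The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `model` interface, the `perturb` callable, and the specific perturbation (frame shuffling) are all assumptions; only the DPO objective itself is the standard formulation (Rafailov et al., 2023), where the response on the clean video serves as the chosen sample and the response on the temporally perturbed video as the rejected one.

```python
import math
import random


def build_preference_pair(model, frames, question, perturb):
    """Hypothetical sketch: form a (chosen, rejected) pair by querying the
    model on clean vs. temporally perturbed versions of the same video."""
    chosen = model.answer(frames, question)            # response on clean input
    rejected = model.answer(perturb(frames), question)  # response on perturbed input
    return {"question": question, "chosen": chosen, "rejected": rejected}


def shuffle_frames(frames, seed=0):
    """One assumed video-specific perturbation: destroy temporal order."""
    frames = list(frames)
    random.Random(seed).shuffle(frames)
    return frames


def dpo_loss(pi_logp_chosen, pi_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard per-pair DPO loss:
    -log sigmoid(beta * [(log pi_c - log ref_c) - (log pi_r - log ref_r)]).
    Inputs are sequence log-probabilities under the policy and a frozen
    reference model; beta controls deviation from the reference."""
    margin = (pi_logp_chosen - ref_logp_chosen) - (pi_logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

A curriculum over perturbation difficulty would then amount to scheduling which `perturb` is applied as training progresses (e.g., from coarse frame shuffling toward subtler temporal edits); when the policy and reference agree (zero margin), the loss is log 2, and it decreases as the policy prefers the clean-video response.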