🤖 AI Summary
Existing fine-grained video captioning methods struggle to model subtle dynamics and rich visual details. This paper proposes SynPO, an end-to-end framework that synergistically integrates descriptive modeling with preference optimization. First, we design a reference-free, negative-sample-agnostic collaborative preference optimization mechanism that jointly enhances language generation capability and temporal consistency. Second, we introduce a low-cost, high-quality dynamic preference-pair generation pipeline, leveraging both the intrinsic perceptual capabilities of vision-language models (VLMs) and semantic guidance from large language models (LLMs). Third, we achieve efficient training via joint vision-language fine-tuning and gradient-based optimization. SynPO significantly outperforms DPO-based methods on the VDC, VDD, and VATEX benchmarks, improves training efficiency by 20%, and demonstrates strong generalizability and robustness across diverse downstream tasks.
📄 Abstract
Fine-grained video captioning aims to generate detailed, temporally coherent descriptions of video content. However, existing methods struggle to capture subtle video dynamics and rich detailed information. In this paper, we leverage preference learning to enhance the performance of vision-language models in fine-grained video captioning, while mitigating several limitations inherent to direct preference optimization (DPO). First, we propose a pipeline for constructing preference pairs that leverages the intrinsic properties of VLMs along with partial assistance from large language models, achieving an optimal balance between cost and data quality. Second, we propose Synergistic Preference Optimization (SynPO), a novel optimization method offering significant advantages over DPO and its variants. SynPO prevents negative preferences from dominating the optimization, explicitly preserves the model's language capability to keep the optimization objective from drifting, and improves training efficiency by eliminating the need for a reference model. We extensively evaluate SynPO not only on video captioning benchmarks (e.g., VDC, VDD, VATEX) but also across well-established NLP tasks, including general language understanding and preference evaluation, using diverse pretrained models. Results demonstrate that SynPO consistently outperforms DPO variants while achieving a 20% improvement in training efficiency. Code is available at https://github.com/longmalongma/SynPO.
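To make the "reference-free" claim concrete, the sketch below contrasts the standard DPO objective, which requires per-example log-probabilities from a frozen reference model, with a reference-free preference loss computed from the policy alone. This is a minimal illustration of the general idea, not SynPO's actual objective, which is defined in the paper; all function names and the choice of `beta` here are assumptions for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO: the preference margin is the policy's log-prob gap
    on (preferred, dispreferred) responses, corrected by a frozen
    reference model's log-probs on the same pair."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(sigmoid(beta * margin))

def reference_free_loss(logp_w, logp_l, beta=0.1):
    """Reference-free variant: the margin uses the policy alone, so no
    frozen reference model must be loaded or queried during training,
    which is the source of the efficiency gain claimed above."""
    return -math.log(sigmoid(beta * (logp_w - logp_l)))

# Both losses shrink as the policy assigns more probability mass to the
# preferred response relative to the dispreferred one.
print(reference_free_loss(-10.0, -12.0))  # preferred ranked higher: lower loss
print(reference_free_loss(-12.0, -10.0))  # preferred ranked lower: higher loss
```

The reference-free form needs only one forward pass per response instead of two, which is consistent with the training-efficiency improvement reported for SynPO.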