SynPO: Synergizing Descriptiveness and Preference Optimization for Video Detailed Captioning

📅 2025-06-01
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing fine-grained video captioning methods struggle to model subtle dynamics and rich visual details. This paper proposes SynPO, an end-to-end framework that synergistically integrates descriptive modeling with preference optimization. First, the authors design a reference-free, negative-sample-agnostic collaborative preference optimization mechanism that jointly enhances language generation capability and temporal consistency. Second, they introduce a low-cost, high-quality dynamic preference-pair generation pipeline that leverages both the intrinsic perceptual capabilities of vision-language models (VLMs) and semantic guidance from large language models (LLMs). Third, they achieve efficient training via joint vision–language fine-tuning and gradient-based optimization. SynPO significantly outperforms DPO-based methods on the VDC, VDD, and VATEX benchmarks, improves training efficiency by 20%, and demonstrates strong generalizability and robustness across diverse downstream tasks.

๐Ÿ“ Abstract
Fine-grained video captioning aims to generate detailed, temporally coherent descriptions of video content. However, existing methods struggle to capture subtle video dynamics and rich detailed information. In this paper, we leverage preference learning to enhance the performance of vision-language models in fine-grained video captioning, while mitigating several limitations inherent to direct preference optimization (DPO). First, we propose a pipeline for constructing preference pairs that leverages the intrinsic properties of VLMs along with partial assistance from large language models, achieving an optimal balance between cost and data quality. Second, we propose Synergistic Preference Optimization (SynPO), a novel optimization method offering significant advantages over DPO and its variants. SynPO prevents negative preferences from dominating the optimization, explicitly preserves the model's language capability to avoid deviation of the optimization objective, and improves training efficiency by eliminating the need for the reference model. We extensively evaluate SynPO not only on video captioning benchmarks (e.g., VDC, VDD, VATEX) but also across well-established NLP tasks, including general language understanding and preference evaluation, using diverse pretrained models. Results demonstrate that SynPO consistently outperforms DPO variants while achieving 20% improvement in training efficiency. Code is available at https://github.com/longmalongma/SynPO
Problem

Research questions and friction points this paper is trying to address.

Enhancing fine-grained video captioning with preference learning
Balancing cost and quality in preference pair construction
Improving training efficiency and language capability preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

SynPO optimizes video captioning with preference learning
Constructs cost-effective preference pairs using VLMs and LLMs
Enhances training efficiency by eliminating reference model
👥 Authors
Jisheng Dang (Sun Yat-sen University; National University of Singapore)
Yizhou Zhang (Lanzhou University)
Hao Ye (Lanzhou University)
Teng Wang (The University of Hong Kong)
Siming Chen (Lanzhou University)
Huicheng Zheng (Sun Yat-sen University)
Yulan Guo (Professor, Sun Yat-sen University)
Jianhuang Lai (Sun Yat-sen University)
Bin Hu (Beijing Institute of Technology)