AVC-DPO: Aligned Video Captioning via Direct Preference Optimization

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of aligning video multimodal large language models (video MLLMs) with human preferences over the spatiotemporal focus of their descriptions, this paper proposes a direct preference optimization (DPO)-based post-training framework that requires no human annotations. The method constructs preference signals from response pairs generated by the same base model under contrasting prompts: enhanced prompts that explicitly encode temporal dynamics or spatial details versus a plain baseline prompt, enabling fine-grained alignment with human preferences. Its core contributions are threefold: (1) it is the first to apply DPO to video captioning, circumventing the reliance on scarce, high-quality human-annotated data inherent in supervised fine-tuning; (2) it enables decoupled optimization of temporal and spatial focus; and (3) it achieves state-of-the-art performance in the LOVE@CVPR'25 Workshop Track 1A: Video Detailed Captioning Challenge, ranking first on the VDC benchmark with a significantly higher VDCScore than competing approaches.
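The pair construction described above can be sketched in a few lines. Here `caption_with` and the prompt strings are hypothetical stand-ins for the base video MLLM and the paper's enhanced prompts, and the sketch assumes the enhanced-prompt response is treated as the preferred ("chosen") caption:

```python
# Self-generated preference pairs from contrasting prompts (sketch).
# The prompt texts below are illustrative, not the paper's actual prompts.
BASE_PROMPT = "Describe this video."
TEMPORAL_PROMPT = ("Describe this video, focusing on the order and "
                   "timing of events.")

def build_pair(video, caption_with):
    """caption_with(video, prompt) is a hypothetical wrapper around
    the frozen base video MLLM's caption generation."""
    chosen = caption_with(video, TEMPORAL_PROMPT)   # preference-enhanced
    rejected = caption_with(video, BASE_PROMPT)     # plain baseline
    # DPO then trains on (BASE_PROMPT, chosen, rejected), so the aligned
    # model produces the enhanced style even from the plain prompt.
    return {"prompt": BASE_PROMPT, "chosen": chosen, "rejected": rejected}
```

Because both responses come from the same base model, no human annotator or external judge is needed to rank the pair.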

📝 Abstract
Although video multimodal large language models (video MLLMs) have achieved substantial progress in video captioning tasks, it remains challenging to adjust the focal emphasis of video captions according to human preferences. To address this limitation, we propose Aligned Video Captioning via Direct Preference Optimization (AVC-DPO), a post-training framework designed to enhance captioning capabilities in video MLLMs through preference alignment. Our approach designs enhanced prompts that specifically target temporal dynamics and spatial information (two key factors that humans care about when watching a video), thereby incorporating human-centric preferences. AVC-DPO leverages the same foundation model's caption generation responses under varied prompt conditions to conduct preference-aware training and caption alignment. Using this framework, we have achieved exceptional performance in the LOVE@CVPR'25 Workshop Track 1A: Video Detailed Captioning Challenge, achieving first place on the Video Detailed Captioning (VDC) benchmark according to the VDCSCORE evaluation metric.
Problem

Research questions and friction points this paper is trying to address.

Align video captions with human preferences
Enhance temporal and spatial information in captions
Optimize captioning using Direct Preference Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct Preference Optimization for alignment
Enhanced prompts targeting temporal dynamics
Preference-aware training with varied prompts
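The DPO objective underlying this training can be sketched in plain Python. The scalar log-probabilities here stand in for summed token log-likelihoods of each caption under the policy and a frozen reference model, and `beta=0.1` is an illustrative default, not necessarily the paper's setting:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss on summed token log-probabilities of the
    chosen/rejected captions under the policy and frozen reference."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # loss = -log sigmoid(margin), computed in a numerically stable form
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# When the policy raises the chosen caption's likelihood relative to
# the reference while leaving the rejected one unchanged, the loss
# drops below log(2), the zero-margin value.
loss = dpo_loss(-10.0, -12.0, -11.0, -12.0)
```

Minimizing this loss pushes the model toward the enhanced-prompt caption style without any explicit reward model, which is what lets the framework skip human annotation entirely.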
Jiyang Tang
College of Artificial Intelligence, Nankai University
Hengyi Li
School of Computer Science and Technology, Beijing Institute of Technology
Yifan Du
Renmin University of China
Vision Language Model, MLLM
Wayne Xin Zhao
Professor, Renmin University of China
Recommender System, Natural Language Processing, Large Language Model