🤖 AI Summary
To address the challenge of dynamically aligning video multimodal large language models (video MLLMs) with human preferences over the spatiotemporal focus of video descriptions, this paper proposes a direct preference optimization (DPO)-based post-training framework that requires no human annotations. The method constructs preference signals from response pairs generated by the same base model under contrasting prompts—specifically, enhanced prompts that explicitly encode temporal dynamics or spatial details—enabling fine-grained alignment with human preferences. Its core contributions are threefold: (1) it is the first to apply DPO to video captioning, circumventing the reliance on scarce high-quality human-annotated data inherent in supervised fine-tuning; (2) it enables decoupled optimization of temporal and spatial attention; and (3) it achieves state-of-the-art performance in the LOVE@CVPR'25 Workshop Track 1A: Video Detailed Captioning Challenge, ranking first on the VDC benchmark with a significantly higher VDCSCORE than competing approaches.
📝 Abstract
Although video multimodal large language models (video MLLMs) have achieved substantial progress in video captioning tasks, it remains challenging to adjust the focal emphasis of video captions according to human preferences. To address this limitation, we propose Aligned Video Captioning via Direct Preference Optimization (AVC-DPO), a post-training framework designed to enhance captioning capabilities in video MLLMs through preference alignment. Our approach designs enhanced prompts that specifically target temporal dynamics and spatial information, two key factors that humans care about when watching a video, thereby incorporating human-centric preferences. AVC-DPO leverages the same foundation model's caption generation responses under varied prompt conditions to conduct preference-aware training and caption alignment. Using this framework, we achieved exceptional performance in the LOVE@CVPR'25 Workshop Track 1A: Video Detailed Captioning Challenge, taking first place on the Video Detailed Captioning (VDC) benchmark according to the VDCSCORE evaluation metric.
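The preference-aware training the abstract describes follows the standard DPO recipe: a caption generated under the enhanced (temporal- or spatial-focused) prompt serves as the preferred response, and the base-prompt caption as the rejected one. The sketch below shows the standard DPO objective for a single such preference pair; the function name, signature, and `beta` value are illustrative assumptions, not the paper's actual implementation.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair (illustrative sketch).

    logp_*      -- policy log-probabilities of the preferred (chosen, e.g.
                   enhanced-prompt caption) and rejected (base-prompt) response
    ref_logp_*  -- the same log-probabilities under the frozen reference model
    beta        -- strength of the implicit KL constraint to the reference
    """
    # Reward margin: how much the policy shifts toward the preferred
    # caption relative to the reference model.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(margin): minimized by increasing the margin.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no shift relative to the reference, the loss starts at log(2);
# favoring the preferred caption drives it below that.
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))
```

In the framework described here, the "chosen" and "rejected" captions come from the same base model, differing only in the prompt, so no human preference labels are required.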