AVoCaDO: An Audiovisual Video Captioner Driven by Temporal Orchestration

📅 2025-10-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the dual challenges of semantic richness and insufficient temporal alignment between audiovisual events in audiovisual captioning. We propose a two-stage post-training framework: (1) supervised fine-tuning (SFT) on a high-quality temporally aligned audiovisual dataset, and (2) GRPO-based reinforcement learning guided by a custom reward function that jointly optimizes temporal coherence, caption accuracy, and length control. Our approach effectively mitigates generation collapse while significantly improving cross-modal temporal consistency and semantic fidelity of generated captions. Extensive experiments demonstrate state-of-the-art performance across four audiovisual captioning benchmarks—surpassing all existing open-source models. Notably, our method also achieves leading results on vision-only benchmarks (VDC and DREAM-1K), underscoring its strong cross-modal collaborative modeling capability and generalizability beyond audiovisual inputs.

Technology Category

Application Category

📝 Abstract
Audiovisual video captioning aims to generate semantically rich descriptions with temporal alignment between visual and auditory events, thereby benefiting both video understanding and generation. In this paper, we present AVoCaDO, a powerful audiovisual video captioner driven by the temporal orchestration between audio and visual modalities. We propose a two-stage post-training pipeline: (1) AVoCaDO SFT, which fine-tunes the model on a newly curated dataset of 107K high-quality, temporally-aligned audiovisual captions; and (2) AVoCaDO GRPO, which leverages tailored reward functions to further enhance temporal coherence and dialogue accuracy while regularizing caption length and reducing collapse. Experimental results demonstrate that AVoCaDO significantly outperforms existing open-source models across four audiovisual video captioning benchmarks, and also achieves competitive performance on the VDC and DREAM-1K benchmark under visual-only settings.
Problem

Research questions and friction points this paper is trying to address.

Generating temporally aligned audiovisual video captions
Enhancing temporal coherence and dialogue accuracy
Outperforming existing models on captioning benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage post-training pipeline for alignment
Fine-tuning with curated audiovisual caption dataset
Leveraging tailored reward functions for coherence
🔎 Similar Papers
X
Xinlong Chen
Kling Team, Kuaishou Technology
Y
Yue Ding
New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA)
Weihong Lin
Weihong Lin
Kuaishou
Vision Language ModelComputer VisionDocument Intelligence
Jingyun Hua
Jingyun Hua
Kuaishou
Natural Language ProcessingLarge Language Model
Linli Yao
Linli Yao
Peking University
multi-modal semantic understanding
Y
Yang Shi
Peking University
B
Bozhou Li
Kling Team, Kuaishou Technology
Yuanxing Zhang
Yuanxing Zhang
Kuaishou Technology
Recommender SystemLarge Language ModelVideo Understanding
Q
Qiang Liu
New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA)
Pengfei Wan
Pengfei Wan
Head of Kling Video Generation Models, Kuaishou Technology
Generative ModelsComputer VisionMultimodal AIComputer Graphics
L
Liang Wang
New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA)
Tieniu Tan
Tieniu Tan
Institute of Automation, Chinese Academy of Sciences