🤖 AI Summary
This work addresses the dual challenges of semantic richness and insufficient temporal alignment between audiovisual events in audiovisual captioning. We propose a two-stage post-training framework: (1) supervised fine-tuning (SFT) on a high-quality temporally aligned audiovisual dataset, and (2) GRPO-based reinforcement learning guided by a custom reward function that jointly optimizes temporal coherence, caption accuracy, and length control. Our approach effectively mitigates generation collapse while significantly improving cross-modal temporal consistency and semantic fidelity of generated captions. Extensive experiments demonstrate state-of-the-art performance across four audiovisual captioning benchmarks—surpassing all existing open-source models. Notably, our method also achieves leading results on vision-only benchmarks (VDC and DREAM-1K), underscoring its strong cross-modal collaborative modeling capability and generalizability beyond audiovisual inputs.
📝 Abstract
Audiovisual video captioning aims to generate semantically rich descriptions with temporal alignment between visual and auditory events, thereby benefiting both video understanding and generation. In this paper, we present AVoCaDO, a powerful audiovisual video captioner driven by the temporal orchestration between audio and visual modalities. We propose a two-stage post-training pipeline: (1) AVoCaDO SFT, which fine-tunes the model on a newly curated dataset of 107K high-quality, temporally-aligned audiovisual captions; and (2) AVoCaDO GRPO, which leverages tailored reward functions to further enhance temporal coherence and dialogue accuracy while regularizing caption length and reducing collapse. Experimental results demonstrate that AVoCaDO significantly outperforms existing open-source models across four audiovisual video captioning benchmarks, and also achieves competitive performance on the VDC and DREAM-1K benchmark under visual-only settings.