🤖 AI Summary
This work addresses the challenge of accurately identifying and temporally aligning multiple speakers in video conversations by proposing a dialogue-centric multimodal large language model. The authors construct DVD, the first large-scale open-source bilingual multi-speaker video dataset, and introduce speech processing evaluation metrics into the reinforcement learning reward function for the first time. They design a tripartite reward-based group relative policy optimization method to jointly optimize speaker diarization, speech content transcription, and temporal boundary alignment. With 8 billion parameters, the model matches the performance of Qwen3-Omni and significantly outperforms existing open-source approaches in speaker identification, speech recognition, and temporal localization tasks, while also achieving strong results across multiple general audio-visual understanding benchmarks.
📝 Abstract
Spoken dialogue is a primary source of information in videos; therefore, accurately identifying who spoke what and when is essential for deep video understanding. We introduce D-ORCA, a **d**ialogue-centric **o**mni-modal large language model optimized for **r**obust audio-visual **ca**ptioning. We further curate DVD, a large-scale, high-quality bilingual dataset comprising nearly 40,000 multi-party dialogue videos for training and 2,000 videos for evaluation in English and Mandarin, addressing a critical gap in the open-source ecosystem. To ensure fine-grained captioning accuracy, we adopt group relative policy optimization with three novel reward functions that assess speaker attribution accuracy, global speech content accuracy, and sentence-level temporal boundary alignment. These rewards are derived from evaluation metrics widely used in speech processing and, to our knowledge, are applied for the first time as reinforcement learning objectives for audio-visual captioning. Extensive experiments demonstrate that D-ORCA substantially outperforms existing open-source models in speaker identification, speech recognition, and temporal grounding. Notably, despite having only 8 billion parameters, D-ORCA achieves performance competitive with Qwen3-Omni across several general-purpose audio-visual understanding benchmarks. Demos are available at [https://d-orca-llm.github.io/](https://d-orca-llm.github.io/). Our code, data, and checkpoints will be available at [https://github.com/WeChatCV/D-ORCA/](https://github.com/WeChatCV/D-ORCA/).
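To make the tripartite reward concrete, here is a minimal sketch of how metric-based rewards of the kind the abstract describes (speaker attribution accuracy, content accuracy from a word-error-rate-style metric, and temporal-boundary overlap) could be combined into a single scalar. All function names, the sentence-level alignment assumption, and the equal weighting are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: the paper's exact reward definitions are not
# public here, so names, alignment, and weights below are assumptions.

def word_error_rate(ref: str, hyp: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))          # DP row: edits from empty reference
    for i in range(1, len(r) + 1):
        prev, d[0] = d[0], i             # prev holds d[i-1][j-1]
        for j in range(1, len(h) + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                         # deletion
                       d[j - 1] + 1,                     # insertion
                       prev + (r[i - 1] != h[j - 1]))    # substitution
            prev = cur
    return d[-1] / max(len(r), 1)

def temporal_iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two (start, end) intervals in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def caption_reward(ref_caps, hyp_caps, w=(1 / 3, 1 / 3, 1 / 3)) -> float:
    """Combine speaker accuracy, 1 - WER, and mean segment IoU.

    Each caption is (speaker_id, text, (start, end)); for simplicity the
    two lists are assumed already aligned sentence-by-sentence.
    """
    n = min(len(ref_caps), len(hyp_caps))
    if n == 0:
        return 0.0
    spk = sum(r[0] == h[0] for r, h in zip(ref_caps, hyp_caps)) / n
    content = 1.0 - word_error_rate(
        " ".join(r[1] for r in ref_caps),
        " ".join(h[1] for h in hyp_caps))
    iou = sum(temporal_iou(r[2], h[2])
              for r, h in zip(ref_caps, hyp_caps)) / n
    return w[0] * spk + w[1] * max(content, 0.0) + w[2] * iou
```

In a GRPO-style setup, such a reward would be computed per sampled caption and then normalized within each group of rollouts to form the relative advantage; the sketch above only covers the scalar reward itself.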