D-ORCA: Dialogue-Centric Optimization for Robust Audio-Visual Captioning

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of accurately identifying and temporally aligning multiple speakers in video conversations by proposing a dialogue-centric multimodal large language model. The authors construct DVD, the first large-scale open-source bilingual multi-speaker video dataset, and introduce speech processing evaluation metrics into the reinforcement learning reward function for the first time. They design a tripartite reward-based group relative policy optimization method to jointly optimize speaker diarization, speech content transcription, and temporal boundary alignment. With 8 billion parameters, the model matches the performance of Qwen3-Omni and significantly outperforms existing open-source approaches in speaker identification, speech recognition, and temporal localization tasks, while also achieving strong results across multiple general audio-visual understanding benchmarks.

📝 Abstract
Spoken dialogue is a primary source of information in videos; therefore, accurately identifying who spoke what and when is essential for deep video understanding. We introduce D-ORCA, a dialogue-centric omni-modal large language model optimized for robust audio-visual captioning. We further curate DVD, a large-scale, high-quality bilingual dataset comprising nearly 40,000 multi-party dialogue videos for training and 2,000 videos for evaluation in English and Mandarin, addressing a critical gap in the open-source ecosystem. To ensure fine-grained captioning accuracy, we adopt group relative policy optimization with three novel reward functions that assess speaker attribution accuracy, global speech content accuracy, and sentence-level temporal boundary alignment. These rewards are derived from evaluation metrics widely used in speech processing and, to our knowledge, are applied for the first time as reinforcement learning objectives for audio-visual captioning. Extensive experiments demonstrate that D-ORCA substantially outperforms existing open-source models in speaker identification, speech recognition, and temporal grounding. Notably, despite having only 8 billion parameters, D-ORCA achieves performance competitive with Qwen3-Omni across several general-purpose audio-visual understanding benchmarks. Demos are available at https://d-orca-llm.github.io/. Our code, data, and checkpoints will be available at https://github.com/WeChatCV/D-ORCA/.
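To make the tripartite reward concrete, here is a minimal, hypothetical sketch of how three speech-processing metrics (speaker-attribution accuracy, word error rate for content, and temporal IoU for boundary alignment) could be combined into a single scalar reward for GRPO-style optimization. The paper does not publish its exact formulas; the function names, caption format, and equal weighting below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical tripartite reward sketch for GRPO-style training.
# Each caption is assumed to be (speaker_id, text, (start_sec, end_sec));
# this format and the equal default weights are assumptions for illustration.

def word_error_rate(ref: str, hyp: str) -> float:
    """Standard WER: word-level edit distance / reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

def temporal_iou(pred: tuple, gold: tuple) -> float:
    """Intersection-over-union of two (start, end) intervals in seconds."""
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union > 0 else 0.0

def tripartite_reward(pred_caps, gold_caps, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine speaker, content, and temporal rewards into one scalar in [0, 1]."""
    n = max(len(gold_caps), 1)
    # Speaker attribution: fraction of aligned captions with the correct speaker.
    spk = sum(p[0] == g[0] for p, g in zip(pred_caps, gold_caps)) / n
    # Global content: 1 - WER over the concatenated transcripts, clipped to [0, 1].
    content = 1.0 - min(1.0, word_error_rate(
        " ".join(g[1] for g in gold_caps),
        " ".join(p[1] for p in pred_caps)))
    # Temporal alignment: mean IoU of sentence-level boundaries.
    temp = sum(temporal_iou(p[2], g[2]) for p, g in zip(pred_caps, gold_caps)) / n
    w = weights
    return (w[0] * spk + w[1] * content + w[2] * temp) / sum(w)
```

Under this sketch, a perfect prediction scores 1.0, and each error type (wrong speaker, transcription mistakes, shifted boundaries) degrades its own reward term independently, which is what lets GRPO credit the three sub-skills separately.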
Problem

Research questions and friction points this paper is trying to address.

audio-visual captioning
speaker attribution
multi-party dialogue
temporal grounding
speech recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

dialogue-centric modeling
audio-visual captioning
reinforcement learning with speech rewards
multi-party dialogue dataset
temporal grounding