MTAVG-Bench: A Comprehensive Benchmark for Evaluating Multi-Talker Dialogue-Centric Audio-Video Generation

πŸ“… 2026-01-31
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing evaluation benchmarks struggle to assess critical failure modes in multi-speaker conversational video generation, such as identity drift, unnatural turn-taking, and audio-visual asynchrony. This work proposes the first fine-grained audio-visual generation evaluation framework tailored to this scenario, introducing a benchmark of 1.8K videos and 2.4K structured question-answer pairs constructed via a semi-automatic pipeline. The framework evaluates models across four dimensions: audio-visual signal fidelity, temporal attribute consistency, social interaction, and cinematic expression. This design enables precise failure analysis and targeted model refinement. Experiments on twelve leading open- and closed-source models show that Gemini 3 Pro achieves the best overall performance, while several open-source models remain competitive in signal fidelity and temporal consistency.

πŸ“ Abstract
Recent advances in text-to-audio-video (T2AV) generation have enabled models to synthesize videos with synchronized audio containing multi-participant dialogues. However, existing evaluation benchmarks remain largely designed for human-recorded videos or single-speaker settings. As a result, errors that occur in generated multi-talker dialogue videos, such as identity drift, unnatural turn transitions, and audio-visual misalignment, cannot be effectively captured or analyzed. To address this issue, we introduce MTAVG-Bench, a benchmark for evaluating audio-visual multi-speaker dialogue generation. MTAVG-Bench is built via a semi-automatic pipeline in which 1.8K videos are generated by multiple popular models from carefully designed prompts, yielding 2.4K manually annotated QA pairs. The benchmark evaluates multi-speaker dialogue generation at four levels: audio-visual signal fidelity, temporal attribute consistency, social interaction, and cinematic expression. We benchmark 12 proprietary and open-source omni-models on MTAVG-Bench; Gemini 3 Pro achieves the strongest overall performance, while leading open-source models remain competitive in signal fidelity and consistency. Overall, MTAVG-Bench enables fine-grained failure analysis for rigorous model comparison and targeted refinement of video generation.
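The exact record format of the benchmark's QA pairs is not specified in the summary or abstract above, so the following is only a minimal sketch of how per-dimension accuracy over such structured QA pairs might be aggregated. Every name in it (the QAPair fields, the dimension labels, dimension_accuracy) is an illustrative assumption, not the paper's released schema.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class QAPair:
    # Hypothetical schema: field names are assumptions, not MTAVG-Bench's format.
    video_id: str    # one of the ~1.8K generated videos
    dimension: str   # e.g. "signal_fidelity", "temporal_consistency",
                     #      "social_interaction", "cinematic_expression"
    question: str
    answer: str      # manually annotated ground truth
    prediction: str  # evaluated model's output

def dimension_accuracy(pairs: list[QAPair]) -> dict[str, float]:
    """Aggregate exact-match accuracy per evaluation dimension."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for p in pairs:
        total[p.dimension] += 1
        if p.prediction.strip().lower() == p.answer.strip().lower():
            correct[p.dimension] += 1
    return {d: correct[d] / total[d] for d in total}
```

Per-dimension aggregation of this kind is what makes the fine-grained failure analysis described above possible: a model can score high overall while collapsing on, say, the social-interaction dimension alone.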
Problem

Research questions and friction points this paper is trying to address.

multi-talker dialogue
audio-visual generation
evaluation benchmark
identity drift
audio-visual misalignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-talker dialogue
audio-visual generation
evaluation benchmark
temporal consistency
social interaction modeling
πŸ”Ž Similar Papers
2024-05-22 Β· Annual Meeting of the Association for Computational Linguistics Β· Citations: 2
Yang-Hao Zhou
Beijing Institute of Technology
Haitian Li
Shanghai University
Rexar Lin
Beijing Institute of Technology
Heyan Huang
Beijing Institute of Technology
Jinxing Zhou
OpenNLP Lab
Changsen Yuan
Beijing University of Technology
Tian Lan
εŒ—δΊ¬η†ε·₯ε€§ε­¦
Large Language Model Β· Evaluation and Critique Ability Β· Text Generation Β· Multi-Modal
Ziqin Zhou
The University of Adelaide
Yudong Li
Tsinghua University
Jiajun Xu
Inkeverse Group Limited
Jingyun Liao
Inkeverse Group Limited
Yi-Ming Cheng
Tsinghua University
Xuefeng Chen
Inkeverse Group Limited
Xian-Ling Mao
Beijing Institute of Technology
Web Data Mining Β· Information Extraction Β· QA & Dialogue Β· Topic Modeling Β· Learning to Hash
Yousheng Feng
Inkeverse Group Limited