🤖 AI Summary
Existing video benchmarks rarely evaluate fine-grained audio-visual alignment in multimodal large language models (MLLMs), specifically the “who said what and when” capability in video understanding. Method: The paper introduces AV-SpeakerBench, the first speaker-centric video benchmark for reasoning about speaker identity, spoken content, and precise temporal localization. It comprises 3,212 multiple-choice questions and establishes an evaluation framework in which the speaker serves as the fundamental reasoning unit. Questions are semantically designed to jointly encode audio-visual dependencies, with expert annotations ensuring millisecond-level temporal accuracy and cross-modal consistency. Contribution/Results: Experiments show that Gemini 2.5 Pro achieves the highest performance, while the open-weight Qwen3-Omni-30B approaches Gemini 2.0 Flash, revealing that the bottleneck lies in audio-visual fusion rather than visual perception. This work provides the first systematic assessment of speaker-level audio-visual comprehension in MLLMs, advancing research on fine-grained multimodal alignment.
📝 Abstract
Multimodal large language models (MLLMs) are expected to jointly interpret vision, audio, and language, yet existing video benchmarks rarely assess fine-grained reasoning about human speech. Many tasks remain visually solvable or only coarsely evaluate speech, offering limited insight into whether models can align who speaks, what is said, and when it occurs. We introduce AV-SpeakerBench, a curated benchmark of 3,212 multiple-choice questions focused on speaker-centric audiovisual reasoning in real-world videos. It features: (1) a speaker-centered formulation that treats speakers, not scenes, as the core reasoning unit; (2) fusion-grounded question design embedding audiovisual dependencies into question semantics; and (3) expert-curated annotations ensuring temporal precision and cross-modal validity. Comprehensive evaluations show that the Gemini family consistently outperforms open-source systems, with Gemini 2.5 Pro achieving the best results. Among open models, Qwen3-Omni-30B approaches Gemini 2.0 Flash but remains far behind Gemini 2.5 Pro, primarily due to weaker audiovisual fusion rather than visual perception. We believe AV-SpeakerBench establishes a rigorous foundation for advancing fine-grained audiovisual reasoning in future multimodal systems.