AI Summary
Existing text-to-audiovisual generation models lack a unified, fine-grained evaluation framework, making it difficult to assess multimodal semantic consistency in real-world scenarios. To address this gap, this work proposes the first task-oriented, multi-granularity benchmark, comprising 11 categories of high-quality, realistic prompts, and introduces an automatic evaluation framework that integrates lightweight expert models with multimodal large language models (MLLMs) to comprehensively assess generation quality, from perceptual fidelity to semantic controllability. Experimental results reveal that while current models exhibit strong audiovisual aesthetics, they suffer from systematic deficiencies in critical dimensions such as text rendering, speech coherence, physical reasoning, and musical pitch control, highlighting significant shortcomings in semantic reliability.
Abstract
Text-to-Audio-Video (T2AV) generation is rapidly becoming a core interface for media creation, yet its evaluation remains fragmented. Existing benchmarks largely assess audio and video in isolation or rely on coarse embedding similarity, failing to capture the fine-grained joint correctness required by realistic prompts. We introduce AVGen-Bench, a task-driven benchmark for T2AV generation featuring high-quality prompts across 11 real-world categories. To support comprehensive assessment, we propose a multi-granular evaluation framework that combines lightweight specialist models with Multimodal Large Language Models (MLLMs), enabling evaluation from perceptual quality to fine-grained semantic controllability. Our evaluation reveals a pronounced gap between strong audio-visual aesthetics and weak semantic reliability, including persistent failures in text rendering, speech coherence, physical reasoning, and a universal breakdown in musical pitch control. Code and benchmark resources are available at http://aka.ms/avgenbench.