🤖 AI Summary
Current AIGC detection benchmarks are largely limited to single-modal content or face-specific deepfakes; they lack comprehensive, general-purpose multimodal datasets covering diverse visual styles (realistic vs. anime), content categories (humans, animals, objects, and scenes), and audio-visual modality combinations, which hinders the development of trustworthy detection systems. To address this gap, we introduce MVAD, the first large-scale, general-purpose audio-visual AIGC detection dataset. MVAD systematically defines and constructs three realistic audio-visual forgery paradigms: cross-modal generation, audio-driven talking-head synthesis, and text-to-video generation. These paradigms span four visual content types and four audio-visual modality configurations. All samples are synthesized with state-of-the-art generative models, rigorously curated by human experts, and evaluated across multiple quality dimensions to ensure high fidelity and semantic diversity. MVAD fills the data void in non-face-centric multimodal AIGC detection and advances AIGC authentication from unimodal to general multimodal paradigms.
📝 Abstract
The rapid advancement of AI-generated multimodal video-audio content has raised significant concerns regarding information security and content authenticity. Existing synthetic video datasets predominantly focus on the visual modality alone, while the few that incorporate audio are largely confined to facial deepfakes, a limitation that fails to address the expanding landscape of general multimodal AI-generated content and substantially impedes the development of trustworthy detection systems. To bridge this critical gap, we introduce the Multimodal Video-Audio Dataset (MVAD), the first comprehensive dataset specifically designed for detecting AI-generated multimodal video-audio content. Our dataset exhibits three key characteristics: (1) genuine multimodality, with samples generated according to three realistic video-audio forgery patterns; (2) high perceptual quality, achieved through diverse state-of-the-art generative models; and (3) comprehensive diversity, spanning realistic and anime visual styles, four content categories (humans, animals, objects, and scenes), and four video-audio multimodal data types. Our dataset will be available at https://github.com/HuMengXue0104/MVAD.