🤖 AI Summary
Existing audio-visual (AV) forgery detection benchmarks are limited to DeepFake-style manipulations and coarse-grained annotations, and fail to reflect the diversity and complexity of real-world forgery scenarios. To address this, we introduce AVFakeBench, the first comprehensive AV forgery detection benchmark covering both human and general subjects. It comprises 12K carefully curated audio-video questions spanning seven forgery types and four levels of annotations, enabling evaluation across binary judgment, forgery-type classification, forgery detail selection, and explanatory reasoning. Forgeries are produced by a multi-stage hybrid generation framework that pairs proprietary models for task planning with expert generative models for precise manipulation. Evaluating 11 audio-video large language models (AV-LMMs) and two prevalent detection methods, we find that AV-LMMs show strong potential as emerging forgery detectors yet exhibit notable weaknesses in fine-grained perception and reasoning.
📝 Abstract
The threat of Audio-Video (AV) forgery is rapidly evolving beyond human-centric deepfakes to include more diverse manipulations across complex natural scenes. However, existing benchmarks remain confined to DeepFake-based forgeries and single-granularity annotations, and thus fail to capture the diversity and complexity of real-world forgery scenarios. To address this, we introduce AVFakeBench, the first comprehensive audio-video forgery detection benchmark that spans rich forgery semantics across both human and general subjects. AVFakeBench comprises 12K carefully curated audio-video questions, covering seven forgery types and four levels of annotations. To ensure high-quality and diverse forgeries, we propose a multi-stage hybrid forgery framework that integrates proprietary models for task planning with expert generative models for precise manipulation. The benchmark establishes a multi-task evaluation framework covering binary judgment, forgery-type classification, forgery detail selection, and explanatory reasoning. We evaluate 11 Audio-Video Large Language Models (AV-LMMs) and two prevalent detection methods on AVFakeBench, demonstrating the potential of AV-LMMs as emerging forgery detectors while revealing their notable weaknesses in fine-grained perception and reasoning.
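
To make the four-level task structure concrete, here is a minimal sketch of how an AVFakeBench-style sample and its multi-task scoring might be represented. The field names (`binary_label`, `forgery_type`, `detail_choice`) and the exact-match scoring are illustrative assumptions for exposition, not the benchmark's actual schema or evaluation protocol.

```python
# Illustrative sketch only: field names and scoring rules below are
# assumptions for exposition, not AVFakeBench's actual schema or protocol.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AVSample:
    """One audio-video question with four-level annotations (hypothetical layout)."""
    video_path: str
    audio_path: str
    binary_label: bool               # Level 1: real (False) or forged (True)
    forgery_type: Optional[str]      # Level 2: one of seven categories; None if real
    detail_choice: Optional[str]     # Level 3: correct option for detail selection
    explanation: Optional[str]       # Level 4: reference explanatory reasoning


def evaluate(samples: list[AVSample], predictions: dict[str, dict]) -> dict[str, float]:
    """Score predictions per task level with exact match for levels 1-3.

    `predictions` maps each sample's video_path to a dict with keys
    'binary', 'type', and 'detail'. Explanatory reasoning (level 4) is
    omitted here, since free-form answers need a human or LLM judge.
    """
    correct = {"binary": 0, "type": 0, "detail": 0}
    counts = {"binary": 0, "type": 0, "detail": 0}
    for s in samples:
        pred = predictions[s.video_path]
        counts["binary"] += 1
        correct["binary"] += pred["binary"] == s.binary_label
        if s.forgery_type is not None:   # levels 2-3 apply to forged samples only
            counts["type"] += 1
            correct["type"] += pred["type"] == s.forgery_type
            counts["detail"] += 1
            correct["detail"] += pred["detail"] == s.detail_choice
    return {task: correct[task] / max(counts[task], 1) for task in correct}
```

In the benchmark itself, the explanatory-reasoning task would plausibly be graded by human raters or an LLM judge rather than exact match, so the sketch above covers only the three objectively scorable levels.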