Fact or Fake? Assessing the Role of Deepfake Detectors in Multimodal Misinformation Detection

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing deepfake detectors, which focus on pixel-level artifacts and struggle with semantically aligned image-text misinformation. The study systematically evaluates their utility in multimodal fact-checking and proposes a reasoning framework that integrates evidence retrieval with Multi-Agent Debate (MAD). The framework employs Monte Carlo Tree Search (MCTS) to guide external evidence acquisition, applies deliberative multi-agent reasoning, and incorporates mainstream deepfake detectors as auxiliary modules. Experiments reveal that standalone detectors achieve F1 scores of only 0.26–0.53, and that integrating their predictions into the pipeline further degrades performance by 0.04–0.08 F1. In contrast, the evidence-driven approach attains F1 scores of 0.81 and 0.55 on MMFakeBench and DGM4, respectively, demonstrating for the first time that pixel-level signals offer limited utility and may even harm performance due to non-causal authenticity assumptions, thereby underscoring the critical role of semantic understanding and external evidence.

📝 Abstract
In multimodal misinformation, deception usually arises not just from pixel-level manipulations in an image, but from the semantic and contextual claim jointly expressed by the image-text pair. Yet most deepfake detectors, engineered to detect pixel-level forgeries, do not account for claim-level meaning, despite their growing integration in automated fact-checking (AFC) pipelines. This raises a central scientific and practical question: Do pixel-level detectors contribute useful signal for verifying image-text claims, or do they instead introduce misleading authenticity priors that undermine evidence-based reasoning? We provide the first systematic analysis of deepfake detectors in the context of multimodal misinformation detection. Using two complementary benchmarks, MMFakeBench and DGM4, we evaluate: (1) state-of-the-art image-only deepfake detectors, (2) an evidence-driven fact-checking system that performs tool-guided retrieval via Monte Carlo Tree Search (MCTS) and engages in deliberative inference through Multi-Agent Debate (MAD), and (3) a hybrid fact-checking system that injects detector outputs as auxiliary evidence. Results across both benchmark datasets show that deepfake detectors offer limited standalone value, achieving F1 scores in the range of 0.26-0.53 on MMFakeBench and 0.33-0.49 on DGM4, and that incorporating their predictions into fact-checking pipelines consistently reduces performance by 0.04-0.08 F1 due to non-causal authenticity assumptions. In contrast, the evidence-centric fact-checking system achieves the highest performance, reaching F1 scores of approximately 0.81 on MMFakeBench and 0.55 on DGM4. Overall, our findings demonstrate that multimodal claim verification is driven primarily by semantic understanding and external evidence, and that pixel-level artifact signals do not reliably enhance reasoning over real-world image-text misinformation.
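The abstract describes tool-guided evidence retrieval via MCTS, where the system decides which external evidence source to query next. Below is a minimal, hypothetical sketch of that idea as a one-level UCT bandit over retrieval actions. The query names, the mocked reward model, and the `mcts_select_query` helper are illustrative assumptions only; the paper's actual tool set, reward signal, and tree depth are not specified here.

```python
import math
import random

class Node:
    """One candidate retrieval action with running visit/value statistics."""
    def __init__(self, action=None, parent=None):
        self.action = action
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def uct(child, parent_visits, c=1.4):
    """UCT score: mean reward plus an exploration bonus for rarely tried actions."""
    if child.visits == 0:
        return float("inf")  # force each action to be tried at least once
    return child.value / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits
    )

def mcts_select_query(queries, reward_fn, iterations=200, seed=0):
    """Repeatedly pick the highest-UCT action, simulate a retrieval,
    and back up the reward; return the most-visited action."""
    rng = random.Random(seed)
    root = Node()
    root.children = [Node(q, root) for q in queries]
    for _ in range(iterations):
        node = max(root.children, key=lambda ch: uct(ch, root.visits + 1))
        reward = reward_fn(node.action, rng)  # simulate evidence usefulness
        node.visits += 1
        node.value += reward
        root.visits += 1
    return max(root.children, key=lambda ch: ch.visits).action

# Mock reward model (pure assumption): pretend reverse-image search
# tends to surface the most useful evidence for an image-text claim.
def mock_reward(query, rng):
    base = {"reverse_image_search": 0.8, "web_search": 0.5, "caption_check": 0.3}
    return base[query] + rng.uniform(-0.1, 0.1)

best = mcts_select_query(
    ["reverse_image_search", "web_search", "caption_check"], mock_reward
)
print(best)  # → reverse_image_search
```

In the full pipeline the rewards would come from how well retrieved evidence supports or refutes the claim, and the selected evidence would then be passed to the debating agents rather than printed.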
Problem

Research questions and friction points this paper is trying to address.

multimodal misinformation
deepfake detection
fact-checking
image-text claims
authenticity priors
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal misinformation
deepfake detection
evidence-based fact-checking
Monte Carlo Tree Search
Multi-Agent Debate
Sharifuzzaman Sagar
The University of Western Australia, Perth, Australia
Mohammed Bennamoun
Winthrop Professor - University of Western Australia
Artificial Intelligence · Computer Vision · Deep Learning · Face Recognition · Biometrics
F. Boussaid
The University of Western Australia, Perth, Australia
Naeha Sharif
The University of Western Australia, Perth, Australia
Lian Xu
The University of Western Australia, Perth, Australia
Shaaban A. Sahmoud
Fatih Sultan Mehmet Vakif University, Istanbul, Turkey
Ali Kishk
Aljazeera Media Network Investigative Department, Doha, Qatar