Towards Unified Multimodal Misinformation Detection in Social Media: A Benchmark Dataset and Baseline

πŸ“… 2025-09-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing methods typically model manually crafted or AI-generated multimodal misinformation separately, rendering them inadequate for real-world scenarios involving heterogeneous, unknown-type hybrid threats. Method: We propose UMFDet, a unified multimodal forgery detection framework, and introduce OmniFakeβ€”the first large-scale benchmark (127K samples) covering dual-source (human- and AI-generated) fake content. UMFDet unifies modeling of both forgery types via a category-aware Mixture-of-Experts Adapter (MoE-Adapter) for dynamic feature adaptation, and incorporates an Attribution Chain mechanism to enable interpretable, stepwise reasoning. Built upon vision-language models, it jointly optimizes detection and attribution. Contribution/Results: UMFDet achieves state-of-the-art performance across cross-type detection tasks, significantly outperforming specialized baselines. It demonstrates superior robustness against distribution shifts and strong generalization to unseen forgery types, establishing a new paradigm for scalable, explainable multimodal misinformation governance.

πŸ“ Abstract
In recent years, detecting fake multimodal content on social media has drawn increasing attention. Two major forms of deception dominate: human-crafted misinformation (e.g., rumors and misleading posts) and AI-generated content produced by image synthesis models or vision-language models (VLMs). Although both share deceptive intent, they are typically studied in isolation. NLP research focuses on human-written misinformation, while the CV community targets AI-generated artifacts. As a result, existing models are often specialized for only one type of fake content. In real-world scenarios, however, the type of a multimodal post is usually unknown, limiting the effectiveness of such specialized systems. To bridge this gap, we construct the Omnibus Dataset for Multimodal News Deception (OmniFake), a comprehensive benchmark of 127K samples that integrates human-curated misinformation from existing resources with newly synthesized AI-generated examples. Based on this dataset, we propose Unified Multimodal Fake Content Detection (UMFDet), a framework designed to handle both forms of deception. UMFDet leverages a VLM backbone augmented with a Category-aware Mixture-of-Experts (MoE) Adapter to capture category-specific cues, and an attribution chain-of-thought mechanism that provides implicit reasoning guidance for locating salient deceptive signals. Extensive experiments demonstrate that UMFDet achieves robust and consistent performance across both misinformation types, outperforming specialized baselines and offering a practical solution for real-world multimodal deception detection.
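The abstract describes a Category-aware MoE Adapter that routes features through specialized experts depending on the (predicted) deception category. Since no code accompanies this summary, here is a minimal, dependency-free sketch of that routing idea; the dimensions, gating scheme, and residual form are all assumptions for illustration, not the paper's implementation:

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class Expert:
    """Tiny linear 'expert', a stand-in for a real adapter MLP."""
    def __init__(self, dim, seed):
        rng = random.Random(seed)
        self.w = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(dim)]

    def __call__(self, x):
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in self.w]

class CategoryAwareMoEAdapter:
    """Routes a feature vector through experts; the gate is conditioned on
    both the feature and a 2-way category signal (human- vs AI-generated),
    so different forgery types can activate different experts."""
    def __init__(self, dim, n_experts=4, seed=0):
        rng = random.Random(seed)
        self.experts = [Expert(dim, seed + i + 1) for i in range(n_experts)]
        # Gate weights act on the concatenation [feature ; category logits].
        self.gate = [[rng.uniform(-0.1, 0.1) for _ in range(dim + 2)]
                     for _ in range(n_experts)]

    def __call__(self, x, category_logits):
        gate_in = x + category_logits  # concatenate feature with category signal
        scores = [sum(w * v for w, v in zip(row, gate_in)) for row in self.gate]
        weights = softmax(scores)
        outs = [expert(x) for expert in self.experts]
        # Residual adapter: input plus the gate-weighted expert mixture.
        return [xi + sum(w * o[i] for w, o in zip(weights, outs))
                for i, xi in enumerate(x)]

adapter = CategoryAwareMoEAdapter(dim=8)
features = [0.5] * 8
# category_logits leaning toward "human-crafted" misinformation
adapted = adapter(features, category_logits=[2.0, -1.0])
print(len(adapted))  # 8
```

The gate conditioning on category logits is what makes the mixture "category-aware": the same feature vector can be adapted differently depending on whether the post looks human-crafted or AI-generated.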
Problem

Research questions and friction points this paper is trying to address.

Detecting both human-crafted and AI-generated multimodal misinformation with one model
Existing approaches study the two forgery types in isolation, splitting effort between NLP and CV communities
Real-world posts arrive with unknown deception types, limiting specialized detectors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified multimodal framework detects both human and AI deception
Category-aware Mixture-of-Experts adapter captures category-specific cues
Attribution chain-of-thought mechanism locates salient deceptive signals
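The attribution chain-of-thought mechanism guides the VLM to reason step by step about where the deceptive signal lies before committing to a verdict. A minimal sketch of how such a prompt might be structured; the wording, step names, and function are hypothetical, not taken from the paper:

```python
# Illustrative prompt template: the exact steps and phrasing are assumptions.
ATTRIBUTION_CHAIN_PROMPT = """You are a multimodal misinformation analyst.
Examine the image and the accompanying text, then reason step by step:
1. Describe what the image actually shows.
2. Check the text's claims against the image and common knowledge.
3. Look for synthesis artifacts (textures, hands, rendered text, lighting).
4. Attribute the most likely deception source: human-crafted, AI-generated, or none.
Finally, output a verdict, REAL or FAKE, with the attributed category.

Text: {caption}
"""

def build_attribution_prompt(caption: str) -> str:
    """Fill the template for one post; the VLM backbone would consume
    this prompt together with the post's image."""
    return ATTRIBUTION_CHAIN_PROMPT.format(caption=caption)

prompt = build_attribution_prompt("Breaking: city hall destroyed overnight")
```

Joint optimization of the final verdict and the intermediate attribution steps is what lets the model both detect and explain, rather than emitting an unexplained label.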