AI Summary
Existing methods typically model manually crafted and AI-generated multimodal misinformation separately, rendering them inadequate for real-world scenarios involving heterogeneous, unknown-type hybrid threats. Method: We propose UMFDet, a unified multimodal forgery detection framework, and introduce OmniFake, the first large-scale benchmark (127K samples) covering dual-source (human- and AI-generated) fake content. UMFDet unifies modeling of both forgery types via a category-aware Mixture-of-Experts Adapter (MoE-Adapter) for dynamic feature adaptation and incorporates an Attribution Chain mechanism that enables interpretable, stepwise reasoning. Built on a vision-language model backbone, it jointly optimizes detection and attribution. Contribution/Results: UMFDet achieves state-of-the-art performance on cross-type detection tasks, significantly outperforming specialized baselines. It demonstrates strong robustness to distribution shifts and generalization to unseen forgery types, establishing a scalable, explainable paradigm for multimodal misinformation governance.
Abstract
In recent years, detecting fake multimodal content on social media has drawn increasing attention. Two major forms of deception dominate: human-crafted misinformation (e.g., rumors and misleading posts) and AI-generated content produced by image synthesis models or vision-language models (VLMs). Although both share deceptive intent, they are typically studied in isolation. NLP research focuses on human-written misinformation, while the CV community targets AI-generated artifacts. As a result, existing models are often specialized for only one type of fake content. In real-world scenarios, however, the type of a multimodal post is usually unknown, limiting the effectiveness of such specialized systems. To bridge this gap, we construct the Omnibus Dataset for Multimodal News Deception (OmniFake), a comprehensive benchmark of 127K samples that integrates human-curated misinformation from existing resources with newly synthesized AI-generated examples. Based on this dataset, we propose Unified Multimodal Fake Content Detection (UMFDet), a framework designed to handle both forms of deception. UMFDet leverages a VLM backbone augmented with a Category-aware Mixture-of-Experts (MoE) Adapter to capture category-specific cues, and an attribution chain-of-thought mechanism that provides implicit reasoning guidance for locating salient deceptive signals. Extensive experiments demonstrate that UMFDet achieves robust and consistent performance across both misinformation types, outperforming specialized baselines and offering a practical solution for real-world multimodal deception detection.
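The routing idea behind the Category-aware MoE Adapter can be sketched as follows. This is a minimal, dependency-free illustration of category-conditioned expert gating, not UMFDet's actual implementation: the expert functions, gate logits, and category names here are invented for the example, and a real adapter would operate on VLM hidden states with learned parameters.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of gate logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

class CategoryAwareMoEAdapter:
    """Toy adapter: routes a feature vector through experts, with gate
    weights conditioned on the (predicted or assumed) content category."""

    def __init__(self, experts, gate_logits_per_category):
        self.experts = experts                 # list of callables: feature -> feature
        self.gates = gate_logits_per_category  # {category: [one logit per expert]}

    def __call__(self, feature, category):
        weights = softmax(self.gates[category])
        outputs = [expert(feature) for expert in self.experts]
        # Weighted sum of expert outputs = dynamic, category-specific adaptation
        return [sum(w * out[i] for w, out in zip(weights, outputs))
                for i in range(len(feature))]

# Two hypothetical experts: one tuned to human-crafted cues, one to AI artifacts
experts = [lambda f: [2.0 * x for x in f],   # "human-misinformation" expert
           lambda f: [x + 1.0 for x in f]]   # "AI-generated" expert
gates = {"human": [2.0, 0.0], "ai": [0.0, 2.0]}

adapter = CategoryAwareMoEAdapter(experts, gates)
adapted = adapter([1.0, 0.5], "human")  # gate favors the first expert
```

In the full model, the gate would be a small learned network over the fused image-text representation rather than a fixed lookup, so routing adapts even when the forgery type is unknown at inference time.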