🤖 AI Summary
This work addresses the challenge of detecting semantically coherent yet visually deceptive multimodal misinformation generated by multimodal large language models (MLLMs). We identify two key bottlenecks: (1) existing methods underestimate the risk of dynamic, MLLM-generated deceptive narratives, focusing instead on rule-based textual manipulation; and (2) reliance on manually induced misalignment artifacts lacks realism and semantic consistency. To bridge this gap, we introduce the first MLLM-Driven Synthetic Multimodal (MDSM) dataset, built by combining controllable image editing with MLLM-driven text generation, and propose the Artifact-aware Manipulation Diagnosis (AMD) framework. AMD introduces two innovations, an Artifact Pre-perception Encoding strategy and Manipulation-Oriented Reasoning, on top of multi-stage vision-language alignment modeling. Experiments demonstrate that AMD significantly improves detection accuracy for high-fidelity MLLM-generated deception and generalizes well across misinformation types and MLLM architectures.
📝 Abstract
The detection and grounding of multimedia manipulation has emerged as a critical challenge in combating AI-generated disinformation. While existing methods have made progress in recent years, we identify two fundamental limitations in current approaches: (1) Underestimation of MLLM-driven deception risk: prevailing techniques primarily address rule-based text manipulations, yet fail to account for sophisticated misinformation synthesized by multimodal large language models (MLLMs), which can dynamically generate semantically coherent, contextually plausible yet deceptive narratives conditioned on manipulated images; (2) Unrealistic misalignment artifacts: currently studied scenarios rely on artificially misaligned content that lacks semantic coherence, rendering it easily detectable. To address these gaps holistically, we propose a new adversarial pipeline that leverages MLLMs to generate high-risk disinformation. Our approach begins with constructing the MLLM-Driven Synthetic Multimodal (MDSM) dataset, in which images are first altered using state-of-the-art editing techniques and then paired with MLLM-generated deceptive texts that maintain semantic consistency with the visual manipulations. Building on this foundation, we present the Artifact-aware Manipulation Diagnosis via MLLM (AMD) framework, featuring two key innovations, an Artifact Pre-perception Encoding strategy and Manipulation-Oriented Reasoning, to tame MLLMs for the MDSM problem. Comprehensive experiments validate our framework's superior generalization capabilities as a unified architecture for detecting MLLM-powered multimodal deceptions.
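The two-stage generation pipeline described above (first apply a controllable edit to the image, then condition an MLLM on that edit to produce a semantically consistent deceptive caption) can be sketched as follows. This is a minimal illustration only: the paper does not publish this code, and both model calls are stand-in stubs with hypothetical names, not real editing or MLLM APIs.

```python
# Hedged sketch of the MDSM data-generation pipeline from the abstract.
# `edit_image` and `generate_deceptive_text` are placeholder stubs; in the
# actual pipeline they would wrap an instruction-guided image editor and an
# MLLM, respectively.

from dataclasses import dataclass


@dataclass
class MDSMSample:
    image: str             # identifier of the edited (manipulated) image
    text: str              # MLLM-generated deceptive caption
    edit_instruction: str  # the controllable edit applied to the source image


def edit_image(src_image: str, instruction: str) -> str:
    """Placeholder for a state-of-the-art controllable image editor."""
    return f"{src_image}::edited({instruction})"


def generate_deceptive_text(edited_image: str, instruction: str) -> str:
    """Placeholder for an MLLM prompted to describe the edited image so the
    narrative stays semantically consistent with the visual manipulation."""
    return f"Caption consistent with '{instruction}' on {edited_image}"


def build_sample(src_image: str, instruction: str) -> MDSMSample:
    # Stage 1: manipulate the image; Stage 2: generate matching deceptive text.
    edited = edit_image(src_image, instruction)
    text = generate_deceptive_text(edited, instruction)
    return MDSMSample(image=edited, text=text, edit_instruction=instruction)


sample = build_sample("news_photo.jpg", "replace the person's face")
print(sample.text)
```

The key design point the abstract emphasizes is that the text is generated *conditioned on* the manipulated image, so image and caption remain mutually consistent rather than artificially misaligned.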