🤖 AI Summary
This study addresses the subtle yet pervasive issue of omission-based misinformation in social media news previews—typically image-caption pairs—where critical contextual information is omitted, leading to misleading impressions. While prior work has largely overlooked systematic modeling of such omissive deception, this paper introduces a novel multi-stage comprehension simulation pipeline, establishes the MM-Misleading benchmark, and proposes OMGuard, a framework integrating interpretation-aware fine-tuning with rationale-guided caption rewriting for interpretable detection and correction. Experiments demonstrate that OMGuard enables an 8B-parameter model to achieve detection accuracy on par with a 235B multimodal large language model and substantially improves end-to-end correction performance. Furthermore, the findings reveal that text-only correction strategies are insufficient for image-driven misinformation, underscoring the necessity of explicit visual intervention.
📝 Abstract
Even when factually correct, social-media news previews (image-headline pairs) can induce interpretation drift: by selectively omitting crucial context, they lead readers to form judgments that diverge from what the full article conveys. This covert harm is harder to detect than explicit misinformation yet remains underexplored. To address this gap, we develop a multi-stage pipeline that disentangles and simulates preview-based versus context-based understanding, enabling construction of the MM-Misleading benchmark. Using this benchmark, we systematically evaluate open-source LVLMs and uncover pronounced blind spots in omission-based misleadingness detection. We further propose OMGuard, which integrates (1) Interpretation-Aware Fine-Tuning, which improves multimodal misleadingness detection, and (2) Rationale-Guided Misleading Content Correction, which uses explicit rationales to guide headline rewriting and reduce misleading impressions. Experiments show that OMGuard lifts an 8B model's detection accuracy to match a 235B LVLM and delivers markedly stronger end-to-end correction. Further analysis reveals that misleadingness typically stems from local narrative shifts (e.g., missing background) rather than global frame changes, and identifies image-driven scenarios where text-only correction fails, highlighting the necessity of visual interventions.