What's Left Unsaid? Detecting and Correcting Misleading Omissions in Multimodal News Previews

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the subtle yet pervasive issue of omission-based misinformation in social media news previews—typically image-caption pairs—where critical contextual information is omitted, leading to misleading impressions. While prior work has largely overlooked systematic modeling of such omissive deception, this paper introduces a novel multi-stage comprehension simulation pipeline, establishes the MM-Misleading benchmark, and proposes OMGuard, a framework integrating explanation-aware fine-tuning with rationale-guided caption rewriting for interpretable detection and correction. Experiments demonstrate that OMGuard enables an 8B-parameter model to achieve detection accuracy on par with a 235B multimodal large language model and substantially improves end-to-end correction performance. Furthermore, the findings reveal that text-only correction strategies are insufficient for image-driven misinformation, underscoring the necessity of explicit visual intervention.

📝 Abstract
Even when factually correct, social-media news previews (image-headline pairs) can induce interpretation drift: by selectively omitting crucial context, they lead readers to form judgments that diverge from what the full article conveys. This covert harm is harder to detect than explicit misinformation yet remains underexplored. To address this gap, we develop a multi-stage pipeline that disentangles and simulates preview-based versus context-based understanding, enabling construction of the MM-Misleading benchmark. Using this benchmark, we systematically evaluate open-source LVLMs and uncover pronounced blind spots in omission-based misleadingness detection. We further propose OMGuard, which integrates (1) Interpretation-Aware Fine-Tuning, which improves multimodal misleadingness detection, and (2) Rationale-Guided Misleading Content Correction, which uses explicit rationales to guide headline rewriting and reduce misleading impressions. Experiments show that OMGuard lifts an 8B model's detection accuracy to match a 235B LVLM and delivers markedly stronger end-to-end correction. Further analysis reveals that misleadingness typically stems from local narrative shifts (e.g., missing background) rather than global frame changes, and identifies image-driven scenarios where text-only correction fails, highlighting the necessity of visual interventions.
Problem

Research questions and friction points this paper is trying to address.

misleading omissions
multimodal news previews
interpretation drift
context omission
covert misinformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

omission-based misleadingness
multimodal news previews
interpretation drift
rationale-guided correction
MM-Misleading benchmark
Fanxiao Li
School of Information Science and Engineering, Yunnan University
Jiaying Wu
National University of Singapore
Natural Language Processing · Data Mining · Mis/Disinformation · Social Computing
Tingchao Fu
School of Information Science and Engineering, Yunnan University
Dayang Li
School of Information Science and Engineering, Yunnan University
Herun Wan
Xi'an Jiaotong University
Network Analysis · Large Language Models
Wei Zhou
Yunnan-Malaya Institute (School of Engineering), Yunnan University
Min-Yen Kan
National University of Singapore