TRUST-VL: An Explainable News Assistant for General Multimodal Misinformation Detection

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for misinformation detection typically target a single distortion type, exhibiting limited generalizability and poor interpretability. Method: This paper proposes TRUST-VL, a unified multimodal misinformation detection framework built on vision-language pretraining. It introduces a Question-Aware Visual Amplifier module and an instruction-tuning mechanism based on structured reasoning chains. To enable cross-task knowledge transfer, the authors hypothesize that joint training across distortion types facilitates knowledge sharing, and they construct TRUST-Instruct, a large-scale cross-modal instruction dataset of 198K samples supporting joint detection and interpretable reasoning over textual, visual, and cross-modal distortions. Contribution/Results: TRUST-VL achieves state-of-the-art performance on both in-domain and zero-shot benchmarks, improving generalizability across distortion types and enhancing decision transparency through faithful, stepwise reasoning.

📝 Abstract
Multimodal misinformation, encompassing textual, visual, and cross-modal distortions, poses an increasing societal threat that is amplified by generative AI. Existing methods typically focus on a single type of distortion and struggle to generalize to unseen scenarios. In this work, we observe that different distortion types share common reasoning capabilities while also requiring task-specific skills. We hypothesize that joint training across distortion types facilitates knowledge sharing and enhances the model's ability to generalize. To this end, we introduce TRUST-VL, a unified and explainable vision-language model for general multimodal misinformation detection. TRUST-VL incorporates a novel Question-Aware Visual Amplifier module, designed to extract task-specific visual features. To support training, we also construct TRUST-Instruct, a large-scale instruction dataset containing 198K samples featuring structured reasoning chains aligned with human fact-checking workflows. Extensive experiments on both in-domain and zero-shot benchmarks demonstrate that TRUST-VL achieves state-of-the-art performance, while also offering strong generalization and interpretability.
Problem

Research questions and friction points this paper is trying to address.

Detecting multimodal misinformation across text, image, and cross-modal distortions
Addressing generalization challenges in unseen misinformation scenarios
Providing explainable detection aligned with human fact-checking workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified vision-language model for multimodal misinformation detection
Question-Aware Visual Amplifier extracts task-specific visual features
Large-scale instruction dataset with structured reasoning chains
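The paper summarized here does not spell out the amplifier's internals; as a rough sketch, assuming the Question-Aware Visual Amplifier behaves like single-head cross-attention that re-weights visual patch features by their relevance to the task question (function names, shapes, and the random projections below are all hypothetical illustrations, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def question_aware_visual_amplifier(patch_feats, question_emb, W_q, W_k):
    """Hypothetical sketch: scale each visual patch feature by its
    attention weight with respect to a pooled question embedding.

    patch_feats:  (num_patches, d)  visual patch features
    question_emb: (d,)              pooled question embedding
    W_q, W_k:     (d, d)            learned projections (random here)
    """
    q = question_emb @ W_q                    # project the question, (d,)
    k = patch_feats @ W_k                     # project the patches, (num_patches, d)
    scores = k @ q / np.sqrt(q.shape[0])      # scaled dot-product scores
    weights = softmax(scores)                 # attention over patches, sums to 1
    # "Amplify" patches in proportion to their question relevance.
    return patch_feats * weights[:, None], weights

rng = np.random.default_rng(0)
d, n = 32, 16
patches = rng.normal(size=(n, d))
question = rng.normal(size=(d,))
W_q = rng.normal(size=(d, d)) / np.sqrt(d)
W_k = rng.normal(size=(d, d)) / np.sqrt(d)
amplified, w = question_aware_visual_amplifier(patches, question, W_q, W_k)
```

In a full model the amplified features would feed back into the vision-language backbone so that different questions (e.g. probing textual vs. cross-modal distortions) emphasize different visual evidence.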
Zehong Yan
National University of Singapore
Peng Qi
National University of Singapore
Wynne Hsu
National University of Singapore
Mong Li Lee
Professor of Computer Science, National University of Singapore
Database systems · Data management · Data analytics