DiFaR: Enhancing Multimodal Misinformation Detection with Diverse, Factual, and Relevant Rationales

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Rationales generated by large vision-language models (LVLMs) for multimodal misinformation detection suffer from three key limitations: insufficient diversity, poor factual consistency (i.e., severe hallucination), and weak relevance (e.g., inclusion of irrelevant or contradictory content). To address these issues, we propose DiFaR, a detector-agnostic framework that improves rationale diversity, factuality, and relevance by combining five chain-of-thought prompting strategies with a lightweight sentence-level post-hoc filtering module. The core innovation is a dual-dimension scoring mechanism that rates each rationale sentence for both factuality and relevance and uses these scores to guide sentence selection. Experiments across four mainstream benchmarks show that DiFaR outperforms four categories of baselines by up to 5.9% in rationale quality and boosts the performance of existing detectors by up to 8.7%, significantly enhancing both rationale fidelity and detection robustness.

📝 Abstract
Generating textual rationales from large vision-language models (LVLMs) to support trainable multimodal misinformation detectors has emerged as a promising paradigm. However, its effectiveness is fundamentally limited by three core challenges: (i) insufficient diversity in generated rationales, (ii) factual inaccuracies due to hallucinations, and (iii) irrelevant or conflicting content that introduces noise. We introduce DiFaR, a detector-agnostic framework that produces diverse, factual, and relevant rationales to enhance misinformation detection. DiFaR employs five chain-of-thought prompts to elicit varied reasoning traces from LVLMs and incorporates a lightweight post-hoc filtering module to select rationale sentences based on sentence-level factuality and relevance scores. Extensive experiments on four popular benchmarks demonstrate that DiFaR outperforms four baseline categories by up to 5.9% and boosts existing detectors by as much as 8.7%. Both automatic metrics and human evaluations confirm that DiFaR significantly improves rationale quality across all three dimensions.
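The post-hoc filtering step described in the abstract can be illustrated with a minimal sketch: keep a rationale sentence only when both its sentence-level factuality and relevance scores clear a threshold. The scores, thresholds, and example sentences below are placeholders for illustration, not the paper's actual scorers or data.

```python
def filter_rationale(scored_sentences, f_thresh=0.5, r_thresh=0.5):
    """Sentence-level post-hoc filter: keep a sentence only when both its
    factuality and relevance scores clear their thresholds."""
    return [text for text, fact, rel in scored_sentences
            if fact >= f_thresh and rel >= r_thresh]

# Each tuple: (sentence, factuality score, relevance score), e.g. produced
# by hypothetical scorers over one LVLM-generated rationale.
rationale = [
    ("The image shows a flooded street.", 0.9, 0.8),    # kept
    ("The Eiffel Tower was built in 1887.", 0.9, 0.1),  # factual but irrelevant
    ("The photo was taken on Mars.", 0.1, 0.7),         # relevant but hallucinated
]
print(filter_rationale(rationale))
# → ['The image shows a flooded street.']
```

Scoring factuality and relevance separately is what lets the filter drop both failure modes named in the abstract: hallucinated content (low factuality) and noisy off-topic content (low relevance).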
Problem

Research questions and friction points this paper is trying to address.

Addressing insufficient diversity in generated rationales
Mitigating factual inaccuracies from model hallucinations
Reducing irrelevant or conflicting content that introduces noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-thought prompts for diverse reasoning
Lightweight filtering module for factuality
Sentence-level scoring for relevance selection
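The diversity component above can be sketched as eliciting one rationale per chain-of-thought prompt style. The prompt texts and the `lvlm` callable below are hypothetical stand-ins, not the five strategies actually used in the paper.

```python
# Five illustrative chain-of-thought prompt styles (placeholders).
COT_STYLES = [
    "Describe the visual evidence step by step, then judge the claim.",
    "Check each named entity in the claim against the image step by step.",
    "Reason about time and place cues in the image step by step.",
    "Compare the claim's tone and framing with the image step by step.",
    "List points of agreement and disagreement step by step.",
]

def generate_diverse_rationales(lvlm, image, claim):
    """One rationale per prompting style; varied prompts push the LVLM
    toward varied reasoning traces."""
    return [lvlm(f"{style}\nClaim: {claim}", image) for style in COT_STYLES]

# Stand-in LVLM that just echoes which style produced each rationale.
fake_lvlm = lambda prompt, image: f"[rationale for: {prompt.splitlines()[0]}]"
rationales = generate_diverse_rationales(fake_lvlm, image=None, claim="...")
print(len(rationales))  # → 5
```

In the full framework these rationales would then be split into sentences and passed through the factuality/relevance filter before reaching the downstream detector.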