🤖 AI Summary
To address weak domain generalization, poor interpretability, and hallucination issues in multimodal fake news detection, this paper proposes an LLM-SLM collaborative multi-agent framework. Methodologically, it introduces the first evidence-aware multi-persona agent architecture, integrating reverse image search, knowledge graph path reasoning, and persuasion strategy analysis; designs a credibility fusion mechanism combining semantic similarity, domain-specific reliability, and temporal context; and incorporates an SLM-based complementary classifier to mitigate LLM hallucination. Experimentally, the framework achieves state-of-the-art performance across three benchmark datasets, with significant improvements in accuracy and F1 score, enhanced robustness under distribution shift, and transparent, traceable reasoning chains. It further demonstrates strong capability in detecting evolving fake news through interpretable, stepwise evidence aggregation.
📝 Abstract
The rapid proliferation of online misinformation poses significant risks to public trust, policy, and safety, necessitating reliable automated fake news detection. Existing methods often struggle with multimodal content, domain generalization, and explainability. We propose AMPEND-LS, an agentic multi-persona evidence-grounded framework with LLM-SLM synergy for multimodal fake news detection. AMPEND-LS integrates textual, visual, and contextual signals through a structured reasoning pipeline powered by LLMs, augmented with reverse image search, knowledge graph paths, and persuasion strategy analysis. To improve reliability, we introduce a credibility fusion mechanism combining semantic similarity, domain trustworthiness, and temporal context, as well as a complementary SLM classifier to mitigate LLM uncertainty and hallucinations. Extensive experiments across three benchmark datasets demonstrate that AMPEND-LS consistently outperforms state-of-the-art baselines in accuracy, F1 score, and robustness. Qualitative case studies further highlight its transparent reasoning and resilience against evolving misinformation. This work advances the development of adaptive, explainable, and evidence-aware systems for safeguarding online information integrity.
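The abstract does not specify how the credibility fusion mechanism combines its three signals. A minimal sketch of one plausible instantiation is shown below, assuming a weighted linear fusion with exponential temporal decay; the weights, the `half_life` parameter, and the `Evidence` fields are illustrative assumptions, not details from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Evidence:
    semantic_sim: float   # semantic similarity to the claim, in [0, 1]
    domain_score: float   # trustworthiness of the source domain, in [0, 1]
    age_days: float       # time elapsed since the evidence was published

def credibility_score(ev: Evidence,
                      w_sem: float = 0.5,
                      w_dom: float = 0.3,
                      w_time: float = 0.2,
                      half_life: float = 30.0) -> float:
    """Fuse semantic, domain, and temporal signals into one score in [0, 1].

    Temporal context is modeled as exponential decay: evidence loses half
    its temporal weight every `half_life` days (an assumed parameterization).
    """
    temporal = math.exp(-math.log(2.0) * ev.age_days / half_life)
    return w_sem * ev.semantic_sim + w_dom * ev.domain_score + w_time * temporal

def fuse(evidences: list[Evidence]) -> float:
    """Aggregate per-evidence scores; a simple mean over retrieved items."""
    return sum(credibility_score(e) for e in evidences) / len(evidences)
```

Under this sketch, fresh evidence from a reliable, semantically matching source scores near 1.0, while stale or off-topic evidence is down-weighted; the aggregated score could then gate whether the LLM's verdict is accepted or deferred to the SLM classifier.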