Fact-Checking with Contextual Narratives: Leveraging Retrieval-Augmented LLMs for Social Media Analysis

📅 2025-04-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the proliferation of misinformation on social media and the challenge of heterogeneous, often contradictory multimodal evidence (textual and visual), this paper proposes CRAVE, a retrieval-augmented large language model (LLM) framework for fact-checking. CRAVE retrieves multimodal evidence from diverse sources, clusters it into coherent narratives, and employs an LLM-based judge to assess veracity, delivering verdicts explained by evidence summaries. An agent-based refinement mechanism balances consistency and diversity in the evidence representation, and the resulting verdicts are interpretable and auditable. Evaluation across multiple benchmarks shows that CRAVE outperforms state-of-the-art methods in retrieval precision, clustering quality, and verdict accuracy, supporting its robustness, interpretability, and practical utility as a decision-support tool for automated fact-checking.

📝 Abstract
We propose CRAVE (Cluster-based Retrieval Augmented Verification with Explanation), a novel framework that integrates retrieval-augmented Large Language Models (LLMs) with clustering techniques to address fact-checking challenges on social media. CRAVE automatically retrieves multimodal evidence from diverse, often contradictory, sources. Evidence is clustered into coherent narratives and evaluated via an LLM-based judge to deliver fact-checking verdicts explained by evidence summaries. By synthesizing evidence from both text and image modalities and incorporating agent-based refinement, CRAVE ensures consistency and diversity in evidence representation. Comprehensive experiments demonstrate CRAVE's efficacy in retrieval precision, clustering quality, and judgment accuracy, showcasing its potential as a robust decision-support tool for fact-checkers.
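The clustering step described in the abstract — grouping retrieved evidence items into coherent narratives — can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: it assumes evidence has already been embedded into vectors (e.g. by a cross-modal encoder), and uses a simple greedy cosine-similarity grouping in place of whatever clustering algorithm CRAVE actually employs.

```python
import numpy as np

def cluster_evidence(embeddings, threshold=0.8):
    """Greedy single-pass clustering of evidence embeddings by cosine
    similarity. Each cluster of indices would correspond to one
    candidate narrative; `threshold` controls how tightly evidence
    must agree to be grouped (hypothetical parameter)."""
    clusters = []   # list of index lists, one per narrative
    centroids = []  # running (unnormalized) mean embedding per cluster
    for i, v in enumerate(np.asarray(embeddings, dtype=float)):
        v = v / np.linalg.norm(v)
        best, best_sim = None, threshold
        for c, mu in enumerate(centroids):
            sim = float(v @ (mu / np.linalg.norm(mu)))
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append([i])
            centroids.append(v.copy())
        else:
            clusters[best].append(i)
            centroids[best] += v
    return clusters
```

For example, four evidence vectors forming two clear directions yield two narratives: `cluster_evidence([[1, 0], [0.99, 0.1], [0, 1], [0.1, 0.99]])` returns `[[0, 1], [2, 3]]`.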
Problem

Research questions and friction points this paper is trying to address.

Addressing fact-checking challenges on social media
Retrieving multimodal evidence from diverse sources
Ensuring consistency and diversity in evidence representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates retrieval-augmented LLMs with clustering techniques
Retrieves multimodal evidence from diverse contradictory sources
Uses LLM-based judge for fact-checking verdicts with summaries
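The final step listed above — an LLM-based judge producing a verdict with evidence summaries — can be sketched as a toy aggregation. This is a hypothetical stand-in for the paper's LLM judge: it assumes each clustered narrative carries a `summary`, a `stance` label, and an evidence count used as its weight, and it simply picks the weightiest stance while assembling the summaries into an explanation.

```python
from collections import Counter

def adjudicate(narratives):
    """Toy verdict aggregation over clustered narratives (stand-in for
    an LLM judge). `narratives` is a list of dicts with keys
    'summary', 'stance', and 'num_evidence' (assumed schema)."""
    weights = Counter()
    for n in narratives:
        weights[n["stance"]] += n["num_evidence"]
    stance, _ = weights.most_common(1)[0]
    explanation = "; ".join(
        f'{n["summary"]} ({n["stance"]}, n={n["num_evidence"]})'
        for n in narratives
    )
    return {"verdict": stance, "explanation": explanation}
```

In CRAVE itself the judge is an LLM reasoning over the narrative summaries rather than a weighted vote, but the input/output shape — narratives in, explained verdict out — is the same idea.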
Arka Ujjal Dey
University of Surrey, United Kingdom
Muhammad Junaid Awan
University of Surrey, United Kingdom
Georgia Channing
University of Oxford, United Kingdom
Christian Schröder de Witt
University of Oxford, United Kingdom
John Collomosse
Sr. Principal Scientist, Adobe Research. Professor of AI & Computer Vision, University of Surrey.
Content Authenticity · Computer Vision · DLT/Blockchain · Artificial Intelligence