FACTUM: Mechanistic Detection of Citation Hallucination in Long-Form RAG

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of citation hallucination in retrieval-augmented generation (RAG) systems—where models erroneously cite sources that do not support their claims—by proposing the FACTUM framework. FACTUM introduces four mechanistic scoring metrics that quantify the contributions of attention and feedforward network pathways to generated content and assess their alignment with retrieved evidence. This approach reveals, for the first time, a dynamic relationship between citation accuracy and model scale, challenging the prevailing assumption that hallucinations stem solely from overreliance on parametric knowledge. Leveraging interpretability-driven analyses, the proposed scoring system achieves up to a 37.5% improvement in AUC on citation faithfulness detection tasks, substantially outperforming current state-of-the-art methods.

📝 Abstract
Retrieval-Augmented Generation (RAG) models are critically undermined by citation hallucinations, a deceptive failure where a model cites a source that fails to support its claim. While existing work attributes hallucination to a simple over-reliance on parametric knowledge, we reframe this failure as an evolving, scale-dependent coordination failure between the Attention (reading) and Feed-Forward Network (recalling) pathways. We introduce FACTUM (Framework for Attesting Citation Trustworthiness via Underlying Mechanisms), a framework of four mechanistic scores: Contextual Alignment (CAS), Attention Sink Usage (BAS), Parametric Force (PFS), and Pathway Alignment (PAS). Our analysis reveals that correct citations are consistently marked by higher parametric force (PFS) and greater use of the attention sink (BAS) for information synthesis. Crucially, we find that "one-size-fits-all" theories are insufficient, as the signature of correctness evolves with scale: while the 3B model relies on high pathway alignment (PAS), our best-performing 8B detector identifies a shift toward a specialized strategy where pathways provide distinct, orthogonal information. By capturing this complex interplay, FACTUM outperforms state-of-the-art baselines by up to 37.5% in AUC. Our results demonstrate that high parametric force is constructive when successfully coordinated with the Attention pathway, paving the way for more nuanced and reliable RAG systems.
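The abstract describes scoring citations with mechanistic signals and evaluating the resulting detector by AUC. As a minimal illustrative sketch only — the paper's actual CAS/BAS/PFS/PAS formulas are not given here, and the `alignment_score` below is a hypothetical stand-in — one could score a citation by the cosine similarity between a generated token's hidden state and an attention-weighted pooling of retrieved-context states, then rank citations by score and measure AUC against faithfulness labels:

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def alignment_score(token_state, context_states, attn_weights):
    # Hypothetical contextual-alignment-style score (NOT the paper's CAS):
    # cosine similarity between the generated token's hidden state and the
    # attention-weighted mean of the retrieved-context hidden states.
    dim = len(token_state)
    pooled = [sum(w * s[i] for w, s in zip(attn_weights, context_states))
              for i in range(dim)]
    return cosine(token_state, pooled)

def auc(scores, labels):
    # Area under the ROC curve via pairwise comparison:
    # the probability that a faithful citation (label 1) outscores an
    # unfaithful one (label 0); ties count as 0.5.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy usage with made-up scores and labels: a perfect ranking yields AUC 1.0.
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # → 1.0
```

In practice a detector like FACTUM's would combine several such scores (e.g. via a learned classifier) rather than rank on one signal alone; the sketch above only makes the AUC-based evaluation concrete.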
Problem

Research questions and friction points this paper is trying to address.

citation hallucination
Retrieval-Augmented Generation
RAG
hallucination detection
trustworthiness
Innovation

Methods, ideas, or system contributions that make the work stand out.

citation hallucination
mechanistic interpretability
Retrieval-Augmented Generation
pathway alignment
model scaling
Maxime Dassen
University of Amsterdam, Amsterdam, The Netherlands
Rebecca Kotula
Department of Defense, Washington D.C., USA
Kenton Murray
Research Scientist, Johns Hopkins University
Machine Learning, Natural Language Processing, Machine Translation, Semantics, Neural Networks
Andrew Yates
Johns Hopkins University, Human Language Technology Center of Excellence
Information Retrieval, NLP, AI
Dawn J. Lawrie
HLTCOE, Johns Hopkins University, Baltimore, Maryland, USA
Efsun Kayi
Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland, USA
James Mayfield
Johns Hopkins University
information retrieval, information extraction, human language technologies
Kevin Duh
Johns Hopkins University
Natural Language Processing, Machine Learning