🤖 AI Summary
This work addresses citation hallucination in retrieval-augmented generation (RAG) systems, a failure mode in which models cite sources that do not actually support their claims, by proposing the FACTUM framework. FACTUM introduces four mechanistic scores that quantify the contributions of the attention and feed-forward network pathways to generated content and assess their alignment with retrieved evidence. This analysis reveals, for the first time, a scale-dependent relationship between citation accuracy and model size, challenging the prevailing assumption that hallucinations stem solely from over-reliance on parametric knowledge. Leveraging these interpretability-driven signals, the proposed scoring system achieves up to a 37.5% improvement in AUC on citation-faithfulness detection, substantially outperforming current state-of-the-art methods.
📝 Abstract
Retrieval-Augmented Generation (RAG) models are critically undermined by citation hallucinations, a deceptive failure where a model cites a source that fails to support its claim. While existing work attributes hallucination to a simple over-reliance on parametric knowledge, we reframe this failure as an evolving, scale-dependent coordination failure between the Attention (reading) and Feed-Forward Network (recalling) pathways. We introduce FACTUM (Framework for Attesting Citation Trustworthiness via Underlying Mechanisms), a framework of four mechanistic scores: Contextual Alignment (CAS), Attention Sink Usage (BAS), Parametric Force (PFS), and Pathway Alignment (PAS). Our analysis reveals that correct citations are consistently marked by higher parametric force (PFS) and greater use of the attention sink (BAS) for information synthesis. Crucially, we find that "one-size-fits-all" theories are insufficient, as the signature of correctness evolves with scale: while the 3B model relies on high pathway alignment (PAS), our best-performing 8B detector identifies a shift toward a specialized strategy where pathways provide distinct, orthogonal information. By capturing this complex interplay, FACTUM outperforms state-of-the-art baselines by up to 37.5% in AUC. Our results demonstrate that high parametric force is constructive when successfully coordinated with the Attention pathway, paving the way for more nuanced and reliable RAG systems.