HART: Data-Driven Hallucination Attribution and Evidence-Based Tracing for Large Language Models

📅 2026-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models are prone to generating hallucinations, yet existing approaches lack fine-grained, structured modeling of the relationships among hallucination types, generation mechanisms, and external evidence, resulting in limited interpretability and traceability. This work proposes HART, a novel framework that formalizes hallucination tracing as a four-stage structured task encompassing span localization, mechanism attribution, evidence retrieval, and causal tracing. The authors also construct the first fine-grained dataset jointly annotated with hallucination types, error mechanisms, and counterfactual evidence. An end-to-end system built upon the HART framework significantly outperforms strong baselines such as BM25 and DPR on this dataset, demonstrating the effectiveness and generalizability of the proposed paradigm in hallucination attribution and evidence alignment.

📝 Abstract
Large language models (LLMs) have demonstrated remarkable performance in text generation and knowledge-intensive question answering. Nevertheless, they are prone to producing hallucinated content, which severely undermines their reliability in high-stakes application domains. Existing hallucination attribution approaches, whether based on external knowledge retrieval or internal model mechanisms, focus primarily on semantic similarity matching or representation-level discrimination. As a result, they struggle to establish span-level structured correspondences between hallucination types, the underlying error-generation mechanisms, and external factual evidence, limiting both the interpretability of hallucinated fragments and the traceability of supporting or opposing evidence. To address these limitations, we propose HART, a fine-grained hallucination attribution and evidence retrieval framework for large language models. HART formalizes hallucination tracing as a structured modeling task comprising four stages: span localization, mechanism attribution, evidence retrieval, and causal tracing. Building on this formulation, we construct the first structured dataset tailored for hallucination tracing, in which hallucination types, error mechanisms, and sets of counterfactual evidence are jointly annotated to enable causal-level interpretability evaluation. Experimental results on the proposed dataset show that HART substantially outperforms strong retrieval baselines, including BM25 and DPR, validating the effectiveness and generalization capability of the proposed tracing paradigm for hallucination analysis and evidence alignment.
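The four-stage pipeline named in the abstract (span localization → mechanism attribution → evidence retrieval → causal tracing) can be sketched as a simple composition of stage functions. This is a minimal illustrative sketch, not the authors' implementation: the stage interfaces, the mechanism label, and the data fields below are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class HallucinationTrace:
    """One traced hallucination; field names are illustrative assumptions."""
    span: tuple          # (start, end) character offsets of the hallucinated span
    mechanism: str       # hypothetical mechanism label, e.g. "entity_substitution"
    evidence: list = field(default_factory=list)  # counterfactual evidence passages
    cause: str = ""      # causal explanation linking the mechanism to the evidence

def trace_hallucinations(text, locate, attribute, retrieve, explain):
    """Run the four HART stages in order over one generated text.

    `locate`, `attribute`, `retrieve`, and `explain` are caller-supplied
    stage models; their signatures here are assumptions for illustration.
    """
    traces = []
    for span in locate(text):                             # 1. span localization
        mechanism = attribute(text, span)                 # 2. mechanism attribution
        evidence = retrieve(text, span)                   # 3. evidence retrieval
        cause = explain(text, span, mechanism, evidence)  # 4. causal tracing
        traces.append(HallucinationTrace(span, mechanism, evidence, cause))
    return traces

# Toy usage with stub stage functions on a hallucinated sentence:
text = "The Eiffel Tower is in Berlin."
traces = trace_hallucinations(
    text,
    locate=lambda t: [(t.index("Berlin"), t.index("Berlin") + len("Berlin"))],
    attribute=lambda t, s: "entity_substitution",
    retrieve=lambda t, s: ["The Eiffel Tower is in Paris."],
    explain=lambda t, s, m, e: "span contradicted by retrieved evidence",
)
```

Separating the stages behind plain function interfaces mirrors the paper's framing of tracing as a structured task: each stage can be evaluated or swapped (e.g. a BM25 vs. DPR retriever in stage 3) independently of the others.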
Problem

Research questions and friction points this paper is trying to address.

hallucination
large language models
evidence tracing
interpretability
attribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

hallucination attribution
evidence retrieval
structured tracing
causal interpretability
large language models
Shize Liang
Faculty of Computing, Harbin Institute of Technology
Hongzhi Wang