🤖 AI Summary
This work addresses the persistent challenge of hallucinations in large language models within retrieval-augmented generation (RAG), where existing approaches lack fine-grained and interpretable detection capabilities. The authors propose RT4CHART, a novel framework that introduces, for the first time, a hierarchical, claim-level retroactive verification mechanism. It decomposes model outputs into atomic claims and validates each against the retrieved context, classifying the claim as entailed, contradicted, or baseless while explicitly extracting supporting evidence. Evaluated on RAGTruth++, RT4CHART achieves an F1 score of 0.776, outperforming the strongest baseline by 83%; on RAGTruth-Enhance, it attains a span-level F1 of 47.5%. Re-annotation reveals that current benchmarks underestimate hallucination rates by up to 1.68×, underscoring the framework's superior capability in fine-grained, interpretable hallucination detection.
📝 Abstract
Large language models (LLMs) continue to hallucinate in retrieval-augmented generation (RAG), producing claims that are unsupported by or conflict with the retrieved context. Detecting such errors remains challenging when faithfulness is evaluated solely with respect to the retrieved context. Existing approaches either provide coarse-grained, answer-level scores or focus on open-domain factuality, often lacking fine-grained, evidence-grounded diagnostics.
We present RT4CHART, a retromorphic testing framework for context-faithfulness assessment. RT4CHART decomposes model outputs into independently verifiable claims and performs hierarchical, local-to-global verification against the retrieved context. Each claim is assigned one of three labels: entailed, contradicted, or baseless. Furthermore, RT4CHART maps claim-level decisions back to specific answer spans and retrieves explicit supporting or refuting evidence from the context, enabling fine-grained and interpretable auditing.
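The local-to-global verification loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three-way classifier here is a naive string-matching stand-in, whereas RT4CHART would use a learned verifier (e.g., an NLI-style model) at this step; the function and type names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Verdict:
    claim: str               # one atomic claim decomposed from the answer
    label: str               # "entailed" | "contradicted" | "baseless"
    evidence: Optional[str]  # context sentence supporting/refuting the claim

def classify_claim(claim: str, sentences: List[str]) -> Verdict:
    """Assign one of the three labels to a single atomic claim.

    A real verifier would score entailment/contradiction with a trained
    model; this toy version uses string matching purely to show the flow.
    """
    c = claim.lower()
    for sent in sentences:
        s = sent.lower()
        if c in s:  # context states the claim -> entailed
            return Verdict(claim, "entailed", sent)
        if "not" in s and c in s.replace("not ", ""):
            # context states the negation of the claim -> contradicted
            return Verdict(claim, "contradicted", sent)
    return Verdict(claim, "baseless", None)  # no evidence either way

def verify_answer(claims: List[str], context: str) -> List[Verdict]:
    """Verify each claim locally against the retrieved context."""
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    return [classify_claim(c, sentences) for c in claims]
```

An answer-level (global) decision could then aggregate these verdicts, e.g., flag the answer as hallucinated if any claim is contradicted or baseless, while the per-claim evidence fields provide the interpretable audit trail.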
We evaluate RT4CHART on RAGTruth++ (408 samples) and on RAGTruth-Enhance (2,675 samples), a newly re-annotated benchmark. RT4CHART outperforms all baselines in answer-level hallucination detection F1. On RAGTruth++, it reaches an F1 score of 0.776, exceeding the strongest baseline by 83%. On RAGTruth-Enhance, it achieves a span-level F1 of 47.5%.
Ablation studies show that the hierarchical verification design is the primary driver of the performance gains. Finally, our re-annotation reveals 1.68× as many hallucination cases as the original labels, suggesting that existing benchmarks substantially underestimate the prevalence of hallucinations.