AI Summary
In RAG systems, hallucination detection often conflates *factual correctness* with *faithfulness to retrieved context*, erroneously flagging factually correct outputs as hallucinations when they lack direct retrieval support. To address this, we propose FRANQ, a novel framework that decouples these two dimensions for the first time. FRANQ introduces a faithfulness-aware dual-path uncertainty quantification mechanism that integrates confidence scores, token-level entropy, self-verification, and retrieval alignment for multi-granular estimation. We also construct the first long-form QA benchmark with synchronized human annotations for both *factuality* and *faithfulness*, produced via an automated annotation pipeline with expert verification. Extensive experiments across multiple LLMs and datasets show that FRANQ improves F1 for factual error detection by 12.6% on average, significantly enhancing both hallucination detection accuracy and interpretability.
Abstract
Large Language Models (LLMs) enhanced with external knowledge retrieval, an approach known as Retrieval-Augmented Generation (RAG), have shown strong performance in open-domain question answering. However, RAG systems remain susceptible to hallucinations: factually incorrect outputs that may arise either from inconsistencies in the model's internal knowledge or incorrect use of the retrieved context. Existing approaches often conflate factuality with faithfulness to the retrieved context, misclassifying factually correct statements as hallucinations if they are not directly supported by the retrieval. In this paper, we introduce FRANQ (Faithfulness-based Retrieval Augmented UNcertainty Quantification), a novel method for hallucination detection in RAG outputs. FRANQ applies different Uncertainty Quantification (UQ) techniques to estimate factuality based on whether a statement is faithful to the retrieved context or not. To evaluate FRANQ and other UQ techniques for RAG, we present a new long-form Question Answering (QA) dataset annotated for both factuality and faithfulness, combining automated labeling with manual validation of challenging examples. Extensive experiments on long- and short-form QA across multiple datasets and LLMs show that FRANQ achieves more accurate detection of factual errors in RAG-generated responses compared to existing methods.
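The abstract's core idea, applying a different uncertainty estimate depending on whether a statement is faithful to the retrieved context, can be sketched as a law-of-total-probability combination. This is a minimal illustrative sketch, not the paper's actual implementation: the function name and the three probability inputs are assumptions, standing in for whatever UQ techniques FRANQ uses on each branch.

```python
# Hypothetical sketch of FRANQ's faithfulness-conditioned factuality estimate.
# All names and inputs are illustrative; the paper's real estimators for
# each branch (faithful vs. unfaithful) are not reproduced here.

def franq_factuality(p_faithful: float,
                     p_factual_if_faithful: float,
                     p_factual_if_unfaithful: float) -> float:
    """Combine two branch-specific UQ estimates via total probability:
    P(factual) = P(faithful) * P(factual | faithful)
               + P(unfaithful) * P(factual | unfaithful)."""
    return (p_faithful * p_factual_if_faithful
            + (1.0 - p_faithful) * p_factual_if_unfaithful)

# Example: a statement judged likely faithful to a reliable retrieval gets a
# high factuality score; an unfaithful one falls back on a weaker estimate.
score = franq_factuality(p_faithful=0.9,
                         p_factual_if_faithful=0.95,
                         p_factual_if_unfaithful=0.4)
print(round(score, 3))  # 0.895
```

The design point this illustrates: a statement unsupported by retrieval is not automatically scored as a hallucination; it is instead routed to a different factuality estimator, which is how the method avoids conflating faithfulness with factuality.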