🤖 AI Summary
Large language models (LLMs) frequently generate hallucinated answers in open-domain question answering. Existing detection methods rely primarily on internal generation signals (e.g., entropy or self-consistency) and overlook a complementary diagnostic: the pretraining data's lexical coverage of questions and answers.
Method: This paper introduces "training-data vocabulary coverage" as a new dimension for hallucination detection. The authors construct a suffix array over the RedPajama corpus to enable efficient n-gram retrieval and exposure quantification at the trillion-token scale, then integrate term-frequency features with generative log-probabilities for multi-source signal modeling.
Contribution/Results: The paper provides the first systematic validation of vocabulary coverage's efficacy for hallucination detection. Although the feature has limited discriminative power on its own, it yields modest but consistent gains when combined with generation log-probabilities, particularly under high model uncertainty. All code and tools are publicly released.
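The core retrieval primitive described above can be illustrated with a toy suffix array: sort all suffixes of a token sequence, then count an n-gram's occurrences via binary search over the sorted suffixes. This is a minimal sketch, not the paper's trillion-token implementation; the function names (`build_suffix_array`, `count_ngram`) are our own.

```python
from bisect import bisect_left, bisect_right

def build_suffix_array(tokens):
    """Return start indices of all suffixes of `tokens`, sorted
    lexicographically (O(N^2 log N) toy construction; real systems
    use linear-time algorithms over compressed token streams)."""
    return sorted(range(len(tokens)), key=lambda i: tokens[i:])

def count_ngram(tokens, sa, ngram):
    """Count occurrences of `ngram` in `tokens` via the suffix array:
    all matching suffixes form a contiguous run in sorted order."""
    n = len(ngram)
    # compare only the first n tokens of each suffix
    keys = [tuple(tokens[i:i + n]) for i in sa]
    lo = bisect_left(keys, tuple(ngram))
    hi = bisect_right(keys, tuple(ngram))
    return hi - lo

corpus = "the cat sat on the mat the cat ran".split()
sa = build_suffix_array(corpus)
print(count_ngram(corpus, sa, ["the", "cat"]))  # -> 2
```

At trillion-token scale the suffix array is built once offline over the serialized corpus, and each query reduces to two binary searches, which is what makes exposure quantification tractable for every prompt and generation.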
📝 Abstract
Hallucination in large language models (LLMs) is a fundamental challenge, particularly in open-domain question answering. Prior work detects hallucination with model-internal signals such as token-level entropy or generation consistency, while the connection between pretraining data exposure and hallucination remains underexplored. Existing studies show that LLMs underperform on long-tail knowledge, i.e., answer accuracy drops for ground-truth entities that are rare in the pretraining data; however, whether data coverage itself can serve as a detection signal has been overlooked. We pose a complementary question: does lexical training-data coverage of the question and/or generated answer provide additional signal for hallucination detection? To investigate this, we construct scalable suffix arrays over RedPajama's 1.3-trillion-token pretraining corpus to retrieve $n$-gram statistics for both prompts and model generations, and evaluate their effectiveness for hallucination detection across three QA benchmarks. We find that while occurrence-based features are weak predictors on their own, they yield modest gains when combined with log-probabilities, particularly on datasets with higher intrinsic model uncertainty. These findings suggest that lexical coverage features provide a complementary signal for hallucination detection. All code and suffix-array infrastructure are provided at https://github.com/WWWonderer/ostd.
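The combination of signals described in the abstract can be sketched as a simple linear fusion: an occurrence-based coverage feature and the generation's mean log-probability are mapped through a sigmoid to a hallucination score. The weights below are illustrative placeholders, not values learned in the paper, and `hallucination_score` is our own name.

```python
import math

def hallucination_score(min_ngram_count, mean_logprob,
                        w_cov=-0.3, w_lp=-1.0, bias=0.0):
    """Higher score = more likely hallucinated.
    min_ngram_count: corpus count of the answer's rarest n-gram.
    mean_logprob: average token log-probability of the generation.
    Weights are illustrative; in practice they would be fit (e.g. by
    logistic regression) on labeled hallucination data."""
    # log(1 + count) tames the heavy-tailed n-gram frequency distribution
    cov = math.log1p(min_ngram_count)
    z = w_cov * cov + w_lp * mean_logprob + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability-like score

# A rarely-covered, low-confidence answer scores higher than a
# well-covered, confident one:
print(hallucination_score(0, -2.5) > hallucination_score(1_000_000, -0.1))
```

The point of the fusion is exactly the abstract's finding: coverage alone is a weak predictor, but as one term in a joint score alongside log-probabilities it shifts decisions in the uncertain region where log-probabilities by themselves are least informative.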