Measuring the Impact of Lexical Training Data Coverage on Hallucination Detection in Large Language Models

📅 2025-11-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Problem: Large language models (LLMs) frequently generate hallucinated answers in open-domain question answering, and existing detection methods rely primarily on internal generation signals (e.g., entropy or self-consistency), overlooking the pretraining data's lexical coverage of questions and answers as a complementary diagnostic signal.
Method: This paper introduces "training-data vocabulary coverage" as a novel dimension for hallucination detection. We construct a suffix array over the RedPajama corpus to enable efficient n-gram retrieval and exposure quantification at the trillion-token scale, then integrate term-frequency features with generative log-probabilities for multi-source signal modeling.
Contribution/Results: We provide the first systematic validation of vocabulary coverage's efficacy for hallucination detection. Although this feature has limited standalone discriminative power, it yields modest but consistent gains when combined with log-probabilities, particularly under high model uncertainty. All code and tools are publicly released.

📝 Abstract
Hallucination in large language models (LLMs) is a fundamental challenge, particularly in open-domain question answering. Prior work attempts to detect hallucination with model-internal signals such as token-level entropy or generation consistency, while the connection between pretraining-data exposure and hallucination remains underexplored. Existing studies show that LLMs underperform on long-tail knowledge, i.e., answer accuracy drops when the ground-truth entities are rare in the pretraining data. However, whether data coverage itself can serve as a detection signal has been overlooked. We pose a complementary question: does lexical training-data coverage of the question and/or generated answer provide additional signal for hallucination detection? To investigate this, we construct scalable suffix arrays over RedPajama's 1.3-trillion-token pretraining corpus to retrieve $n$-gram statistics for both prompts and model generations. We evaluate their effectiveness for hallucination detection across three QA benchmarks. Our observations show that while occurrence-based features are weak predictors on their own, they yield modest gains when combined with log-probabilities, particularly on datasets with higher intrinsic model uncertainty. These findings suggest that lexical coverage features provide a complementary signal for hallucination detection. All code and suffix-array infrastructure are provided at https://github.com/WWWonderer/ostd.
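As a concrete illustration of how the two signal families might be fused, one can feed a generation's mean log-probability and its corpus exposure count into a simple logistic model. This is a minimal sketch: the weights below are made-up demonstration values, not fitted parameters from the paper.

```python
import math

def hallucination_score(mean_logprob: float, ngram_exposure: int,
                        w_lp: float = -1.5, w_cov: float = -0.4,
                        bias: float = 0.0) -> float:
    """Fuse model confidence and training-data coverage into one score.

    mean_logprob:   average token log-probability of the generation (<= 0)
    ngram_exposure: raw occurrence count of the answer's n-grams in the corpus
    Returns a probability in (0, 1); higher means more likely hallucinated.
    """
    cov = math.log1p(ngram_exposure)          # log-scale the heavy-tailed count
    z = w_lp * mean_logprob + w_cov * cov + bias
    return 1.0 / (1.0 + math.exp(-z))         # logistic link

# A rare, low-confidence answer should score higher than a common,
# high-confidence one:
print(hallucination_score(-3.0, 2))      # rare entity, uncertain model
print(hallucination_score(-0.2, 5000))   # well-covered entity, confident model
```

In practice the weights would be learned from labeled QA data (e.g., by logistic regression), which is where the coverage feature's complementary value to log-probabilities shows up.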
Problem

Research questions and friction points this paper is trying to address.

Investigating lexical training-data coverage for hallucination detection
Exploring n-gram statistics from pretraining corpus as detection signals
Evaluating data coverage features combined with model probabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using lexical training-data coverage for hallucination detection
Constructing suffix arrays over pretraining corpus
Combining occurrence-based features with log-probabilities
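The occurrence-based features named above can be summarized per generation along these lines. The feature set is an illustrative assumption (the paper does not list its exact features here), and the `Counter` stands in for suffix-array lookups:

```python
from collections import Counter

def ngram_coverage_features(text: str, corpus_counts: Counter,
                            n: int = 2) -> dict:
    """Summarize how well a generation's n-grams are covered by the
    pretraining corpus. `corpus_counts` maps n-gram -> occurrence count."""
    tokens = text.lower().split()
    ngrams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = [corpus_counts[g] for g in ngrams] or [0]
    return {
        "min_count": min(counts),                                # rarest n-gram
        "mean_count": sum(counts) / len(counts),                 # average exposure
        "zero_frac": sum(c == 0 for c in counts) / len(counts),  # unseen fraction
    }

# Toy lookup table standing in for trillion-token suffix-array queries:
corpus_counts = Counter({"the cat": 2, "cat sat": 1})
print(ngram_coverage_features("The cat sat", corpus_counts))
```

Features like `min_count` and `zero_frac` capture the long-tail intuition from the abstract: an answer containing n-grams the model never saw in pretraining is a candidate hallucination.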