Principled Detection of Hallucinations in Large Language Models via Multiple Testing

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Hallucination detection in large language models (LLMs) lacks theoretical foundations and robustness. Method: The paper formalizes hallucination identification as a statistical hypothesis testing problem, introducing multiple testing theory (specifically the Bonferroni-Holm correction) to this domain for the first time, and establishes its equivalence to out-of-distribution (OOD) detection. The proposed framework leverages confidence-based analysis to yield a theoretically sound, statistically rigorous hallucination detector. Contribution/Results: Extensive experiments demonstrate that the method consistently outperforms state-of-the-art approaches across diverse LLMs, tasks, and noisy settings, achieving higher detection accuracy and superior robustness. By grounding hallucination detection in classical statistical theory, this work establishes a principled paradigm for trustworthy LLM evaluation.
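The summary names the Bonferroni-Holm (Holm step-down) correction as the multiple-testing tool behind the detector. The paper's actual test statistics and confidence scores are not given here, so the following is only a generic sketch of the standard Holm procedure: given one p-value per hypothesis (e.g., per generated claim), it controls the family-wise error rate at level alpha.

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down correction.

    Returns a list of booleans, one per input p-value: True where the
    corresponding null hypothesis is rejected while controlling the
    family-wise error rate at `alpha`.
    """
    m = len(p_values)
    # Sort indices by ascending p-value, keeping original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        # Step-down threshold for the (rank+1)-th smallest p-value:
        # alpha / (m - rank), i.e. alpha/m, alpha/(m-1), ..., alpha/1.
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # once one test fails, all larger p-values also fail
    return reject
```

For example, `holm_bonferroni([0.01, 0.04, 0.03, 0.005])` rejects the first and last hypotheses: the two smallest p-values clear their step-down thresholds (0.0125 and 0.0167), while 0.03 exceeds 0.025, stopping the procedure. How per-claim p-values are obtained from model confidence is the paper's contribution and is not reproduced here.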

📝 Abstract
While Large Language Models (LLMs) have emerged as powerful foundational models to solve a variety of tasks, they have also been shown to be prone to hallucinations, i.e., generating responses that sound confident but are actually incorrect or even nonsensical. In this work, we formulate the problem of detecting hallucinations as a hypothesis testing problem and draw parallels to the problem of out-of-distribution detection in machine learning models. We propose a multiple-testing-inspired method to solve the hallucination detection problem, and provide extensive experimental results to validate the robustness of our approach against state-of-the-art methods.
Problem

Research questions and friction points this paper is trying to address.

Detecting hallucinations in large language models
Formulating hallucination detection as a hypothesis testing problem
Providing a detection method that is more robust than state-of-the-art approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

A multiple-testing-based method for hallucination detection
Formulates detection as a statistical hypothesis testing problem, drawing parallels to OOD detection
Validates robustness through extensive comparison against state-of-the-art methods