Why Language Models Hallucinate

📅 2025-09-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate plausible but factually incorrect "hallucinations" when uncertain, undermining trust in the systems built on them. This stems from a bias induced jointly by pretraining objectives and evaluation practice: models are implicitly incentivized to guess rather than abstain honestly. Statistically, hallucinations originate as errors in binary classification (distinguishing valid from invalid statements); socio-technically, they persist because mainstream benchmarks (e.g., MMLU, TruthfulQA) reward confident guessing over truthful refusal. We analyze both perspectives across the modern training pipeline. Building on this analysis, we propose reforming evaluation itself: modifying the scoring of existing, leaderboard-dominating benchmarks so that appropriate abstention is credited rather than penalized. We argue that revising scoring rules alone, without changing model architecture or training, removes the incentive to guess and offers a practical, system-level path toward more reliable and trustworthy AI.

📝 Abstract
Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "hallucinations" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious -- they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded -- language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.
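
The claim that hallucinations "originate simply as errors in binary classification" can be made concrete with a toy example: for an arbitrary fact that appears at most once in training (say, an obscure birthday), the model has no signal for deciding which of K plausible completions is valid, so forcing it to answer yields errors at roughly 1 − 1/K, while abstaining would avoid them. The sketch below is a hedged illustration of that statistical pressure, not the paper's formal reduction or bound; the uniform-guess setup and K = 365 are assumptions made here for the example.

```python
import random

# Toy illustration (assumption: uniform guessing over K plausible answers).
# For facts seen at most once in training, a model has no statistical signal
# to separate the valid completion from invalid ones, so forcing an answer
# produces errors at roughly 1 - 1/K, while abstention would avoid them.

def simulate(num_questions: int = 10_000, k: int = 365, seed: int = 0) -> float:
    """Return the wrong-answer rate when the model must guess on every question."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(num_questions):
        truth = rng.randrange(k)   # the actual fact, e.g. a birthday
        guess = rng.randrange(k)   # a guess with no discriminative signal
        wrong += (guess != truth)
    return wrong / num_questions

if __name__ == "__main__":
    rate = simulate()
    print(f"forced-guess error rate ~ {rate:.3f} (expected ~ {1 - 1/365:.3f})")
```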
Problem

Research questions and friction points this paper is trying to address.

Language models produce plausible but incorrect statements when uncertain
Training procedures reward guessing over acknowledging uncertainty
Evaluation metrics penalize uncertainty and encourage hallucination
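
A quick expected-value check makes the last point concrete: under the binary 0/1 grading that dominates leaderboards, a guess made with any confidence p > 0 strictly beats abstaining, so a model optimized to be a good test-taker never says "I don't know". The sketch below is an illustrative calculation written for this summary, not code or results from the paper.

```python
# Illustrative sketch (not from the paper): expected benchmark score of
# guessing vs. abstaining under binary 0/1 grading. With no penalty for a
# wrong answer, guessing weakly dominates abstention at every confidence level.

def expected_score_binary(confidence: float, answers: bool) -> float:
    """Expected score when a correct answer earns 1 and anything else earns 0."""
    return confidence if answers else 0.0

for p in (0.1, 0.3, 0.5, 0.9):
    guess = expected_score_binary(p, answers=True)
    abstain = expected_score_binary(p, answers=False)
    print(f"confidence={p:.1f}  guess={guess:.2f}  abstain={abstain:.2f}")
# Guessing scores higher whenever confidence > 0, so uncertainty is never voiced.
```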
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hallucinations originate as errors in binary classification (valid vs. invalid statements)
Modify benchmark scoring so guessing is no longer rewarded (see the threshold-scoring sketch below)
Address training-evaluation misalignment via socio-technical mitigation
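
One concrete way to realize the proposed scoring change is a confidence-target rule: a correct answer earns 1 point, "I don't know" earns 0, and a wrong answer costs t/(1 − t) points for an announced target t, so answering pays off only when confidence exceeds t. The sketch below is a hedged instantiation of that idea; the threshold value and function names are assumptions made for illustration, not the paper's exact specification.

```python
# Sketch of a confidence-target grading rule (illustrative assumptions, not the
# paper's exact specification): correct = +1, abstain = 0, wrong = -t / (1 - t).
# Under this rule, answering beats abstaining only when confidence exceeds t.

def grade(is_correct: bool | None, t: float) -> float:
    """Score one response; is_correct=None means the model abstained."""
    if is_correct is None:
        return 0.0
    return 1.0 if is_correct else -t / (1.0 - t)

def expected_score(confidence: float, t: float) -> float:
    """Expected score of answering when the model is `confidence` sure it is right."""
    return confidence * grade(True, t) + (1.0 - confidence) * grade(False, t)

t = 0.75  # assumed confidence target announced in the benchmark instructions
for p in (0.50, 0.75, 0.90):
    print(f"confidence={p:.2f}  answer={expected_score(p, t):+.2f}  abstain={grade(None, t):+.2f}")
# The break-even point is exactly at confidence = t; below it, abstaining is optimal.
```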