🤖 AI Summary
Current evaluations of hallucinations in large language models predominantly emphasize factual correctness while overlooking output consistency, leading to an underestimation of potential harms. This work proposes a "prompt multiplicity" framework that integrates consistency into hallucination assessment by quantifying output inconsistency across diverse prompts on benchmarks such as Med-HALT. The study reveals that mainstream models exhibit inconsistent outputs in over 50% of cases, and shows that prevailing hallucination detection and mitigation approaches fall short: detection techniques primarily reflect consistency rather than correctness, and mitigation techniques such as retrieval-augmented generation (RAG), while beneficial, can introduce new inconsistencies. This research establishes a novel paradigm and provides empirical evidence for more comprehensive hallucination evaluation and mitigation strategies.
📝 Abstract
Large language models (LLMs) are known to "hallucinate" by generating false or misleading outputs. Hallucinations pose various harms, from erosion of trust to widespread misinformation. Existing hallucination evaluation, however, focuses only on correctness and often overlooks consistency, which is necessary to distinguish and address these harms. To bridge this gap, we introduce prompt multiplicity, a framework for quantifying consistency in LLM evaluations. Our analysis reveals significant multiplicity (over 50% inconsistency in benchmarks like Med-HALT), suggesting that hallucination-related harms have been severely misunderstood. Furthermore, we study the role of consistency in hallucination detection and mitigation. We find that: (a) detection techniques detect consistency, not correctness, and (b) mitigation techniques like RAG, while beneficial, can introduce additional inconsistencies. By integrating prompt multiplicity into hallucination evaluation, we provide an improved framework of potential harms and uncover critical limitations in current detection and mitigation strategies.
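To make the core idea concrete, here is a minimal sketch of how inconsistency across paraphrased prompts might be quantified. The `multiplicity_rate` function and the toy answer data are illustrative assumptions, not the paper's actual implementation or benchmark results.

```python
def multiplicity_rate(answer_sets):
    """Fraction of questions whose paraphrased prompts yield more than
    one distinct answer from the model (i.e., inconsistent outputs).

    answer_sets: list of lists; each inner list holds the model's answers
    to the same underlying question, asked via different paraphrases.
    """
    inconsistent = sum(1 for answers in answer_sets if len(set(answers)) > 1)
    return inconsistent / len(answer_sets)

# Hypothetical answers for 3 questions, each asked via 4 paraphrases.
answer_sets = [
    ["A", "A", "A", "A"],   # consistent across paraphrases
    ["B", "C", "B", "B"],   # inconsistent
    ["D", "D", "E", "E"],   # inconsistent
]
print(multiplicity_rate(answer_sets))  # 2 of 3 questions are inconsistent
```

A correctness-only evaluation would score each answer against a gold label in isolation; a metric like this instead captures whether the model's behavior is stable under prompt rephrasing, which is the dimension the framework adds.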