🤖 AI Summary
Medical large language models (LLMs) frequently generate factual hallucinations, posing serious risks to patient safety and creating the need for a dedicated evaluation benchmark. To address this, we introduce MedHallu, the first benchmark specifically designed for medical hallucination detection, comprising 10,000 high-quality question-answer pairs derived from PubMedQA, with hallucinated answers generated through a controlled pipeline. We formally define the medical hallucination detection task; use bidirectional entailment clustering to show that harder-to-detect hallucinations are semantically closer to the ground truth; and augment the detection setup with domain-specific knowledge and a "not sure" answer category. Experiments show that state-of-the-art models achieve an F1 score as low as 0.625 on the "hard" subset, while incorporating domain knowledge and the "not sure" option improves precision and F1 by up to 38% relative to baselines, advancing the identification of clinical factual errors.
📝 Abstract
Advancements in Large Language Models (LLMs) and their increasing use in medical question-answering necessitate rigorous evaluation of their reliability. A critical challenge lies in hallucination, where models generate plausible yet factually incorrect outputs. In the medical domain, this poses serious risks to patient safety and clinical decision-making. To address this, we introduce MedHallu, the first benchmark specifically designed for medical hallucination detection. MedHallu comprises 10,000 high-quality question-answer pairs derived from PubMedQA, with hallucinated answers systematically generated through a controlled pipeline. Our experiments show that state-of-the-art LLMs, including GPT-4o, Llama-3.1, and the medically fine-tuned UltraMedical, struggle with this binary hallucination detection task, with the best model achieving an F1 score as low as 0.625 for detecting "hard" category hallucinations. Using bidirectional entailment clustering, we show that harder-to-detect hallucinations are semantically closer to the ground truth. Through experiments, we also show that incorporating domain-specific knowledge and introducing a "not sure" category as one of the answer categories improves the precision and F1 scores by up to 38% relative to baselines.
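The bidirectional entailment clustering mentioned above groups two answers together only when each entails the other. A minimal sketch of that clustering loop is below; note that the paper presumably judges entailment with an NLI model, whereas the `entails` function here is a hypothetical stand-in (token-set containment) used purely to make the example self-contained and runnable.

```python
def cluster_by_bidirectional_entailment(answers, entails):
    """Greedy clustering: an answer joins a cluster only if it and the
    cluster's first member (its representative) entail each other."""
    clusters = []
    for ans in answers:
        for cluster in clusters:
            rep = cluster[0]
            if entails(ans, rep) and entails(rep, ans):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])  # no mutual entailment found: new cluster
    return clusters


# Toy entailment stand-in (NOT the paper's method): a entails b if
# a's tokens cover b's tokens, so mutual entailment = same token set.
def toy_entails(a, b):
    return set(a.lower().split()) >= set(b.lower().split())


answers = [
    "the drug lowers blood pressure",
    "blood pressure the drug lowers",   # same tokens -> same cluster
    "the drug raises blood pressure",   # different claim -> new cluster
]
clusters = cluster_by_bidirectional_entailment(answers, toy_entails)
print(len(clusters))      # 2 clusters
print(len(clusters[0]))   # first cluster holds the two paraphrases
```

Under this scheme, a hallucinated answer that lands in (or near) the ground-truth answer's cluster is semantically close to it, which is the property the paper correlates with detection difficulty.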