🤖 AI Summary
Medical large language models (MLLMs) suffer from pervasive hallucinations in ophthalmic diagnosis, stemming from knowledge gaps, visual localization inaccuracies, and insufficient multi-step reasoning, while existing benchmarks lack fine-grained hallucination evaluation and mitigation mechanisms. To address this, the authors propose the first systematic "task–error" dual-dimensional taxonomy of ophthalmic MLLM hallucinations and introduce EH-Benchmark, a dedicated evaluation suite covering two representative hallucination classes: visual understanding failures and logical composition errors. They further design a three-stage, traceable, multi-agent reasoning framework comprising knowledge retrieval, case-driven reasoning, and result verification, enabling staged error correction and semantic–visual alignment. Experiments demonstrate significant reductions in hallucination rates alongside improved diagnostic accuracy, interpretability, and clinical reliability, establishing the first evaluable and traceable hallucination-governance framework for ophthalmic AI.
📝 Abstract
Medical Large Language Models (MLLMs) play a crucial role in ophthalmic diagnosis, holding significant potential to address vision-threatening diseases. However, their accuracy is constrained by hallucinations stemming from limited ophthalmic knowledge, insufficient visual localization and reasoning capabilities, and a scarcity of multimodal ophthalmic data, which collectively impede precise lesion detection and disease diagnosis. Furthermore, existing medical benchmarks fail to effectively evaluate the various types of hallucinations or provide actionable solutions to mitigate them. To address these challenges, we introduce EH-Benchmark, a novel ophthalmology benchmark designed to evaluate hallucinations in MLLMs. Based on task and error type, we categorize MLLMs' hallucinations into two primary classes, Visual Understanding and Logical Composition, each comprising multiple subclasses. Given that MLLMs predominantly rely on language-based reasoning rather than visual processing, we propose an agent-centric, three-phase framework comprising a Knowledge-Level Retrieval stage, a Task-Level Case Studies stage, and a Result-Level Validation stage. Experimental results show that our multi-agent framework significantly mitigates both classes of hallucinations, enhancing accuracy, interpretability, and reliability. Our project is available at https://github.com/ppxy1/EH-Benchmark.