🤖 AI Summary
Existing large language models (LLMs) lack systematic, clinically grounded evaluation for Chinese ophthalmology applications.
Method: We introduce OphthBench, the first comprehensive benchmark oriented to real-world clinical workflows in Chinese ophthalmology. It covers five core clinical stages (education, triage, diagnosis, treatment, and prognosis) and nine task types, comprising 591 expert-annotated, high-quality Chinese questions in multiple formats (multiple-choice, short-answer, and case-based analysis). We propose a multi-phase evaluation framework partitioned by clinical stage and conduct the first specialized assessment of 39 mainstream LLMs.
Results: Through standardized scoring and fine-grained error attribution, we identify pervasive deficiencies, including weak logical reasoning, misuse of medical terminology, and insufficient evidence-based reasoning, which highlight critical usability bottlenecks. OphthBench establishes a reproducible, domain-specific evaluation paradigm and provides a foundational benchmark for targeted optimization of Chinese medical LLMs.
📝 Abstract
Large language models (LLMs) have shown significant promise across various medical applications, with ophthalmology being a notable area of focus. Many ophthalmic tasks have improved substantially through the integration of LLMs. However, before these models can be widely adopted in clinical practice, it is crucial to evaluate their capabilities and identify their limitations. To address this research gap and support the real-world application of LLMs, we introduce OphthBench, a specialized benchmark designed to assess LLM performance within the context of Chinese ophthalmic practice. This benchmark systematically divides a typical ophthalmic clinical workflow into five key scenarios: Education, Triage, Diagnosis, Treatment, and Prognosis. For each scenario, we developed multiple tasks featuring diverse question types, resulting in a benchmark comprising 9 tasks and 591 questions. This framework allows for a thorough assessment of LLMs' capabilities and provides insights into their practical application in Chinese ophthalmology. Using this benchmark, we conducted extensive experiments and analyzed the results of 39 popular LLMs. Our evaluation highlights the current gap between LLM development and practical clinical utility, providing a clear direction for future advancements. By bridging this gap, we aim to unlock the potential of LLMs and advance their development in ophthalmology.