OphthBench: A Comprehensive Benchmark for Evaluating Large Language Models in Chinese Ophthalmology

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language models (LLMs) lack systematic, clinically grounded evaluation for Chinese ophthalmology applications. Method: We introduce OphthBench—the first comprehensive, real-world clinical workflow-oriented benchmark for Chinese ophthalmology—covering five core clinical stages (education, triage, diagnosis, treatment, and prognosis) and nine task types. It comprises 591 expert-annotated, high-quality Chinese questions, incorporating multiple question formats (multiple-choice, short-answer, and case-based analysis). We propose a clinical-stage–partitioned multi-phase evaluation framework and conduct the first specialized assessment of 39 mainstream LLMs. Results: Through standardized scoring and fine-grained error attribution, we identify pervasive deficiencies—including weak logical reasoning, medical terminology misuse, and insufficient evidence-based reasoning—highlighting critical usability bottlenecks. OphthBench establishes a reproducible, domain-specific evaluation paradigm and provides a foundational benchmark for targeted optimization of Chinese medical LLMs.

📝 Abstract
Large language models (LLMs) have shown significant promise across various medical applications, with ophthalmology being a notable area of focus. Many ophthalmic tasks have shown substantial improvement through the integration of LLMs. However, before these models can be widely adopted in clinical practice, evaluating their capabilities and identifying their limitations is crucial. To address this research gap and support the real-world application of LLMs, we introduce OphthBench, a specialized benchmark designed to assess LLM performance within the context of Chinese ophthalmic practices. This benchmark systematically divides a typical ophthalmic clinical workflow into five key scenarios: Education, Triage, Diagnosis, Treatment, and Prognosis. For each scenario, we developed multiple tasks featuring diverse question types, resulting in a comprehensive benchmark comprising 9 tasks and 591 questions. This framework allows for a thorough assessment of LLMs' capabilities and provides insights into their practical application in Chinese ophthalmology. Using this benchmark, we conducted extensive experiments and analyzed the results from 39 popular LLMs. Our evaluation highlights the current gap between LLM development and its practical utility in clinical settings, providing a clear direction for future advancements. By bridging this gap, we aim to unlock the potential of LLMs and advance their development in ophthalmology.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Ophthalmology
Medical Diagnosis Support
Innovation

Methods, ideas, or system contributions that make the work stand out.

OphthBench Test
Large Language Models
Ophthalmic Medicine Assessment
Chengfeng Zhou
Changsha Aier Eye Hospital, Changsha, China
Ji Wang
Changsha Aier Eye Hospital, Changsha, China
Juanjuan Qin
Changsha Aier Eye Hospital, Changsha, China
Yining Wang
Changsha Aier Eye Hospital, Changsha, China
Ling Sun
Changsha Aier Eye Hospital, Changsha, China
Weiwei Dai
Aier Eye Hospital Group