Benchmarking Next-Generation Reasoning-Focused Large Language Models in Ophthalmology: A Head-to-Head Evaluation on 5,888 Items

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Evaluating the clinical diagnostic reasoning capabilities of large language models (LLMs) in specialized medical domains remains challenging, particularly in ophthalmology. Method: This study conducts the first systematic assessment of four reasoning-oriented LLMs (DeepSeek-R1, OpenAI o1, o3-mini, and Gemini 2.0 Flash-Thinking) on 5,888 ophthalmology multiple-choice questions under a zero-shot evaluation paradigm. A dual-track framework integrates multidimensional automated metrics (Macro-F1, ROUGE-L, METEOR, BERTScore, BARTScore, AlignScore) with blinded expert review to quantify accuracy, generative quality, and completeness of clinical reasoning structure. Contribution/Results: o1 achieves the highest accuracy (0.902); o3-mini and o1 lead in reasoning alignment (AlignScore); DeepSeek-R1 and Gemini 2.0 Flash-Thinking excel in expert-rated reasoning detail; Gemini 2.0 Flash-Thinking attains the fastest average inference time (6.7 seconds per question). The work establishes a reproducible, dual-track evaluation methodology for reasoning-capable LLMs in healthcare verticals.
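The automated track described above maps onto standard Python tooling. Below is a minimal sketch, assuming the rouge-score, nltk, bert-score, and scikit-learn packages and a simple per-question record format; the paper does not name its implementations, and BARTScore and AlignScore (which ship as standalone research repositories) are omitted here.

```python
# Hedged sketch of the automated evaluation track: accuracy and Macro-F1 on the
# chosen option letters, plus text-generation metrics on the rationales.
# The record schema ('pred_letter', 'gold_letter', 'pred_rationale',
# 'gold_rationale') is an assumption for illustration.
from rouge_score import rouge_scorer                   # pip install rouge-score
from nltk.translate.meteor_score import meteor_score  # needs nltk.download("wordnet")
from bert_score import score as bertscore             # pip install bert-score
from sklearn.metrics import accuracy_score, f1_score

def evaluate(records: list[dict]) -> dict[str, float]:
    preds = [r["pred_letter"] for r in records]
    golds = [r["gold_letter"] for r in records]
    rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

    # Per-question ROUGE-L F-measure against the ground-truth rationale.
    rouge_l = [
        rouge.score(r["gold_rationale"], r["pred_rationale"])["rougeL"].fmeasure
        for r in records
    ]
    # METEOR expects pre-tokenized reference(s) and hypothesis.
    meteor = [
        meteor_score([r["gold_rationale"].split()], r["pred_rationale"].split())
        for r in records
    ]
    # BERTScore is computed in one batch; per-pair F1 comes back as a tensor.
    _, _, bert_f1 = bertscore(
        [r["pred_rationale"] for r in records],
        [r["gold_rationale"] for r in records],
        lang="en",
    )
    return {
        "accuracy": accuracy_score(golds, preds),
        "macro_f1": f1_score(golds, preds, average="macro"),
        "rouge_l": sum(rouge_l) / len(rouge_l),
        "meteor": sum(meteor) / len(meteor),
        "bertscore_f1": bert_f1.mean().item(),
    }
```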

📝 Abstract
Recent advances in reasoning-focused large language models (LLMs) mark a shift from general LLMs toward models designed for complex decision-making, a crucial aspect of medicine. However, their performance in specialized domains like ophthalmology remains underexplored. This study comprehensively evaluated and compared the accuracy and reasoning capabilities of four newly developed reasoning-focused LLMs, namely DeepSeek-R1, OpenAI o1, o3-mini, and Gemini 2.0 Flash-Thinking. Each model was assessed on 5,888 multiple-choice ophthalmology exam questions from the MedMCQA dataset in a zero-shot setting. Quantitative evaluation included accuracy, Macro-F1, and five text-generation metrics (ROUGE-L, METEOR, BERTScore, BARTScore, and AlignScore), computed against ground-truth reasoning. Average inference time was recorded for a subset of 100 randomly selected questions. Additionally, two board-certified ophthalmologists qualitatively assessed the clarity, completeness, and reasoning structure of responses to differential diagnosis questions. o1 (0.902) and DeepSeek-R1 (0.888) achieved the highest accuracy, with o1 also leading in Macro-F1 (0.900). Performance across the text-generation metrics varied: o3-mini excelled in ROUGE-L (0.151), o1 in METEOR (0.232), DeepSeek-R1 and o3-mini tied for BERTScore (0.673), DeepSeek-R1 (-4.105) and Gemini 2.0 Flash-Thinking (-4.127) performed best in BARTScore, while o3-mini (0.181) and o1 (0.176) led in AlignScore. Inference time varied widely across the models, with DeepSeek-R1 the slowest (40.4 seconds per question) and Gemini 2.0 Flash-Thinking the fastest (6.7 seconds). Qualitative evaluation revealed that DeepSeek-R1 and Gemini 2.0 Flash-Thinking tended to provide detailed and comprehensive intermediate reasoning, whereas o1 and o3-mini gave concise, summarized justifications.
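As a concrete illustration of the protocol, the sketch below sends one question at a time with no exemplars and records wall-clock latency, averaging over a random 100-question subset as in the timing experiment. The prompt template, SDK, and model identifier ("o1" via the OpenAI client) are assumptions; the abstract does not specify them.

```python
# Hedged sketch: zero-shot MCQ querying plus inference-time measurement.
import random
import time

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_zero_shot(question: str, options: dict[str, str]) -> tuple[str, float]:
    """Send one MCQ with no exemplars; return (response_text, latency_seconds)."""
    prompt = (
        question
        + "\n"
        + "\n".join(f"{k}. {v}" for k, v in options.items())
        + "\nChoose one option and explain your reasoning."
    )
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model="o1",  # swap in the model under test
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content, time.perf_counter() - start

def mean_latency(questions: list[dict], n: int = 100) -> float:
    """Average latency over a random n-question subset (100 in the paper)."""
    subset = random.sample(questions, n)
    return sum(ask_zero_shot(q["question"], q["options"])[1] for q in subset) / n
```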
Problem

Research questions and friction points this paper is trying to address.

Evaluating reasoning-focused LLMs in ophthalmology exams
Comparing accuracy and reasoning of four new LLMs
Assessing model performance using quantitative and qualitative metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated reasoning-focused LLMs in ophthalmology
Used 5,888 exam questions for zero-shot testing (answer-letter extraction sketched after this list)
Combined quantitative and qualitative expert assessments
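Scoring accuracy over 5,888 free-text responses presupposes a way to map each response to an option letter. The paper does not describe its parser, so the heuristic below is purely illustrative: an explicit-answer regex with a fallback to the first standalone option letter.

```python
# Hedged sketch of answer-letter extraction from a free-text model response.
import re

def extract_choice(response: str, letters: str = "ABCD") -> str | None:
    """Return the option letter the response commits to, or None if unparseable."""
    # Prefer explicit phrasing such as "Answer: C" or "the answer is (C)".
    m = re.search(
        rf"answer(?:\s+is)?\s*[:\-]?\s*\(?([{letters}])\)?\b",
        response,
        flags=re.IGNORECASE,
    )
    if m:
        return m.group(1).upper()
    # Fall back to the first standalone option letter, e.g. "C." or "(C)".
    m = re.search(rf"\b([{letters}])[.)]", response)
    return m.group(1).upper() if m else None
```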
Authors

Minjie Zou
Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
Sahana Srinivasan
Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
Thaddaeus Wai Soon Lo
Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
Ke Zou
Apple, Inc
Power electronics · Switched-capacitor converters · Power semiconductor devices
Gabriel Dawei Yang
Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
Xuguang Ai
Biomedical Informatics & Data Science, Yale University
AI in Healthcare · Data Science · NLP · Biomedical Informatics
Hyunjae Kim
Yale University
Natural Language Processing · Biomedical Informatics · Healthcare
Maxwell Singer
Department of Ophthalmology and Visual Science, Yale School of Medicine, Yale University, New Haven, USA
Fares Antaki
Cleveland Clinic Cole Eye Institute
Ophthalmology · Retina · Vitreoretinal surgery · Artificial intelligence · Large language models
Kelvin Li
Department of Ophthalmology, Tan Tock Seng Hospital, National Healthcare Group, Singapore
Robert Chang
Department of Ophthalmology, Byers Eye Institute, Stanford University, Stanford, California, USA
Marcus Tan
Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
David Ziyou Chen
Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
Dianbo Liu
Assistant Professor, National University of Singapore
Push the limits of human · Machine learning · Biomedical sciences
Qingyu Chen
Biomedical Informatics & Data Science, Yale University; NCBI-NLM, National Institutes of Health
Text mining · Machine learning · Data curation · BioNLP · Medical Imaging Analysis
Yih Chung Tham
Yong Loo Lin School of Medicine, National University of Singapore; Singapore Eye Research Institute
Ophthalmology · Epidemiology · Visual Impairment · Deep Learning