🤖 AI Summary
Existing benchmarks for evaluating ophthalmology-specific large language models (LLMs) suffer from narrow coverage and an overreliance on accuracy metrics, and they lack systematic assessment of clinical reasoning. Method: We introduce BELO, a standardized, expert-validated benchmark for ophthalmology comprising 900 high-quality multiple-choice questions drawn from five authoritative data sources. Candidate questions are selected by keyword matching and a fine-tuned PubMedBERT classifier, then subjected to multiple rounds of rigorous review by 13 ophthalmology specialists to ensure domain fidelity and evaluation fairness. The hybrid evaluation protocol combines automatic metrics (accuracy, macro-F1, ROUGE-L, BERTScore, BARTScore, METEOR, and AlignScore) with expert adjudication. Contribution/Results: BELO is publicly released with a dynamic leaderboard. Evaluating six state-of-the-art LLMs reveals notable deficiencies in generating complete, clinically grounded explanations, demonstrating BELO's effectiveness and reliability for fine-grained performance differentiation.
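A minimal sketch of what that two-stage question filtering could look like is shown below: a cheap keyword pass followed by a binary relevance classifier built on a PubMedBERT checkpoint via Hugging Face `transformers`. The keyword list, checkpoint name, and label convention are illustrative assumptions rather than the authors' released code, and the classification head added here would need to be fine-tuned on labelled ophthalmology/non-ophthalmology questions before its scores are meaningful.

```python
# Hypothetical two-stage filter for ophthalmology-specific MCQs:
# (1) keyword match, (2) PubMedBERT-based relevance classifier.
# Keyword list, checkpoint name, and label mapping are assumptions for this sketch.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

OPHTHO_KEYWORDS = {"retina", "cornea", "glaucoma", "cataract", "macula", "uveitis", "intraocular"}

def keyword_match(question: str) -> bool:
    text = question.lower()
    return any(k in text for k in OPHTHO_KEYWORDS)

# A PubMedBERT base checkpoint; the 2-label head added here is randomly initialised
# and must be fine-tuned on in-domain labels before use.
CHECKPOINT = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)
model.eval()

def is_ophthalmology(question: str, threshold: float = 0.5) -> bool:
    inputs = tokenizer(question, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    return probs[0, 1].item() >= threshold  # label 1 = ophthalmology (assumed)

candidate_mcqs = [
    "Which drug class reduces aqueous humour production in open-angle glaucoma?",
    "What is the first-line treatment for community-acquired pneumonia?",
]
kept = [q for q in candidate_mcqs if keyword_match(q) and is_ophthalmology(q)]
print(kept)
```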
📝 Abstract
Current benchmarks for evaluating large language models (LLMs) in ophthalmology are limited in scope and disproportionately prioritize accuracy. We introduce BELO (BEnchmarking LLMs for Ophthalmology), a standardized and comprehensive evaluation benchmark developed through multiple rounds of expert checking by 13 ophthalmologists. BELO assesses ophthalmology-related clinical accuracy and reasoning quality. Using keyword matching and a fine-tuned PubMedBERT model, we curated ophthalmology-specific multiple-choice questions (MCQs) from diverse medical datasets (BCSC, MedMCQA, MedQA, BioASQ, and PubMedQA). The curated questions then underwent multiple rounds of expert checking: duplicate and substandard questions were systematically removed, ten ophthalmologists refined the explanation of each MCQ's correct answer, and three senior ophthalmologists adjudicated the refined explanations. To illustrate BELO's utility, we evaluated six LLMs (OpenAI o1, o3-mini, GPT-4o, DeepSeek-R1, Llama-3-8B, and Gemini 1.5 Pro) using accuracy, macro-F1, and five text-generation metrics (ROUGE-L, BERTScore, BARTScore, METEOR, and AlignScore). In a further evaluation involving human experts, two ophthalmologists qualitatively reviewed 50 randomly selected outputs for accuracy, comprehensiveness, and completeness. BELO consists of 900 high-quality, expert-reviewed questions aggregated from five sources: BCSC (260), BioASQ (10), MedMCQA (572), MedQA (40), and PubMedQA (18). A public leaderboard has been established to promote transparent evaluation and reporting. Importantly, the BELO dataset will remain a hold-out, evaluation-only benchmark to ensure fair and reproducible comparisons of future models.
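As a rough illustration of the automatic scoring step, the sketch below computes accuracy, macro-F1, ROUGE-L, BERTScore, and METEOR on toy predictions using scikit-learn and the Hugging Face `evaluate` library. The library choices, example data, and single-reference setup are assumptions for illustration, not the authors' released evaluation code; BARTScore and AlignScore are distributed as standalone research codebases and are omitted here.

```python
# Illustrative scoring of one model's outputs against expert-reviewed references.
# Library choices (scikit-learn, Hugging Face `evaluate`) and the toy data are
# assumptions for this sketch, not the BELO authors' pipeline.
import evaluate
from sklearn.metrics import accuracy_score, f1_score

# 1) MCQ answer letters: accuracy and macro-F1.
gold_letters = ["A", "B", "B", "D"]
pred_letters = ["A", "C", "B", "D"]
print("accuracy:", accuracy_score(gold_letters, pred_letters))
print("macro-F1:", f1_score(gold_letters, pred_letters, average="macro"))

# 2) Free-text explanations: overlap- and embedding-based metrics.
references = ["Topical beta-blockers lower intraocular pressure by reducing aqueous humour production."]
predictions = ["Beta-blockers reduce aqueous production and therefore lower intraocular pressure."]

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")
meteor = evaluate.load("meteor")

print("ROUGE-L:", rouge.compute(predictions=predictions, references=references)["rougeL"])
print("BERTScore F1:", bertscore.compute(predictions=predictions, references=references, lang="en")["f1"][0])
print("METEOR:", meteor.compute(predictions=predictions, references=references)["meteor"])
# BARTScore and AlignScore would be run separately on the same prediction/reference pairs.
```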