🤖 AI Summary
This study introduces the first interpretable multimodal large language model (MLLM) for glaucoma screening, designed to jointly perform optical coherence tomography (OCT) optic nerve head circular scan image quality assessment and structured clinical report generation. Methodologically, we fine-tune Llama 3.2 Vision-Instruct in an end-to-end manner using paired OCT images and synthetically generated structured reports—including glaucoma diagnosis and quantitative retinal nerve fiber layer (RNFL) thinning analysis across seven sectors—and evaluate performance using accuracy, F1-score, BLEU, ROUGE, and BERTScore. Our key contribution is the first application of MLLMs to jointly model OCT quality triage and anatomically partitioned quantitative reporting, enabling simultaneous disease classification and sector-level localization. Experiments demonstrate strong performance: image quality classification accuracy of 0.90 (specificity = 0.98), glaucoma detection accuracy of 0.86 (sensitivity = 0.91, F1 = 0.91), and per-sector RNFL thinning prediction accuracy ranging from 0.83 to 0.94; generated textual reports exhibit high semantic fidelity to expert annotations.
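The screening metrics reported above (accuracy, sensitivity, specificity, F1) all derive from the binary confusion matrix. A minimal sketch of that computation, using hypothetical labels rather than the study's data:

```python
# Sketch of the binary classification metrics reported in this study
# (accuracy, sensitivity, specificity, precision, F1), computed from
# a confusion matrix. The labels below are hypothetical examples,
# not the study's data.

def binary_metrics(y_true, y_pred):
    """Return screening metrics for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall on positives
    specificity = tn / (tn + fp) if tn + fp else 0.0  # recall on negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Hypothetical eye-level predictions: 1 = glaucoma, 0 = healthy.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
m = binary_metrics(y_true, y_pred)
```

The same per-class counting extends to the sector-wise RNFL thinning task by computing these metrics once per sector.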
📝 Abstract
Objective: To develop an explainable multimodal large language model (MM-LLM) that (1) screens optic nerve head (ONH) OCT circle scans for quality and (2) generates structured clinical reports that include glaucoma diagnosis and sector-wise retinal nerve fiber layer (RNFL) thinning assessments.

Design: Retrospective cohort study of 1,310 subjects contributing 43,849 Spectralis ONH OCT circle scans (1,331 glaucomatous and 867 healthy eyes) from the DIGS and ADAGES cohorts.

Methods: An MM-LLM (Llama 3.2 Vision-Instruct) was fine-tuned to generate clinical descriptions of OCT imaging data. Training data included paired OCT images and automatically generated, structured clinical reports describing global and sectoral RNFL thinning. Poor-quality scans were labeled as unusable and paired with a fixed refusal statement. The model was evaluated on a held-out test set for three tasks: quality assessment, glaucoma detection, and RNFL thinning classification across seven anatomical sectors. Evaluation metrics included accuracy, sensitivity, specificity, precision, and F1-score. The quality of generated descriptions was also evaluated using standard text-generation metrics.

Results: The model achieved 0.90 accuracy and 0.98 specificity for quality triage. For glaucoma detection, accuracy was 0.86 (sensitivity 0.91, specificity 0.73, F1-score 0.91). RNFL thinning prediction accuracy ranged from 0.83 to 0.94, with highest performance in the global and temporal sectors. Text-generation scores showed strong alignment with reference reports (BLEU: 0.82; ROUGE-1: 0.94; ROUGE-2: 0.87; ROUGE-L: 0.92; BERTScore-F1: 0.99).

Conclusions: The fine-tuned MM-LLM generated accurate clinical descriptions of OCT imaging. The model achieved high accuracy in identifying image quality issues and detecting glaucoma, and provided sectoral descriptions of RNFL thinning to help support clinical OCT evaluation.
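The text-generation scores in the Results compare n-gram overlap between generated and reference reports. A simplified, self-contained illustration in the spirit of ROUGE-N recall follows; the study's actual evaluation would use dedicated packages (e.g., for BLEU and BERTScore), and the example sentences are hypothetical:

```python
# Simplified n-gram overlap score in the spirit of ROUGE-N recall,
# the family of metrics used to compare generated and reference
# reports. This is an illustrative sketch, not the study's exact
# implementation; BLEU and BERTScore require dedicated libraries.
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(reference, candidate, n=1):
    """Fraction of reference n-grams matched (clipped) in the candidate."""
    ref = ngrams(reference.lower().split(), n)
    cand = ngrams(candidate.lower().split(), n)
    overlap = sum((ref & cand).values())  # clipped n-gram matches
    total = sum(ref.values())
    return overlap / total if total else 0.0

# Hypothetical report sentences, not from the study's data.
reference = "global rnfl thinning with temporal sector involvement"
candidate = "global rnfl thinning with involvement of the temporal sector"
r1 = rouge_n_recall(reference, candidate, n=1)  # unigram overlap
r2 = rouge_n_recall(reference, candidate, n=2)  # bigram overlap
```

As in the reported results, the bigram score (ROUGE-2) is lower than the unigram score (ROUGE-1) because it additionally penalizes word-order differences.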