🤖 AI Summary
Existing medical multimodal large language models (e.g., LLaVA-Med) struggle to jointly leverage color fundus photography (CFP) and optical coherence tomography (OCT) images, and show limited ability to clinically interpret OCT-derived quantitative biomarkers.
Method: We propose a "quantitative-to-qualitative" diagnostic chain-of-thought paradigm that combines CLIP-style cross-modal alignment, knowledge-guided instruction generation, and LoRA-based fine-tuning of the 7B-parameter Qwen2 backbone, yielding an ophthalmology-specific multimodal large language model (MLLM) with clinical reasoning capacity.
Contribution/Results: The model supports fine-grained lesion localization and interpretable diagnostic reasoning. On our proprietary ophthalmic benchmark, the 7B variant outperforms a 32B baseline and surpasses OpenAI o3 in both diagnostic report quality and fine-grained clinical evaluation, substantially improving synergistic CFP–OCT interpretation.
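The CLIP-style cross-modal alignment mentioned above can be illustrated with a minimal sketch. This is not GROK's actual implementation; the function name, embedding dimensions, and temperature value are illustrative assumptions. It shows the standard symmetric InfoNCE objective that CLIP-style training uses to align paired embeddings, here imagined as OCT image embeddings paired with quantitative-biomarker embeddings:

```python
import torch
import torch.nn.functional as F

def clip_alignment_loss(oct_embeds, biomarker_embeds, temperature=0.07):
    """Symmetric CLIP-style contrastive loss (illustrative, not GROK's code).

    Matched OCT/biomarker pairs sit on the diagonal of the similarity
    matrix; the loss pulls them together and pushes mismatched pairs apart.
    """
    # L2-normalize so dot products are cosine similarities
    oct_embeds = F.normalize(oct_embeds, dim=-1)
    biomarker_embeds = F.normalize(biomarker_embeds, dim=-1)

    # (batch, batch) similarity matrix, scaled by temperature
    logits = oct_embeds @ biomarker_embeds.t() / temperature

    # correct pairing is the diagonal: sample i matches sample i
    targets = torch.arange(logits.size(0), device=logits.device)

    # average the OCT->biomarker and biomarker->OCT directions
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

After training with such an objective, the OCT encoder's embedding space is organized around the quantitative biomarkers, which is what lets a downstream language model ground its qualitative reasoning in those measurements.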
📝 Abstract
Multimodal large language models (MLLMs) hold promise for integrating diverse data modalities, but current medical adaptations such as LLaVA-Med often fail to fully exploit the synergy between color fundus photography (CFP) and optical coherence tomography (OCT), and offer limited interpretability of quantitative biomarkers. We introduce GROK, a grounded multimodal large language model that jointly processes CFP, OCT, and text to deliver clinician-grade diagnoses of ocular and systemic disease. GROK comprises three core modules: Knowledge-Guided Instruction Generation, CLIP-Style OCT-Biomarker Alignment, and Supervised Instruction Fine-Tuning. Together these establish a quantitative-to-qualitative diagnostic chain of thought that mirrors real clinical reasoning while producing detailed lesion annotations. To evaluate our approach, we introduce the Grounded Ophthalmic Understanding benchmark, which covers six disease categories and three tasks: macro-level diagnostic classification, report generation quality, and fine-grained clinical assessment of the generated chain of thought. Experiments show that, with only LoRA (Low-Rank Adaptation) fine-tuning of a 7B-parameter Qwen2 backbone, GROK outperforms comparable 7B and 32B baselines on both report quality and fine-grained clinical metrics, and even exceeds OpenAI o3. Code and data are publicly available in the GROK repository.
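The LoRA fine-tuning strategy referenced in the abstract can be sketched as follows. This is a generic illustration of the LoRA mechanism, not GROK's training code; the layer sizes, rank, and scaling values are assumptions. A frozen base linear layer is augmented with a trainable low-rank update, so only a small fraction of parameters are updated during fine-tuning:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: y = W x + (alpha / r) * B A x.

    The base weight W is frozen; only the low-rank factors A (r x in)
    and B (out x r) are trainable. B starts at zero, so the wrapped
    layer initially behaves exactly like the frozen base layer.
    """
    def __init__(self, base: nn.Linear, r: int = 4, alpha: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights

        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```

With rank r much smaller than the layer width, the trainable parameter count drops from `in * out` to `r * (in + out)` per layer, which is why a 7B backbone can be adapted on modest hardware. In practice, libraries such as Hugging Face PEFT apply this wrapper to selected attention projections of the backbone.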