🤖 AI Summary
This work addresses the limitations of existing large vision-language models (VLMs) in ophthalmic diagnosis, which often exhibit insufficient domain-specific perception and reasoning that drifts from visual evidence, owing to a lack of expert knowledge. To bridge this gap, the authors propose the EyExIn framework, which deeply integrates ophthalmological expertise into retinal VLMs through a dual-stream expert-aware encoder, semantic-adaptive gating fusion, and a residual visual anchoring mechanism. This design ensures persistent alignment of visual evidence within deep language model layers, enhancing pathological feature representation while suppressing misleading language priors. Evaluated on four ophthalmic visual question answering benchmarks, EyExIn achieves state-of-the-art performance, significantly outperforming leading closed-source systems and demonstrating improved fine-grained lesion recognition and more trustworthy reasoning.
📝 Abstract
Large Vision Language Models (LVLMs) show immense potential for automated ophthalmic diagnosis. However, their clinical deployment is severely hindered by a lack of domain-specific knowledge. In this work, we identify two structural deficiencies hindering reliable medical reasoning: 1) the Perception Gap, where general-purpose visual encoders fail to resolve fine-grained pathological cues (e.g., microaneurysms); and 2) the Reasoning Gap, where sparse visual evidence is progressively overridden by massive language priors in deeper transformer layers, leading to ungrounded hallucinations. To bridge these gaps, we propose EyExIn, a data-efficient framework designed to anchor retinal VLMs with expert knowledge via a Deep Expert Injection mechanism. Our architecture employs an Expert-Aware Dual-Stream encoding strategy that decouples visual representation into a general stream for anatomical context and a specialized expert stream for pathological semantics. To ensure high-fidelity integration, we design a Semantic-Adaptive Gated Fusion module, which dynamically amplifies subtle lesion signals while filtering irrelevant background noise. Furthermore, we introduce Adaptive Deep Expert Injection to embed persistent "Vision Anchors" by integrating fused visual features as residual biases directly into intermediate LLM layers. This mechanism creates a visual shortcut that forces the reasoning stack to remain strictly grounded in visual evidence. Extensive experiments across four benchmarks demonstrate that our model consistently outperforms massive proprietary systems. EyExIn significantly enhances domain-specific knowledge embedding and achieves state-of-the-art precision in ophthalmic visual question answering, advancing the development of trustworthy ophthalmic AI.
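To make the two mechanisms concrete, the sketch below illustrates the general shape of (a) a semantic-adaptive gated fusion of a general and an expert visual stream, and (b) residual injection of the fused features into an intermediate hidden state. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation: the gate form (a sigmoid over the concatenated streams), the projection matrices, and the injection strength `alpha` are all hypothetical stand-ins for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 8          # feature dimension (illustrative)
n_tokens = 4   # number of visual tokens

# Two visual streams per token: general (anatomical context)
# and expert (pathological semantics).
general = rng.normal(size=(n_tokens, d))
expert = rng.normal(size=(n_tokens, d))

# (a) Semantic-adaptive gated fusion (assumed form): a projection of the
# concatenated streams, squashed to (0, 1), decides per dimension how much
# expert signal to admit -- amplifying lesion cues, damping background.
W_gate = rng.normal(size=(2 * d, d)) * 0.1  # stand-in for learned weights
gate = sigmoid(np.concatenate([general, expert], axis=-1) @ W_gate)
fused = general + gate * expert

# (b) Deep expert injection (assumed form): add the fused features as a
# residual bias to an intermediate LLM hidden state, acting as a
# persistent "vision anchor" for deeper reasoning layers.
hidden = rng.normal(size=(n_tokens, d))     # stand-in mid-layer state
W_proj = rng.normal(size=(d, d)) * 0.1      # stand-in learned projection
alpha = 0.5                                 # hypothetical injection strength
hidden_anchored = hidden + alpha * (fused @ W_proj)

print(hidden_anchored.shape)  # (4, 8)
```

The residual form means the injection cannot erase the language stream's state, only bias it toward the visual evidence, which matches the paper's description of a "visual shortcut" rather than a replacement pathway.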