🤖 AI Summary
This work addresses the challenge that ophthalmic diagnosis and treatment rely on detecting subtle lesions and tracking their temporal evolution across multimodal retinal imaging, while existing medical foundation models remain predominantly static and struggle with modality discrepancies and variations in image quality. To overcome this limitation, the study introduces dynamical-system modeling into ophthalmic AI for the first time, conceptualizing the eye as a partially observed dynamical system and constructing a unified generative world model. By leveraging a shared latent state, the model achieves cross-modal alignment, temporally consistent state representations, and structure-preserving cross-modal translation, further enhanced by longitudinal temporal supervision. The proposed approach significantly improves robustness in multimodal retinal image understanding, enabling high-quality cross-modal synthesis and anatomically stable prediction of lesion progression.
📝 Abstract
Ophthalmic decision-making depends on subtle lesion-scale cues interpreted across multimodal imaging and over time, yet most medical foundation models remain static and degrade under modality and acquisition shifts. Here we introduce EyeWorld, a generative world model that conceptualizes the eye as a partially observed dynamical system grounded in clinical imaging. EyeWorld learns an observation-stable latent ocular state shared across modalities, unifying fine-grained parsing, structure-preserving cross-modality translation and quality-robust enhancement within a single framework. Longitudinal supervision further enables time-conditioned state transitions, supporting forecasting of clinically meaningful progression while preserving stable anatomy. By moving from static representation learning to explicit dynamical modeling, EyeWorld provides a unified approach to robust multimodal interpretation and prognosis-oriented simulation in medicine.
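The abstract's core idea, a shared latent ocular state with modality-specific encoders/decoders and a time-conditioned transition, can be illustrated with a deliberately simplified sketch. Everything below (the linear maps, dimensions, modality names `cfp`/`oct`, and the toy dynamics) is an illustrative assumption, not EyeWorld's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, OBS = 8, 16  # illustrative latent and observation dimensions

# Linear "encoders"/"decoders" for two hypothetical modalities
# (e.g. color fundus photography and OCT); weights are random stand-ins.
enc = {m: rng.normal(size=(LATENT, OBS)) * 0.1 for m in ("cfp", "oct")}
dec = {m: rng.normal(size=(OBS, LATENT)) * 0.1 for m in ("cfp", "oct")}

# Toy time-conditioned transition: z_{t+dt} = A z_t + dt * b.
A = np.eye(LATENT) * 0.95
b = rng.normal(size=LATENT) * 0.01

def encode(x, modality):
    """Map a modality-specific observation into the shared latent state."""
    return enc[modality] @ x

def transition(z, dt):
    """Advance the latent ocular state by dt (e.g. months between visits)."""
    return A @ z + dt * b

def decode(z, modality):
    """Render the latent state in a chosen modality (cross-modal translation)."""
    return dec[modality] @ z

# Cross-modal, time-forward synthesis: CFP observation -> shared latent
# -> state advanced 6 time units -> synthetic OCT-style observation.
x_cfp = rng.normal(size=OBS)
z = encode(x_cfp, "cfp")
x_oct_hat = decode(transition(z, dt=6.0), "oct")
print(x_oct_hat.shape)  # (16,)
```

Because every modality routes through the same latent `z`, alignment, translation, and forecasting all become operations on one state, which is the structural point the abstract makes; the real model would replace these linear maps with learned deep networks.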