AMD-Mamba: A Phenotype-Aware Multi-Modal Framework for Robust AMD Prognosis

📅 2025-08-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses two key challenges in age-related macular degeneration (AMD) prognosis: insufficient model robustness and difficulty in fusing heterogeneous multimodal data. To this end, we propose a phenotype-aware multimodal prognostic framework. Methodologically, we innovatively embed AMD severity scores into metric learning to guide a Vision Mamba model in jointly modeling color fundus images—capturing both localized lesions and global vascular patterns—alongside genetic variants and sociodemographic variables. We further introduce a multi-scale feature fusion mechanism and cross-modal alignment to enable synergistic representation of clinical, imaging, and genetic information across resolutions. Validated on the AREDS cohort, our framework identifies a novel biomarker—one of the most significant predictors of AMD progression reported to date—and substantially improves early high-risk patient identification (AUC increase). The resulting prognostic tool is both interpretable and deployable, supporting precision intervention in clinical practice.

Technology Category

Application Category

📝 Abstract
Age-related macular degeneration (AMD) is a leading cause of irreversible vision loss, making effective prognosis crucial for timely intervention. In this work, we propose AMD-Mamba, a novel multi-modal framework for AMD prognosis, and further develop a new AMD biomarker. This framework integrates color fundus images with genetic variants and socio-demographic variables. At its core, AMD-Mamba introduces an innovative metric learning strategy that leverages AMD severity scale score as prior knowledge. This strategy allows the model to learn richer feature representations by aligning learned features with clinical phenotypes, thereby improving the capability of conventional prognosis methods in capturing disease progression patterns. In addition, unlike existing models that use traditional CNN backbones and focus primarily on local information, such as the presence of drusen, AMD-Mamba applies Vision Mamba and simultaneously fuses local and long-range global information, such as vascular changes. Furthermore, we enhance prediction performance through multi-scale fusion, combining image information with clinical variables at different resolutions. We evaluate AMD-Mamba on the AREDS dataset, which includes 45,818 color fundus photographs, 52 genetic variants, and 3 socio-demographic variables from 2,741 subjects. Our experimental results demonstrate that our proposed biomarker is one of the most significant biomarkers for the progression of AMD. Notably, combining this biomarker with other existing variables yields promising improvements in detecting high-risk AMD patients at early stages. These findings highlight the potential of our multi-modal framework to facilitate more precise and proactive management of AMD.
Problem

Research questions and friction points this paper is trying to address.

Develops a multi-modal framework for AMD prognosis using diverse data inputs
Introduces a new biomarker to improve early detection of high-risk AMD patients
Enhances feature representation by aligning clinical phenotypes with learned features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates fundus images, genetics, socio-demographics
Uses metric learning with clinical phenotype alignment
Applies Vision Mamba for local-global fusion
🔎 Similar Papers
No similar papers found.