Universal Vessel Segmentation for Multi-Modality Retinal Images

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current retinal vessel segmentation methods suffer from modality-specific dependency and poor cross-modal generalization: mainstream approaches are designed exclusively for color fundus images, while other clinically prevalent modalities—such as multi-color scanning laser ophthalmoscopy (MC-SLO)—lack universal segmentation models. Existing attempts to extend segmentation to new modalities still require modality-specific fine-tuning and additional annotated data. To address this, we propose UVSM, the first cross-modal universal retinal vessel segmentation model, built upon a unified encoder-decoder architecture. UVSM incorporates modality-adaptive normalization and multi-scale feature disentanglement to enable zero-shot deployment across six clinical imaging modalities—including color fundus and MC-SLO—without fine-tuning. Trained jointly on multi-source, multi-modal datasets, UVSM achieves a mean Dice score of 84.7%, matching the performance of state-of-the-art modality-specific models while substantially reducing annotation requirements and deployment overhead.
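The summary names "modality-adaptive normalization" as one of UVSM's mechanisms but does not spell it out. A common way to realize such a layer is instance normalization with a separate learned affine transform per imaging modality, so one shared backbone can re-calibrate features for color fundus, MC-SLO, and so on. The sketch below is a minimal NumPy illustration of that general idea; the class name, shapes, and parameterization are assumptions, not the paper's published implementation.

```python
import numpy as np

class ModalityAdaptiveNorm:
    """Instance normalization with per-modality affine parameters.

    Hypothetical sketch of the mechanism named in the summary: the
    normalization statistics are shared, while each imaging modality
    (e.g. "CF", "MC-SLO") gets its own scale (gamma) and shift (beta).
    """

    def __init__(self, num_channels, modalities, eps=1e-5):
        self.eps = eps
        # One (gamma, beta) pair per imaging modality.
        self.gamma = {m: np.ones(num_channels) for m in modalities}
        self.beta = {m: np.zeros(num_channels) for m in modalities}

    def __call__(self, x, modality):
        # x: feature map of shape (channels, height, width).
        mean = x.mean(axis=(1, 2), keepdims=True)
        std = x.std(axis=(1, 2), keepdims=True)
        x_hat = (x - mean) / (std + self.eps)
        # Apply the modality-specific affine transform per channel.
        g = self.gamma[modality][:, None, None]
        b = self.beta[modality][:, None, None]
        return g * x_hat + b

norm = ModalityAdaptiveNorm(num_channels=4, modalities=["CF", "MC-SLO"])
feat = np.random.randn(4, 8, 8) * 3.0 + 1.0
out = norm(feat, "CF")
```

With the initial identity affine (gamma = 1, beta = 0), each channel of `out` has approximately zero mean and unit variance; training would then adapt gamma and beta separately for each modality.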

📝 Abstract
We identify two major limitations in existing studies on retinal vessel segmentation: (1) Most existing works are restricted to a single modality, i.e., the Color Fundus (CF). However, multi-modality retinal images are used daily in the study of the retina and retinal diseases, and work on vessel segmentation in the other modalities is scarce. (2) Even though a small number of works extended their experiments to a few new modalities, such as Multi-Color Scanning Laser Ophthalmoscopy (MC), these works still require finetuning a separate model for each new modality, and the finetuning requires extra training data, which is difficult to acquire. In this work, we present a foundational universal vessel segmentation model (UVSM) for multi-modality retinal images. Not only do we perform the study on a much wider range of modalities, but we also propose a universal model that segments the vessels in all these commonly-used modalities. Despite being much more versatile than existing methods, our universal model still demonstrates performance comparable to state-of-the-art finetuned methods. To the best of our knowledge, this is the first work to achieve cross-modality retinal vessel segmentation and also the first to study retinal vessel segmentation in several novel modalities.
Problem

Research questions and friction points this paper is trying to address.

Addresses the restriction of existing vessel segmentation methods to a single modality (color fundus)
Proposes a universal model covering diverse retinal imaging modalities
Eliminates the need for modality-specific finetuning and extra training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

A single universal model for multi-modality vessel segmentation
Eliminates the need for per-modality finetuning of separate models
First to achieve cross-modality retinal vessel segmentation
Authors
Bo Wen
Anna Heinke (University of California San Diego)
A. Agnihotri
Dirk-Uwe Bartsch
William R. Freeman
Truong Nguyen (IEEE Fellow)
Cheolhong An (University of California San Diego)