Forgetting-Resistant and Lesion-Aware Source-Free Domain Adaptive Fundus Image Analysis with Vision-Language Model

📅 2026-02-22
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of source-free domain adaptation for fundus image analysis, where existing methods often suffer from catastrophic forgetting and struggle to leverage fine-grained lesion knowledge embedded in vision-language models (VLMs). To mitigate category forgetting without access to source data, the proposed approach preserves class-level knowledge through high-confidence predictions on target samples. Furthermore, it introduces a patch-based, lesion-aware module that extracts localized pathological semantics from a pre-trained VLM to guide the model's attention toward diagnostically relevant regions. Extensive experiments across multiple fundus datasets demonstrate that the method significantly outperforms current state-of-the-art approaches and the original VLM, confirming its effectiveness and robustness in real-world cross-domain retinal image analysis.
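The patch-based, lesion-aware idea described above can be illustrated with a CLIP-style scoring loop: compare each image-patch embedding against text embeddings of lesion prompts and assign each patch its best-matching prompt. This is a minimal sketch under assumed names and shapes (`patch_lesion_scores`, two-dimensional toy embeddings, example prompts), not the paper's actual implementation.

```python
# Hedged sketch: CLIP-style patch-wise lesion scoring. All names, prompts,
# and embedding sizes here are illustrative assumptions.
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def patch_lesion_scores(patch_embeds, text_embeds):
    """For each patch embedding, return the index of the best-matching
    text prompt (e.g., 0 = 'healthy retina', 1 = 'hard exudates')."""
    labels = []
    for p in patch_embeds:
        sims = [cosine(p, t) for t in text_embeds]
        labels.append(sims.index(max(sims)))
    return labels

# Toy example: two patches, two prompts with orthogonal embeddings.
patches = [[1.0, 0.0], [0.0, 1.0]]
prompts = [[1.0, 0.0], [0.0, 1.0]]
print(patch_lesion_scores(patches, prompts))  # -> [0, 1]
```

In a real system the embeddings would come from the VLM's image and text encoders; the per-patch labels could then serve as a spatial signal highlighting lesion regions.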

πŸ“ Abstract
Source-free domain adaptation (SFDA) aims to adapt a model trained on the source domain to perform well in the target domain, given only unlabeled target-domain data and the source model. Because conventional SFDA methods are inevitably error-prone under domain shift, greater attention has recently been directed to SFDA assisted by off-the-shelf foundation models, e.g., vision-language (ViL) models. However, existing works leveraging ViL models for SFDA confront two issues: (i) although mutual information is exploited to consider the joint distribution between the predictions of the ViL model and the target model, we argue that some superior predictions of the target model are still forgotten, as indicated by the decline in the accuracies of certain classes during adaptation; (ii) prior research disregards the rich, fine-grained knowledge embedded in the ViL model, which offers detailed grounding for fundus image diagnosis. In this paper, we introduce a novel forgetting-resistant and lesion-aware (FRLA) method for SFDA of fundus image diagnosis with a ViL model. Specifically, a forgetting-resistant adaptation module explicitly preserves the confident predictions of the target model, and a lesion-aware adaptation module yields patch-wise predictions from the ViL model and employs them to make the target model aware of lesion areas and to leverage the ViL model's fine-grained knowledge. Extensive experiments show that our method not only significantly outperforms the vision-language model but also achieves consistent improvements over state-of-the-art methods. Our code will be released.
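The forgetting-resistant idea in the abstract — explicitly preserving the target model's confident predictions rather than letting the ViL model override them — can be sketched as a simple pseudo-label selection rule. The function name, the confidence threshold, and the fallback policy below are assumptions for illustration, not the paper's actual algorithm.

```python
# Hedged sketch: keep the target model's prediction when it is confident
# ("forgetting-resistant"); otherwise defer to the ViL model. Threshold
# and helper names are illustrative assumptions.
def select_pseudo_labels(target_probs, vil_probs, conf_threshold=0.9):
    """target_probs, vil_probs: per-sample class-probability lists.
    Returns one pseudo-label per sample."""
    pseudo_labels = []
    for t_p, v_p in zip(target_probs, vil_probs):
        t_conf = max(t_p)
        if t_conf >= conf_threshold:
            # Preserve the target model's confident prediction.
            pseudo_labels.append(t_p.index(t_conf))
        else:
            # Low confidence: fall back to the ViL model's prediction.
            pseudo_labels.append(v_p.index(max(v_p)))
    return pseudo_labels

# Example: two samples, three classes.
target_probs = [[0.95, 0.03, 0.02],   # confident -> keep target class 0
                [0.40, 0.35, 0.25]]   # uncertain -> defer to ViL
vil_probs    = [[0.10, 0.80, 0.10],
                [0.05, 0.90, 0.05]]
print(select_pseudo_labels(target_probs, vil_probs))  # -> [0, 1]
```

A rule of this shape addresses the class-accuracy decline the abstract describes: classes the target model already handles well are never overwritten by a weaker ViL prediction during adaptation.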
Problem

Research questions and friction points this paper is trying to address.

source-free domain adaptation
forgetting
lesion-aware
fundus image analysis
vision-language model
Innovation

Methods, ideas, or system contributions that make the work stand out.

source-free domain adaptation
vision-language model
forgetting resistance
lesion-aware
fundus image analysis
Zheang Huai
The Hong Kong University of Science and Technology, Kowloon, Hong Kong
Hui Tang
The Hong Kong University of Science and Technology
Computer Vision and Pattern Recognition, Machine Learning
Hualiang Wang
The Hong Kong University of Science and Technology, Kowloon, Hong Kong
Xiaomeng Li
Assistant Professor, The Hong Kong University of Science and Technology
Medical Image Analysis, AI in Healthcare, Deep Learning