🤖 AI Summary
This paper addresses cross-modal medical image segmentation under source-free unsupervised domain adaptation (SFUDA), where neither source-domain data nor target-domain annotations are accessible and domain shift severely degrades performance. To tackle this, we propose HEAL, the first learning-free SFUDA framework, built on three key components: (1) hierarchical autoencoder-based denoising to suppress modality-specific noise; (2) edge-guided pseudo-label selection to improve label confidence; and (3) size-aware feature fusion that incorporates anatomical priors for adaptive feature reweighting. Crucially, HEAL requires no fine-tuning of network parameters. Extensive experiments on multi-center, multi-modal datasets show that HEAL consistently outperforms prior methods, setting a new state of the art, and its robustness and generalizability are validated under diverse domain-shift scenarios, confirming the effectiveness of learning-free adaptation in cross-modal medical segmentation.
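The three components above can be illustrated with a minimal, learning-free pseudo-label refinement sketch. This is not the authors' implementation: the multi-scale box smoothing, gradient-based edge check, and size-prior rescaling below are simplified stand-ins for the paper's hierarchical autoencoder denoising, edge-guided selection, and size-aware fusion, and all function names, thresholds, and the `prior_frac` size prior are illustrative assumptions.

```python
import numpy as np


def hierarchical_denoise(prob, levels=3):
    """Average box-smoothed copies of a probability map at growing scales
    (a crude stand-in for the paper's hierarchical autoencoder denoising)."""
    h, w = prob.shape
    acc = np.zeros_like(prob)
    for level in range(levels):
        k = 2 ** level  # neighborhood radius doubles per level
        pad = np.pad(prob, k, mode="edge")
        sm = np.empty_like(prob)
        for i in range(h):
            for j in range(w):
                sm[i, j] = pad[i:i + 2 * k + 1, j:j + 2 * k + 1].mean()
        acc += sm
    return acc / levels


def size_aware_reweight(prob, prior_frac):
    """Rescale foreground scores so the predicted region size roughly
    matches an assumed anatomical size prior (fraction of the image)."""
    pred_frac = (prob >= 0.5).mean()
    if pred_frac == 0:
        return prob
    return np.clip(prob * (prior_frac / pred_frac), 0.0, 1.0)


def edge_guided_select(prob, tau=0.8, edge_tau=0.15):
    """Keep only pixels that are confidently fore/background AND not on a
    high-gradient (ambiguous boundary) region of the probability map."""
    gy, gx = np.gradient(prob)
    edge = np.hypot(gx, gy)
    confident = np.maximum(prob, 1.0 - prob) >= tau
    return confident & (edge < edge_tau)


def learning_free_pseudo_labels(prob, prior_frac, levels=3):
    """Full pipeline: denoise -> size-aware fusion -> edge-guided selection.
    Returns binary pseudo-labels and a mask of pixels deemed reliable.
    No network parameters are updated anywhere, mirroring HEAL's
    learning-free characteristic."""
    den = hierarchical_denoise(prob, levels=levels)
    fused = size_aware_reweight(den, prior_frac)
    selected = edge_guided_select(fused)
    labels = fused >= 0.5
    return labels, selected


# Toy example: a noiseless 16x16 "organ" probability map with a bright square.
prob = np.full((16, 16), 0.1)
prob[4:12, 4:12] = 0.9
labels, selected = learning_free_pseudo_labels(prob, prior_frac=0.25)
```

In this toy run, the square's interior survives as a confident foreground pseudo-label, the background stays confidently negative, and pixels on the blurred boundary are excluded from the selection mask because their confidence drops toward 0.5.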
📝 Abstract
Growing demands for clinical data privacy and storage constraints have spurred advances in Source-Free Unsupervised Domain Adaptation (SFUDA). SFUDA addresses domain shift by adapting a model trained on the source domain to an unseen target domain without access to the source data, even when target-domain labels are unavailable. These source-free and unsupervised settings pose significant challenges: neither source-domain data nor label supervision in the target domain is available. To address these issues, we propose HEAL, a novel SFUDA framework that integrates Hierarchical denoising, Edge-guided selection, size-Aware fusion, and a Learning-free design. Large-scale cross-modality experiments demonstrate that our method outperforms existing SFUDA approaches, achieving state-of-the-art (SOTA) performance. The source code is publicly available at: https://github.com/derekshiii/HEAL.