🤖 AI Summary
To address challenges in cross-domain segmentation of cardiac ultrasound images—including incomplete pseudo-labels, substantial quality discrepancies between source and target domains, and strong reliance on supervision—this paper proposes a human-centered domain adaptation framework. It introduces clinician gaze trajectories as human cognitive priors for ultrasound analysis for the first time. Methodologically, it designs a gaze-enhanced alignment module and a gaze-balanced loss to enforce cognition-guided structural consistency in feature space, and further integrates gaze heatmap modeling, multimodal losses, and unsupervised/semi-supervised learning strategies. Evaluated on multiple clinical datasets, the approach significantly outperforms GAN-based and self-training baselines, achieving over 8% improvement in average segmentation accuracy. The framework delivers clinically interpretable outputs and demonstrates strong translational potential for real-world deployment.
📝 Abstract
Domain adaptation (DA) for cardiac ultrasound image segmentation is clinically significant and valuable. However, previous domain adaptation methods are vulnerable to incomplete pseudo-labels and low-quality target-to-source image translations. Human-centric domain adaptation offers the key advantage of human cognitive guidance, which helps the model adapt to the target domain and reduces reliance on labels. Clinicians' gaze trajectories contain a wealth of cross-domain human guidance. To leverage gaze information and human cognition for guiding domain adaptation, we propose gaze-assisted human-centric domain adaptation (GAHCDA), which reliably guides the domain adaptation of cardiac ultrasound images. GAHCDA includes the following modules: (1) Gaze Augment Alignment (GAA): GAA enables the model to acquire general features grounded in human cognition, so that it recognizes segmentation targets across different domains of cardiac ultrasound images as humans do. (2) Gaze Balance Loss (GBL): GBL fuses the gaze heatmap with the model outputs, making the segmentation results structurally closer to the target domain. Experimental results illustrate that our proposed framework segments cardiac ultrasound images in the target domain more effectively than GAN-based and other self-training-based methods, showing great potential for clinical application.
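The abstract describes the Gaze Balance Loss as fusing a clinician gaze heatmap with the model's outputs so that attended regions carry more weight. The paper does not give the exact formulation here, so the following is a minimal illustrative sketch, assuming the heatmap acts as a soft per-pixel weight on a standard binary cross-entropy term; the function name, `alpha` blending parameter, and normalization scheme are all hypothetical, not the authors' actual loss.

```python
import numpy as np

def gaze_balance_loss(pred, target, gaze_heatmap, alpha=0.5, eps=1e-7):
    """Hypothetical gaze-weighted loss (NOT the paper's exact GBL).

    Computes pixel-wise binary cross-entropy, then reweights each pixel by
    a normalized clinician gaze heatmap, so gaze-attended regions dominate.
    `alpha` blends a uniform baseline weight with the gaze-derived weight.
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    # Standard per-pixel binary cross-entropy.
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    # Normalize the gaze heatmap to [0, 1] so it acts as a soft weight map.
    g = (gaze_heatmap - gaze_heatmap.min()) / (
        gaze_heatmap.max() - gaze_heatmap.min() + eps
    )
    weights = alpha + (1.0 - alpha) * g
    # Weighted mean: pixels under the clinician's gaze contribute more.
    return float((weights * bce).sum() / weights.sum())
```

Under this sketch, a prediction that disagrees with the ground truth inside a high-gaze region is penalized more than the same error in a region the clinician never looked at, which is one plausible mechanism for the "structurally closer to the target domain" effect the abstract claims.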