AI Summary
Existing ultrasound image segmentation methods suffer from poor generalizability and limited cross-organ, multi-task transferability. Method: This paper proposes the first universal semi-supervised ultrasound segmentation framework, built on a shared visual encoder and a prompt-driven dual-decoder architecture for flexible task adaptation, together with a novel uncertainty-driven pseudo-label calibration (UPLC) module that improves the reliability of pseudo-labels derived from unlabeled data. Contribution/Results: Trained jointly on multi-source ultrasound data spanning five organs and eight segmentation tasks, the framework significantly outperforms state-of-the-art supervised and semi-supervised methods across multiple metrics, establishing a new benchmark for generalizable ultrasound segmentation and offering a practical path to broad clinical deployment under low labeling budgets.
Abstract
Existing approaches to ultrasound image segmentation, whether supervised or semi-supervised, are typically specialized for specific anatomical structures or tasks, limiting their practical utility in clinical settings. In this paper, we pioneer the task of universal semi-supervised ultrasound image segmentation and propose ProPL, a framework that handles multiple organs and segmentation tasks while leveraging both labeled and unlabeled data. At its core, ProPL couples a shared vision encoder with prompt-guided dual decoders, enabling flexible task adaptation through a prompting-upon-decoding mechanism and reliable self-training via an uncertainty-driven pseudo-label calibration (UPLC) module. To facilitate research in this direction, we introduce a comprehensive ultrasound dataset spanning 5 organs and 8 segmentation tasks. Extensive experiments demonstrate that ProPL outperforms state-of-the-art methods across various metrics, establishing a new benchmark for universal ultrasound image segmentation.
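The UPLC module's role is to decide which pseudo-labels on unlabeled images are reliable enough to train on. The paper's exact calibration procedure is not given here, but the general idea behind uncertainty-driven pseudo-label filtering can be sketched as follows: score each pixel's predicted class distribution by its entropy and mask out high-uncertainty pixels from the self-training loss. The function names and the threshold `tau` below are illustrative assumptions, not ProPL's actual API.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of one pixel's class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def calibrate_pseudo_labels(prob_maps, tau=0.3):
    """Keep a pixel's pseudo-label only when the model is confident there.

    prob_maps: list of per-pixel class-probability vectors.
    tau: entropy threshold (hypothetical value); larger tau keeps more pixels.
    Returns one entry per pixel: the argmax class index if the prediction is
    confident, or None to exclude that pixel from the self-training loss.
    """
    calibrated = []
    for probs in prob_maps:
        if entropy(probs) <= tau:
            # Confident pixel: use its argmax class as the pseudo-label.
            calibrated.append(max(range(len(probs)), key=lambda k: probs[k]))
        else:
            # Too uncertain: mask this pixel out of pseudo-supervision.
            calibrated.append(None)
    return calibrated
```

For example, `calibrate_pseudo_labels([[0.95, 0.05], [0.5, 0.5]], tau=0.3)` keeps the confident first pixel (label 0) and masks the ambiguous second one. Real implementations typically operate on dense probability tensors and may also weight, rather than hard-mask, uncertain regions.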