🤖 AI Summary
This work addresses the challenge in medical image segmentation where only a subset of organs is annotated, due to clinical prioritization and high annotation costs. To tackle this, the authors propose the IPnP framework, which integrates foundation model-guided iterative prompting with a pseudo-labeling mechanism. IPnP pairs a trainable segmentation network (specialist) with a frozen foundation model (generalist) to iteratively generate and refine pseudo-labels for unannotated organs, progressively recovering full-organ supervision signals. Evaluated on the AMOS dataset, IPnP substantially outperforms existing partial-label methods and approaches the fully supervised upper bound. Its clinical applicability is further demonstrated on a real-world cohort of 210 head-and-neck cancer cases with partial annotations.
📝 Abstract
Automated medical image segmentation has achieved remarkable progress with fully labeled data. However, site-specific clinical priorities and the high cost of manual annotation often yield scans in which only a subset of organs is labeled, leading to the partial-label problem, which degrades segmentation performance. To address this issue, we propose IPnP, an Iteratively Prompting and Pseudo-labeling framework for partially labeled medical image segmentation. IPnP iteratively generates and refines pseudo-labels for unlabeled organs through collaboration between a trainable segmentation network (specialist) and a frozen foundation model (generalist), progressively recovering full-organ supervision. On the public AMOS dataset under a simulated partial-label setting, IPnP consistently improves segmentation performance over prior methods and approaches the fully labeled reference. We further evaluate IPnP on a private, partially labeled dataset of 210 head-and-neck cancer patients, demonstrating its effectiveness in real-world clinical settings.
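The specialist/generalist collaboration described above can be sketched as a simple loop: the specialist proposes a coarse mask for each unannotated organ, the mask is converted into a prompt (here a foreground centroid, one common choice), and the frozen generalist refines it into a pseudo-label. This is a minimal toy illustration, not the authors' implementation: `toy_specialist`, `toy_generalist`, and the organ list are hypothetical stand-ins, and the retraining step is only indicated in a comment.

```python
import numpy as np

ALL_ORGANS = ["liver", "kidney", "spleen"]  # hypothetical organ vocabulary

def mask_to_point_prompt(mask):
    # Derive a point prompt from a coarse mask (foreground centroid).
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return (int(ys.mean()), int(xs.mean()))

def toy_specialist(image, organ):
    # Stand-in for the trainable network: thresholds the image around an
    # organ-specific intensity to produce a coarse proposal.
    level = {"liver": 0.3, "kidney": 0.6, "spleen": 0.9}[organ]
    return (np.abs(image - level) < 0.15).astype(np.uint8)

def toy_generalist(image, prompt):
    # Stand-in for the frozen foundation model: segments the region whose
    # intensity matches the prompted point.
    if prompt is None:
        return np.zeros_like(image, dtype=np.uint8)
    y, x = prompt
    return (np.abs(image - image[y, x]) < 0.1).astype(np.uint8)

def iterative_prompt_pseudo_label(image, partial_labels, n_iters=2):
    labels = dict(partial_labels)  # annotated organs are kept as-is
    for _ in range(n_iters):
        for organ in ALL_ORGANS:
            if organ in partial_labels:
                continue  # never overwrite real annotations
            coarse = toy_specialist(image, organ)           # specialist proposes
            prompt = mask_to_point_prompt(coarse)           # mask -> point prompt
            labels[organ] = toy_generalist(image, prompt)   # generalist refines
            # In the real framework, the specialist would now be retrained on
            # the union of annotations and refined pseudo-labels, so later
            # iterations start from progressively better proposals.
    return labels
```

The key design point the sketch captures is the division of labor: only the specialist is trainable, while the generalist contributes zero-shot refinement through prompts, so full-organ supervision is recovered without ever fine-tuning the foundation model.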