🤖 AI Summary
Medical image segmentation faces several challenges: the scarcity of annotated data, the high cost of expert annotations, and the reliance of semi-supervised approaches on unlabeled data. To address these, this paper proposes a few-shot segmentation method that requires only a small number of labeled samples. Specifically, we jointly model organ segmentation and boundary prediction within a U-Net architecture, introducing, for the first time, a lightweight boundary prediction task as an auxiliary head embedded directly into the backbone network without adding parameters or computational overhead. By combining multi-task learning, a boundary-aware loss, and prediction-consistency regularization, our approach strengthens the supervision signal and improves generalization. Crucially, it operates without any unlabeled data. On multiple few-shot medical segmentation benchmarks, our method matches or exceeds state-of-the-art semi-supervised methods while preserving the original model's parameter count and inference efficiency.
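To make the training objective concrete, the following is a minimal sketch of the multi-task idea described above: a boundary ground truth derived from the segmentation mask via a morphological gradient, a supervised loss for each head, and a consistency term tying the boundary head to the boundary implied by the segmentation prediction. All function names, loss weights, and the choice of binary cross-entropy and squared error are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def boundary_from_mask(mask):
    # Morphological gradient: a foreground pixel is on the boundary if at
    # least one of its 4-neighbors (or itself) is background after erosion.
    padded = np.pad(mask, 1, mode="edge")
    eroded = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],   # up / down neighbors
        padded[1:-1, :-2], padded[1:-1, 2:],   # left / right neighbors
        mask,
    ])
    return mask - eroded

def multitask_loss(seg_prob, bnd_prob, seg_gt,
                   w_seg=1.0, w_bnd=1.0, w_cons=1.0):
    # seg_prob, bnd_prob: per-pixel probabilities from the two heads.
    # seg_gt: binary ground-truth mask; boundary GT is derived from it,
    # so no extra annotation is needed.
    eps = 1e-7
    bnd_gt = boundary_from_mask(seg_gt)

    def bce(p, t):  # binary cross-entropy (illustrative choice)
        return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

    l_seg = bce(seg_prob, seg_gt)
    l_bnd = bce(bnd_prob, bnd_gt)

    # Consistency: the boundary of the thresholded segmentation prediction
    # should agree with the boundary head's prediction.
    seg_hard = (seg_prob > 0.5).astype(float)
    l_cons = np.mean((boundary_from_mask(seg_hard) - bnd_prob) ** 2)

    return w_seg * l_seg + w_bnd * l_bnd + w_cons * l_cons
```

Because the boundary target is computed from the existing mask, this auxiliary supervision comes for free from the labeled data, which is the property that lets the method avoid unlabeled images entirely.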
📝 Abstract
Obtaining large-scale medical data, annotated or unannotated, is challenging due to stringent privacy regulations and data protection policies. In addition, annotating medical images requires that domain experts manually delineate anatomical structures, making the process both time-consuming and costly. As a result, semi-supervised methods have gained popularity for reducing annotation costs. However, the performance of semi-supervised methods is heavily dependent on the availability of unannotated data, and their effectiveness declines when such data are scarce or absent. To overcome this limitation, we propose a simple yet effective and computationally efficient approach for medical image segmentation that leverages only existing annotations. We propose BoundarySeg, a multi-task framework that incorporates organ boundary prediction as an auxiliary task to full organ segmentation, leveraging consistency between the two task predictions to provide additional supervision. This strategy improves segmentation accuracy, especially in low-data regimes, allowing our method to achieve performance comparable to or exceeding state-of-the-art semi-supervised approaches, all without relying on unannotated data or increasing computational demands. Code will be released upon acceptance.