🤖 AI Summary
To address the scarcity of high-quality annotated data and the high cost of manual annotation in medical image segmentation, this paper proposes a label-efficient multi-task semi-supervised framework. Methodologically, we build upon the Segment Anything Model (SAM) architecture and introduce a novel gradient feedback mechanism to establish a learnable coupling between the segmentation and auxiliary classification branches; class activation maps (CAMs) generated by the classifier guide segmentation training, while multi-task learning and consistency regularization jointly optimize the semi-supervised objective. Our key contribution is the first integration of CAM-driven attention guidance with gradient-based collaborative optimization, significantly enhancing domain adaptability under extremely low labeling ratios. Evaluated on real-world clinical rehabilitation imaging data, our method achieves a 12.7% Dice score improvement over fully supervised and state-of-the-art semi-supervised baselines, maintaining high robustness even with only 10% labeled data.
📝 Abstract
Medical image segmentation is a key task in the imaging workflow, influencing many image-based decisions. Traditional, fully-supervised segmentation models rely on large amounts of labeled training data, typically obtained through manual annotation, which can be an expensive, time-consuming, and error-prone process. This signals a need for accurate, automatic, and annotation-efficient methods of training these models. We propose ADA-SAM (Automated, Domain-specific, and Adaptive Segment Anything Model), a novel multi-task learning framework for medical image segmentation that leverages class activation maps from an auxiliary classifier to guide the predictions of the semi-supervised segmentation branch, which is based on the Segment Anything (SAM) framework. Additionally, our ADA-SAM model employs a novel gradient feedback mechanism to create a learnable connection between the segmentation and classification branches by using the segmentation gradients to guide and improve the classification predictions. We validate ADA-SAM on real-world clinical data collected during rehabilitation trials, and demonstrate that our proposed method outperforms both fully-supervised and semi-supervised baselines by double digits in limited label settings. Our code is available at: https://github.com/tbwa233/ADA-SAM.
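To make the CAM-guidance idea described above concrete, here is a minimal NumPy sketch of how a classifier's class activation map could serve as a soft spatial prior on segmentation logits. This is an illustrative assumption, not the authors' exact formulation: the function names, the ReLU-and-normalize CAM recipe, and the multiplicative reweighting are all choices made for this sketch.

```python
import numpy as np

def class_activation_map(features, class_weights):
    """Compute a CAM as the class-weighted sum of feature maps,
    ReLU-clipped and normalized to [0, 1].

    features: (C, H, W) feature maps from the classifier backbone
    class_weights: (C,) final-layer weights for the target class
    """
    cam = np.tensordot(class_weights, features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)  # keep only positively contributing regions
    if cam.max() > 0:
        cam = cam / cam.max()   # scale to [0, 1]
    return cam

def cam_guided_logits(seg_logits, cam, strength=0.5):
    """Reweight segmentation logits with the CAM as a soft spatial prior,
    boosting regions the classifier attends to (hypothetical coupling)."""
    return seg_logits * (1.0 + strength * cam)

# Toy example with random features standing in for backbone activations.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))   # C=8 channels, 4x4 spatial grid
w = rng.standard_normal(8)               # classifier weights for one class
cam = class_activation_map(feats, w)
guided = cam_guided_logits(rng.standard_normal((4, 4)), cam)
```

In a semi-supervised setting, such a prior would apply to unlabeled images as well, letting the classifier's localization signal shape the segmentation branch where no masks exist; the learnable gradient feedback in the paper goes further by also routing segmentation gradients back into the classifier.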