AI Summary
Surgical image segmentation faces two challenges: scarce annotations and poor out-of-distribution generalization; general-purpose models such as SAM also rely on manual prompts, which prevents full automation. This paper proposes CycleSAM, a one-shot automatic segmentation framework tailored to surgical scenes that generates class-aware point prompts to drive SAM without human intervention. Key contributions include: (1) a spatial cycle-consistency constraint that makes cross-image feature matching more robust; and (2) a surgery-specific self-supervised ResNet50 encoder that substantially narrows the domain gap while maintaining high label-efficiency. Evaluated on two diverse surgical datasets, the method reaches up to 50% of fully-supervised performance and consistently outperforms existing zero-shot and few-shot baselines, substantially improving both automation and cross-domain generalization.
Abstract
The recently introduced Segment-Anything Model (SAM) has the potential to greatly accelerate the development of segmentation models. However, directly applying SAM to surgical images has key limitations including (1) the requirement of image-specific prompts at test-time, thereby preventing fully automated segmentation, and (2) ineffectiveness due to the substantial domain gap between natural and surgical images. In this work, we propose CycleSAM, an approach for one-shot surgical scene segmentation that uses the training image-mask pair at test-time to automatically identify points in the test image that correspond to each object class, which can then be used to prompt SAM to produce object masks. To produce high-fidelity matches, we introduce a novel spatial cycle-consistency constraint that forces point proposals in the test image to rematch to points within the object foreground region in the training image. Then, to address the domain gap, rather than directly using the visual features from SAM, we employ a ResNet50 encoder pretrained on surgical images in a self-supervised fashion, thereby maintaining high label-efficiency. We evaluate CycleSAM for one-shot segmentation on two diverse surgical semantic segmentation datasets, comprehensively outperforming baseline approaches and reaching up to 50% of fully-supervised performance.
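The cycle-consistent matching step described above can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: it assumes per-pixel feature maps flattened to `(H*W, D)` and L2-normalized, and the function and parameter names (`cycle_consistent_points`, `k`) are hypothetical. Reference foreground points are matched forward into the test image, candidates are matched back, and only those whose rematch lands inside the reference foreground survive as point prompts.

```python
import numpy as np

def cycle_consistent_points(ref_feats, test_feats, ref_mask, k=5):
    """Hypothetical sketch of spatial cycle-consistency point matching.

    ref_feats:  (H*W, D) L2-normalized features of the reference (training) image
    test_feats: (H*W, D) L2-normalized features of the test image
    ref_mask:   (H*W,) boolean foreground mask for one object class
    Returns flat indices of test-image points that (a) best match reference
    foreground points and (b) rematch back into the foreground region.
    """
    fg_idx = np.flatnonzero(ref_mask)
    # forward match: each reference foreground point -> best test-image point
    sim_fwd = ref_feats[fg_idx] @ test_feats.T            # (F, H*W)
    fwd = sim_fwd.argmax(axis=1)                          # candidate test points
    # backward match: each candidate test point -> best reference point
    sim_bwd = test_feats[fwd] @ ref_feats.T               # (F, H*W)
    bwd = sim_bwd.argmax(axis=1)
    # keep only candidates whose rematch lands inside the reference foreground
    consistent = ref_mask[bwd]
    # rank surviving candidates by forward similarity; top-k become SAM prompts
    scores = sim_fwd[np.arange(len(fg_idx)), fwd]
    keep = np.flatnonzero(consistent)
    top = keep[np.argsort(-scores[keep])[:k]]
    return fwd[top]
```

The surviving flat indices would then be converted to `(x, y)` coordinates and passed to SAM as positive point prompts for that class.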