CycleSAM: One-Shot Surgical Scene Segmentation using Cycle-Consistent Feature Matching to Prompt SAM

πŸ“… 2024-07-09
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 3
✨ Influential: 0
πŸ€– AI Summary
Surgical image segmentation faces two challenges: scarce annotations and poor out-of-distribution generalization, while general-purpose models such as SAM rely on manual prompts at test-time, preventing full automation. This paper proposes a one-shot automatic segmentation framework tailored to surgical scenes that generates class-aware point prompts to drive SAM without human intervention. Key contributions: (1) a spatial cycle-consistency constraint that makes cross-image feature matching more robust; and (2) a surgery-specific self-supervised ResNet50 encoder that substantially reduces the domain gap while preserving annotation efficiency. Evaluated on two diverse surgical datasets, the method reaches up to 50% of fully supervised performance and consistently outperforms zero-shot and few-shot baselines, improving both automation and cross-domain generalization.

πŸ“ Abstract
The recently introduced Segment-Anything Model (SAM) has the potential to greatly accelerate the development of segmentation models. However, directly applying SAM to surgical images has key limitations including (1) the requirement of image-specific prompts at test-time, thereby preventing fully automated segmentation, and (2) ineffectiveness due to substantial domain gap between natural and surgical images. In this work, we propose CycleSAM, an approach for one-shot surgical scene segmentation that uses the training image-mask pair at test-time to automatically identify points in the test images that correspond to each object class, which can then be used to prompt SAM to produce object masks. To produce high-fidelity matches, we introduce a novel spatial cycle-consistency constraint that enforces point proposals in the test image to rematch to points within the object foreground region in the training image. Then, to address the domain gap, rather than directly using the visual features from SAM, we employ a ResNet50 encoder pretrained on surgical images in a self-supervised fashion, thereby maintaining high label-efficiency. We evaluate CycleSAM for one-shot segmentation on two diverse surgical semantic segmentation datasets, comprehensively outperforming baseline approaches and reaching up to 50% of fully-supervised performance.
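The spatial cycle-consistency idea in the abstract can be sketched concretely: match each annotated foreground point of the training image to its most similar location in the test image, then re-match that proposal back to the training image and keep it only if the cycle lands inside the foreground mask. The sketch below is a minimal NumPy illustration of that filtering step, not the paper's implementation; the flattened feature-map shapes and the `top_k` selection are assumptions.

```python
import numpy as np

def cycle_consistent_matches(feat_train, feat_test, fg_mask, top_k=5):
    """Sketch of spatial cycle-consistent point matching.

    Assumed shapes: feat_train, feat_test are (H*W, C) L2-normalised
    feature maps flattened over space; fg_mask is a boolean (H*W,)
    foreground mask for the training image's object class.
    Returns flat indices of retained point proposals in the test image.
    """
    sim = feat_train @ feat_test.T              # (H*W, H*W) cosine similarities
    fg_idx = np.flatnonzero(fg_mask)            # annotated foreground points
    # forward match: best test-image location for each foreground point
    fwd = sim[fg_idx].argmax(axis=1)
    # backward re-match: where does each test-image proposal map back to?
    bwd = sim[:, fwd].argmax(axis=0)
    # keep proposals whose cycle lands back inside the foreground region
    consistent = fg_mask[bwd]
    kept = fwd[consistent]
    scores = sim[fg_idx[consistent], kept]
    order = np.argsort(-scores)[:top_k]         # highest-similarity proposals
    return kept[order]
```

Proposals whose backward match falls in the background are discarded as likely spurious, which is what the paper means by "high-fidelity matches".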
Problem

Research questions and friction points this paper is trying to address.

Addressing surgical image segmentation with limited annotated data
Improving SAM's robustness for out-of-domain surgical images
Enhancing few-shot segmentation via cycle-consistent feature matching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages surgery-specific self-supervised feature extractors
Enforces consistency constraints for robust similarity maps
Uses parameter-efficient training for domain adaptation
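Once matched points are found on the feature map, they still have to be converted into image-space point prompts for SAM. A minimal sketch of that conversion, assuming flat feature-map indices and a simple center-of-cell rescaling (the exact mapping in the paper may differ):

```python
import numpy as np

def indices_to_point_prompts(flat_indices, feat_hw, image_hw):
    """Convert flat feature-map indices to (x, y) image-space point prompts.

    Assumptions: feat_hw = (fh, fw) is the feature-map resolution,
    image_hw = (ih, iw) the image resolution, and each index maps to
    the center of its feature cell scaled up to image coordinates.
    """
    fh, fw = feat_hw
    ih, iw = image_hw
    rows, cols = np.divmod(np.asarray(flat_indices), fw)
    xs = (cols + 0.5) * (iw / fw)
    ys = (rows + 0.5) * (ih / fh)
    point_coords = np.stack([xs, ys], axis=1)       # (N, 2) as (x, y)
    point_labels = np.ones(len(point_coords), int)  # 1 = foreground point
    return point_coords, point_labels
```

The resulting arrays have the same shape as the `point_coords` and `point_labels` arguments of `SamPredictor.predict` in the official `segment-anything` package, which would then produce the class mask.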
Aditya Murali
University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
Pietro Mascagni
Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Institute of Image Guided Surgery
Surgical Data Science, Surgical Education, Surgical Safety
Didier Mutter
Professor of Surgery, University Hospitals of Strasbourg
Surgery, Teaching, Computer Science
N. Padoy
University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France and IHU Strasbourg, Strasbourg, France