🤖 AI Summary
To address SAM2's reliance on manual prompts and the domain shift from its natural-image pretraining, this paper proposes SAM2-SGP, a support-set-guided prompting framework for medical image segmentation. It automatically generates initial prompts via a pseudo-mask generation module and refines SAM2's visual features with a novel pseudo-mask attention mechanism; low-rank adaptation (LoRA) fine-tuning further mitigates domain shift. As a result, SAM2-SGP segments 2D and 3D medical images across modalities end to end, without any hand-crafted prompts. Extensive evaluations on major medical imaging benchmarks show that SAM2-SGP consistently outperforms state-of-the-art methods, including nnUNet, SwinUNet, the original SAM2, and MedSAM2, in segmentation accuracy.
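The LoRA fine-tuning mentioned above keeps the pretrained SAM2 weights frozen and learns only a low-rank correction. A minimal NumPy sketch of the generic LoRA parameterization (rank `r`, scale `alpha/r`, zero-initialized up-projection so training starts from the frozen model; the names `lora_delta`/`lora_forward` and all dimensions are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_delta(r, d_in, d_out, alpha=16.0):
    """Create the trainable low-rank factors of the update (alpha/r) * B @ A."""
    A = rng.normal(0.0, 0.02, size=(r, d_in))  # down-projection, randomly initialized
    B = np.zeros((d_out, r))                   # up-projection, zero-init => no change at step 0
    return A, B, alpha / r

def lora_forward(x, W, A, B, scale):
    """Frozen base projection W plus the scaled low-rank correction."""
    return x @ W.T + scale * (x @ A.T @ B.T)

# Toy frozen layer and input (dimensions chosen arbitrarily for the sketch).
d_in, d_out, r = 8, 8, 2
W = rng.normal(size=(d_out, d_in))
A, B, scale = lora_delta(r, d_in, d_out)
x = rng.normal(size=(1, d_in))
```

Because `B` starts at zero, the adapted layer initially reproduces the frozen model exactly; only `A` and `B` (2 * r * d parameters per layer instead of d * d) would be updated during fine-tuning.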
📝 Abstract
Although new vision foundation models such as Segment Anything Model 2 (SAM2) have significantly enhanced zero-shot image segmentation capabilities, their reliance on human-provided prompts poses a major challenge in adapting SAM2 to medical image segmentation tasks. Moreover, SAM2's performance on medical images was limited by domain shift, since it was originally trained on natural images and videos. To address these challenges, we proposed SAM2 with support-set guided prompting (SAM2-SGP), a framework that eliminated the need for manual prompts. The proposed model leveraged the memory mechanism of SAM2 to generate pseudo-masks from image-mask pairs in a support set via a Pseudo-mask Generation (PMG) module. We further introduced a novel Pseudo-mask Attention (PMA) module, which used these pseudo-masks to automatically generate bounding boxes and to enhance localized feature extraction by guiding attention to relevant areas. Furthermore, a low-rank adaptation (LoRA) strategy was adopted to mitigate the domain shift issue. The proposed framework was evaluated on both 2D and 3D datasets across multiple medical imaging modalities, including fundus photography, X-ray, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound. The results demonstrated a significant performance improvement over state-of-the-art models, such as nnUNet and SwinUNet, as well as foundation models, such as SAM2 and MedSAM2, underscoring the effectiveness of the proposed approach. Our code is publicly available at https://github.com/astlian9/SAM_Support.
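The PMA module's two roles, deriving a bounding-box prompt from a pseudo-mask and steering attention toward the masked region, can be illustrated schematically. This is a minimal sketch under assumed shapes (a 2D mask and an `(H, W, C)` feature map), not the paper's implementation; `bbox_from_mask`, `mask_guided_attention`, and the `boost` gating are hypothetical names and a simplified stand-in for the actual attention mechanism:

```python
import numpy as np

def bbox_from_mask(mask):
    """Smallest (r0, c0, r1, c1) box enclosing the nonzero pseudo-mask pixels."""
    rows = np.any(mask > 0, axis=1)
    cols = np.any(mask > 0, axis=0)
    if not rows.any():
        return None  # empty pseudo-mask: no box prompt can be derived
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return int(r0), int(c0), int(r1), int(c1)

def mask_guided_attention(feat, mask, boost=2.0):
    """Rescale spatial features so regions under the soft pseudo-mask dominate.

    feat: (H, W, C) feature map; mask: (H, W) pseudo-mask in [0, 1].
    Gate is 1 outside the mask and `boost` where the mask is 1.
    """
    gate = 1.0 + (boost - 1.0) * mask[..., None]
    return feat * gate
```

A pseudo-mask produced by the PMG stage would be fed through `bbox_from_mask` to obtain an automatic box prompt for SAM2, while the same mask re-weights the image features so that localized structures receive more attention.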