🤖 AI Summary
Medical image segmentation is hindered by scarce annotated data and poor cross-modal generalization. Existing few-shot methods require target-domain fine-tuning, while foundational models like SAM rely on parameter adaptation. This paper proposes the first zero-fine-tuning retrieval-augmented few-shot segmentation framework. It leverages DINOv2 to extract query image features and retrieves semantically similar annotated cases from an external repository; the retrieved samples then condition SAM 2’s memory attention mechanism to generate precise segmentations. Crucially, the method involves no parameter updates or modality-specific adaptation, establishing the first end-to-end integration of DINOv2-based retrieval and SAM 2’s memory architecture. Evaluated on few-shot segmentation tasks across CT, MRI, and ultrasound modalities, it achieves state-of-the-art performance, significantly improving clinical annotation efficiency and cross-modal robustness.
📝 Abstract
Medical image segmentation is crucial for clinical decision-making, but the scarcity of annotated data presents significant challenges. Few-shot segmentation (FSS) methods show promise but often require training on the target domain and struggle to generalize across different modalities. Similarly, adapting foundation models like the Segment Anything Model (SAM) for medical imaging has limitations, including the need for finetuning and domain-specific adaptation. To address these issues, we propose a novel method that adapts DINOv2 and Segment Anything Model 2 (SAM 2) for retrieval-augmented few-shot medical image segmentation. Our approach uses DINOv2's feature as query to retrieve similar samples from limited annotated data, which are then encoded as memories and stored in memory bank. With the memory attention mechanism of SAM 2, the model leverages these memories as conditions to generate accurate segmentation of the target image. We evaluated our framework on three medical image segmentation tasks, demonstrating superior performance and generalizability across various modalities without the need for any retraining or finetuning. Overall, this method offers a practical and effective solution for few-shot medical image segmentation and holds significant potential as a valuable annotation tool in clinical applications.