Retrieval-augmented Few-shot Medical Image Segmentation with Foundation Models

📅 2024-08-16
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Medical image segmentation is hindered by scarce annotated data and poor cross-modal generalization. Existing few-shot methods require target-domain training, while foundation models like SAM rely on parameter adaptation. This paper proposes a retrieval-augmented few-shot segmentation framework that requires no fine-tuning. It leverages DINOv2 to extract query-image features and retrieves semantically similar annotated cases from an external repository; the retrieved samples then condition SAM 2's memory attention mechanism to generate precise segmentations. Crucially, the method involves no parameter updates or modality-specific adaptation, and the authors present it as the first end-to-end integration of DINOv2-based retrieval with SAM 2's memory architecture. Evaluated on few-shot segmentation tasks across CT, MRI, and ultrasound modalities, it achieves state-of-the-art performance, improving clinical annotation efficiency and cross-modal robustness.

📝 Abstract
Medical image segmentation is crucial for clinical decision-making, but the scarcity of annotated data presents significant challenges. Few-shot segmentation (FSS) methods show promise but often require training on the target domain and struggle to generalize across different modalities. Similarly, adapting foundation models like the Segment Anything Model (SAM) for medical imaging has limitations, including the need for finetuning and domain-specific adaptation. To address these issues, we propose a novel method that adapts DINOv2 and Segment Anything Model 2 (SAM 2) for retrieval-augmented few-shot medical image segmentation. Our approach uses DINOv2 features as queries to retrieve similar samples from limited annotated data, which are then encoded as memories and stored in a memory bank. With the memory attention mechanism of SAM 2, the model leverages these memories as conditions to generate accurate segmentation of the target image. We evaluated our framework on three medical image segmentation tasks, demonstrating superior performance and generalizability across various modalities without the need for any retraining or finetuning. Overall, this method offers a practical and effective solution for few-shot medical image segmentation and holds significant potential as a valuable annotation tool in clinical applications.
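The retrieval step the abstract describes — ranking annotated support samples by similarity of their DINOv2 features to the query image's features — can be sketched as a simple top-k nearest-neighbor search under cosine similarity. This is a minimal illustration, not the authors' implementation: the function name `retrieve_top_k` and the use of precomputed feature vectors are assumptions for the sketch; in the paper the features come from DINOv2 and the retrieved samples are then encoded into SAM 2's memory bank.

```python
import numpy as np

def retrieve_top_k(query_feat, support_feats, k=3):
    # Normalize so that the dot product equals cosine similarity.
    q = query_feat / np.linalg.norm(query_feat)
    s = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    sims = s @ q  # cosine similarity of each support sample to the query
    # Indices of the k most similar annotated support samples,
    # whose masks would then condition SAM 2's memory attention.
    return np.argsort(-sims)[:k]

# Toy example with 2-D stand-in features (real DINOv2 embeddings
# are high-dimensional vectors per image).
query = np.array([1.0, 0.0])
support = np.array([[1.0, 0.0],   # identical direction -> most similar
                    [0.0, 1.0],   # orthogonal -> least similar
                    [0.9, 0.1]])  # nearly aligned -> second
print(retrieve_top_k(query, support, k=2))  # -> [0 2]
```

In the full pipeline, the masks of the retrieved samples would be encoded as memories and attended over by SAM 2 to segment the query image; that conditioning step depends on SAM 2 internals and is not sketched here.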
Problem

Research questions and friction points this paper is trying to address.

Addresses scarcity of annotated medical image data
Improves few-shot segmentation across diverse modalities
Adapts foundation models without retraining or finetuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses DINOv2 for feature-based query retrieval
Leverages SAM 2 memory attention mechanism
Requires no retraining or finetuning
Lin Zhao
United Imaging Intelligence, 65 Blue Sky Drive, Burlington, MA 01803, USA
Xiao Chen
United Imaging Intelligence, 65 Blue Sky Drive, Burlington, MA 01803, USA
Eric Z. Chen
United Imaging Intelligence, 65 Blue Sky Drive, Burlington, MA 01803, USA
Yikang Liu
Shanghai Jiao Tong University
Computational Linguistics
Terrence Chen
UII America, Inc.
Medical Imaging, Image-guided Interventions and Surgery, Artificial Intelligence, Computer Vision
Shanhui Sun
UII America, Inc.
Machine Learning, Computer Vision, Medical Imaging Processing, Medical Imaging and Virtual Reality