SAMed-2: Selective Memory Enhanced Medical Segment Anything Model

📅 2025-07-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address key challenges in medical image segmentation, including modality diversity, substantial annotation noise, and catastrophic forgetting in continual learning, this paper proposes a general-purpose segmentation framework based on selective memory enhancement. Building upon SAM-2, the authors introduce a temporal adapter in the image encoder and a confidence-driven dynamic memory mechanism that maintains a bank of high-confidence features, enabling cross-task feature preservation and noise-robust modeling. The approach integrates visual spatio-temporal modeling with differentiable memory writing and retrieval. The model is trained on MedBank-100k, a large-scale, multi-task medical imaging dataset, and experiments demonstrate state-of-the-art performance on both internal benchmarks and ten external medical datasets, with notable gains in cross-modal generalization and continual-learning stability.
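The confidence-driven memory idea can be sketched as a small data structure: features are written only when their confidence clears a threshold (filtering annotation noise), and retrieval returns the stored entries most similar to a query. This is a minimal illustrative sketch, not the paper's actual implementation; the class name, threshold, eviction policy, and cosine-similarity retrieval are all assumptions.

```python
import math

class ConfidenceMemoryBank:
    """Illustrative confidence-driven memory bank (hypothetical sketch).

    Features are stored only when their confidence exceeds a threshold;
    retrieval ranks stored features by cosine similarity to a query.
    """

    def __init__(self, capacity=4, confidence_threshold=0.9):
        self.capacity = capacity
        self.threshold = confidence_threshold
        self.entries = []  # list of (confidence, feature) pairs

    def write(self, feature, confidence):
        # Reject low-certainty features: this is the noise filter.
        if confidence < self.threshold:
            return False
        self.entries.append((confidence, feature))
        # When over capacity, evict the least confident entry.
        if len(self.entries) > self.capacity:
            self.entries.sort(key=lambda e: e[0])
            self.entries.pop(0)
        return True

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(self, query, k=1):
        # Return the k stored features most similar to the query.
        ranked = sorted(self.entries,
                        key=lambda e: self._cosine(query, e[1]),
                        reverse=True)
        return [feat for _, feat in ranked[:k]]
```

In a segmentation setting, the "features" would be encoder embeddings and the retrieved entries would condition the mask decoder; here plain vectors stand in for both.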

📝 Abstract
Recent "segment anything" efforts show promise by learning from large-scale data, but adapting such models directly to medical images remains challenging due to the complexity of medical data, noisy annotations, and continual learning requirements across diverse modalities and anatomical structures. In this work, we propose SAMed-2, a new foundation model for medical image segmentation built upon the SAM-2 architecture. Specifically, we introduce a temporal adapter into the image encoder to capture image correlations and a confidence-driven memory mechanism to store high-certainty features for later retrieval. This memory-based strategy counters the pervasive noise in large-scale medical datasets and mitigates catastrophic forgetting when encountering new tasks or modalities. To train and evaluate SAMed-2, we curate MedBank-100k, a comprehensive dataset spanning seven imaging modalities and 21 medical segmentation tasks. Our experiments on both internal benchmarks and 10 external datasets demonstrate superior performance over state-of-the-art baselines in multi-task scenarios. The code is available at: https://github.com/ZhilingYan/Medical-SAM-Bench.
Problem

Research questions and friction points this paper is trying to address.

Adapting segment anything models to complex medical images
Addressing noisy annotations in large-scale medical datasets
Mitigating catastrophic forgetting in continual learning scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal adapter in the image encoder captures correlations across images
Confidence-driven memory bank stores high-certainty features for later retrieval
MedBank-100k dataset spans seven imaging modalities and 21 segmentation tasks
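The temporal-adapter bullet can be made concrete with a common adapter pattern: each frame's features are mixed with temporal context from earlier frames, then passed through a bottleneck (down-projection, ReLU, up-projection) with a residual connection. This is a hypothetical sketch under assumed design choices (running-mean context, 50/50 mixing, a single bottleneck layer), not SAMed-2's exact architecture.

```python
def matvec(W, x):
    # Multiply matrix W (list of rows) by vector x.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(x):
    return [max(0.0, v) for v in x]

def temporal_adapter(frames, W_down, W_up):
    """Illustrative temporal adapter (hypothetical sketch).

    Each frame's feature vector is mixed with a running mean over the
    sequence so far, then adapted through a bottleneck with a residual
    connection back to the original features.
    """
    out = []
    running = [0.0] * len(frames[0])
    for t, x in enumerate(frames, start=1):
        # Running mean over frames seen so far supplies temporal context.
        running = [(r * (t - 1) + v) / t for r, v in zip(running, x)]
        mixed = [0.5 * v + 0.5 * r for v, r in zip(x, running)]
        h = relu(matvec(W_down, mixed))  # bottleneck down-projection
        delta = matvec(W_up, h)          # up-projection to input dim
        out.append([v + d for v, d in zip(x, delta)])  # residual add
    return out
```

The residual form means the adapter only learns a correction to the frozen encoder features, which is why adapter-style tuning is cheap compared with full fine-tuning.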