Shifting Adaptation from Weight Space to Memory Space: A Memory-Augmented Agent for Medical Image Segmentation

📅 2026-03-06
📈 Citations: 0 (Influential: 0)
🤖 AI Summary
This work addresses the limited generalization of medical image segmentation models across institutions, imaging devices, or patient populations, as well as the high communication overhead of conventional federated fine-tuning. To this end, the authors propose MemSeg-Agent, which pioneers a paradigm shift from parameter-based adaptation to memory-based operations. By leveraging static memory, few-shot memory, and test-time working memory—all while keeping the backbone network fixed—the method unifies few-shot learning, federated learning, and test-time adaptation without requiring model fine-tuning. This enables dynamic adaptation to new domains with substantially reduced communication costs. Experiments demonstrate that static memory alone matches or surpasses strong supervised baselines, and incorporating test-time memory further enhances both in-domain and cross-domain performance, achieving high parameter efficiency and robustness.
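The core idea — conditioning a frozen backbone on memory banks rather than updating its weights — can be illustrated with a minimal sketch. The function and variable names below (`condition_on_memory`, the residual cross-attention read) are hypothetical illustrations of memory-based conditioning in general, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def condition_on_memory(features, memory_keys, memory_values):
    """Cross-attention read: query frozen-backbone features against a memory bank.
    No backbone weights are touched; all adaptation lives in the memory entries."""
    attn = softmax(features @ memory_keys.T / np.sqrt(features.shape[-1]))
    return features + attn @ memory_values  # residual memory injection

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))       # per-pixel features from a fixed encoder
static_mem = rng.standard_normal((4, 8))  # a small static memory bank (keys = values here)
out = condition_on_memory(feats, static_mem, static_mem)
```

Adapting to a new domain then amounts to swapping or extending the memory bank (e.g. adding few-shot or test-time entries), while the backbone stays fixed.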

📝 Abstract
Medical image segmentation is fundamental to clinical workflows, yet models trained on a single dataset often fail to generalize across institutions, scanners, or patient populations. While vision foundation models have shown great promise in addressing this challenge, their deployment typically requires task-specific fine-tuning, which introduces substantial communication overhead in federated learning and prevents continuous knowledge evolution during deployment. In this work, we propose a memory-augmented segmentation agent (MemSeg-Agent) that shifts adaptation from weight space to memory space, enabling few-shot learning, federated supervised learning, and test-time adaptation within a unified architecture. MemSeg-Agent conditions a fixed backbone with lightweight static, few-shot, and test-time working memories, which are dynamically composed by an agentic controller. In federated settings, we update compact memory units instead of model parameters, substantially reducing communication overhead. Experiments on four public datasets demonstrate strong performance and robustness to domain shift: Static memory alone matches or surpasses strong supervised baselines with high parameter efficiency, and test-time working memory further improves in-domain and cross-domain performance without fine-tuning. Overall, MemSeg-Agent introduces a new paradigm for scalable and adaptive medical image segmentation in the era of agentic AI.
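The federated claim in the abstract — exchanging compact memory units instead of model parameters — can be sketched as follows. This is a toy illustration under the assumption that each site distills its data into prototype vectors; the function names are hypothetical and not from the paper:

```python
import numpy as np

def client_update(local_feats):
    """Each site summarizes its data as a small memory unit (here, one
    prototype vector), rather than fine-tuning and shipping full weights."""
    return local_feats.mean(axis=0, keepdims=True)

def server_aggregate(memory_units):
    """The server concatenates the compact units into a shared memory bank."""
    return np.concatenate(memory_units, axis=0)

rng = np.random.default_rng(1)
sites = [rng.standard_normal((100, 8)) for _ in range(3)]  # 3 simulated sites
bank = server_aggregate([client_update(s) for s in sites])
# Each round transmits one 8-dim vector per site instead of millions of
# backbone parameters, which is the source of the communication savings.
```

The same round-trip structure as FedAvg applies, but the payload is a memory bank whose size is independent of the backbone's parameter count.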
Problem

Research questions and friction points this paper is trying to address.

medical image segmentation
domain generalization
federated learning
communication overhead
model adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

memory-augmented agent
weight-free adaptation
federated learning
test-time adaptation
medical image segmentation
Bowen Chen
Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
Qiaohui Gao
College of Engineering, Northeastern University, Boston, MA 02115, USA
Shaowen Wan
Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA
Shanhui Sun
UII America, Inc.
Machine Learning, Computer Vision, Medical Imaging Processing, Medical Imaging and Virtual Reality
Wei Liu
Department of Radiation Oncology, Mayo Clinic, Scottsdale, AZ 85259, USA
Xiang Li
Assistant Professor, Massachusetts General Hospital and Harvard Medical School
Medical Foundation Model, Medical Informatics, Multi-modal Fusion, Causal Inference, Brain
Tianming Liu
Distinguished Research Professor of Computer Science, University of Georgia
Brain, Brain-Inspired AI, LLM, Artificial General Intelligence, Quantum AI
Lin Zhao
New Jersey Institute of Technology
Brain-inspired AI, Medical Image Analysis, Artificial General Intelligence