MedSAM-Agent: Empowering Interactive Medical Image Segmentation with Multi-turn Agentic Reinforcement Learning

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing multimodal large language models in medical image segmentation, which typically rely on single-round interactions and lack process-level supervision, often producing redundant operations. The authors reformulate interactive segmentation as a multi-step autonomous decision-making process and propose an agentic reinforcement learning framework that integrates human-inspired heuristic priors, combining a hybrid prompting strategy with a process-level clinical plausibility reward. A two-stage end-to-end training pipeline first generates expert-level interaction trajectories and then optimizes decisions via reinforcement learning with verifiable rewards (RLVR), substantially improving interaction efficiency and conciseness. Evaluated across six imaging modalities and 21 datasets, the approach achieves state-of-the-art performance, unifying autonomous medical reasoning with robust iterative refinement.

📝 Abstract
Medical image segmentation is evolving from task-specific models toward generalizable frameworks. Recent research leverages Multi-modal Large Language Models (MLLMs) as autonomous agents, employing reinforcement learning with verifiable reward (RLVR) to orchestrate specialized tools like the Segment Anything Model (SAM). However, these approaches often rely on single-turn, rigid interaction strategies and lack process-level supervision during training, which hinders their ability to fully exploit the dynamic potential of interactive tools and leads to redundant actions. To bridge this gap, we propose MedSAM-Agent, a framework that reformulates interactive segmentation as a multi-step autonomous decision-making process. First, we introduce a hybrid prompting strategy for expert-curated trajectory generation, enabling the model to internalize human-like decision heuristics and adaptive refinement strategies. Furthermore, we develop a two-stage training pipeline that integrates multi-turn, end-to-end outcome verification with a clinical-fidelity process reward design to promote interaction parsimony and decision efficiency. Extensive experiments across 6 medical modalities and 21 datasets demonstrate that MedSAM-Agent achieves state-of-the-art performance, effectively unifying autonomous medical reasoning with robust, iterative optimization. Code is available \href{https://github.com/CUHK-AIM-Group/MedSAM-Agent}{here}.
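The multi-turn loop the abstract describes (an agent issuing prompts to a segmentation tool, verified by an outcome reward plus a process-level penalty that encourages interaction parsimony) can be sketched as below. This is a minimal illustration under assumed interfaces: names such as `run_episode`, `agent_policy`, `segmenter`, and `step_penalty` are hypothetical, not the authors' implementation or SAM's API.

```python
# Hedged sketch of a multi-turn agentic segmentation episode: the agent
# proposes prompt actions, a frozen segmenter refines the mask, and the
# trajectory is scored by a verifiable outcome reward (final Dice) minus
# a per-interaction process penalty (interaction parsimony).

def dice(pred, gt):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(p and g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2.0 * inter / total if total else 1.0

def run_episode(agent_policy, segmenter, image, gt_mask,
                max_turns=5, step_penalty=0.05):
    """Roll out one interactive episode and return (final_mask, reward).

    agent_policy(image, mask) -> action dict, or None to stop early.
    segmenter(image, mask, action) -> refined binary mask.
    """
    mask = [0] * len(gt_mask)          # start from an empty prediction
    turns = 0
    for _ in range(max_turns):
        action = agent_policy(image, mask)
        if action is None:             # agent judges the mask good enough
            break
        mask = segmenter(image, mask, action)
        turns += 1
    # Outcome verification minus a process-level cost per interaction,
    # rewarding concise trajectories over redundant refinement.
    reward = dice(mask, gt_mask) - step_penalty * turns
    return mask, reward
```

In an RLVR setting, `reward` would score sampled trajectories for policy optimization; here the process term simply subtracts a fixed cost per turn, so two agents reaching the same Dice are ranked by how few interactions they used.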
Problem

Research questions and friction points this paper is trying to address.

interactive medical image segmentation
multi-turn interaction
process-level supervision
redundant actions
autonomous agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-turn agentic reinforcement learning
interactive medical image segmentation
hybrid prompting
process reward
autonomous medical reasoning