🤖 AI Summary
Current medical multimodal large language models (MLLMs) rely on explicit spatial annotations for region-of-interest (ROI) localization, limiting their applicability to the implicit queries that are prevalent in clinical practice. To address this, we propose the Unified Medical Reasoning Grounding (UMRG) task and introduce MedReasoner, the first framework to decouple reasoning from segmentation for implicit queries. Our method applies reinforcement learning to medical vision-language grounding through a dual reward mechanism that jointly optimizes output-format fidelity and localization accuracy. Trained on U-MRG-14K, a dataset of 14K high-quality reasoning trajectories, our approach achieves end-to-end ROI localization without direct pixel-level supervision of the reasoner. It employs a frozen segmentation expert as the localizer and an MLLM as the reasoning engine, substantially improving generalization to unseen clinical queries. Evaluated across multiple metrics, our method establishes new state-of-the-art performance, enhancing both accuracy and interpretability in medical image understanding.
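The dual reward mechanism can be sketched as a weighted combination of a format check and a localization-accuracy term. This is a minimal illustrative sketch, not the paper's implementation: the `<think>`/`<answer>` tag format, the reward weights, and the use of box IoU as the accuracy signal are all assumptions.

```python
import re

def format_reward(response: str) -> float:
    """Hypothetical format reward: 1.0 if the model wraps its reasoning
    and spatial prompt in the expected tags, else 0.0."""
    pattern = r"<think>.*?</think>\s*<answer>.*?</answer>"
    return 1.0 if re.fullmatch(pattern, response.strip(), re.DOTALL) else 0.0

def iou(box_a, box_b) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def dual_reward(response, pred_box, gt_box, w_fmt=0.5, w_acc=0.5) -> float:
    """Weighted sum of format fidelity and localization accuracy;
    the weights here are placeholders."""
    return w_fmt * format_reward(response) + w_acc * iou(pred_box, gt_box)
```

In an RL loop (e.g. a policy-gradient method), this scalar would score each sampled MLLM response before updating the reasoner, while the segmentation expert stays frozen.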
📝 Abstract
Accurately grounding regions of interest (ROIs) is critical for diagnosis and treatment planning in medical imaging. While multimodal large language models (MLLMs) combine visual perception with natural language, current medical-grounding pipelines still rely on supervised fine-tuning with explicit spatial hints, making them ill-equipped to handle the implicit queries common in clinical practice. This work makes three core contributions. We first define Unified Medical Reasoning Grounding (UMRG), a novel vision-language task that demands clinical reasoning and pixel-level grounding. Second, we release U-MRG-14K, a dataset of 14K samples featuring pixel-level masks alongside implicit clinical queries and reasoning traces, spanning 10 modalities, 15 super-categories, and 108 specific categories. Finally, we introduce MedReasoner, a modular framework that distinctly separates reasoning from segmentation: an MLLM reasoner is optimized with reinforcement learning, while a frozen segmentation expert converts spatial prompts into masks, with alignment achieved through format and accuracy rewards. MedReasoner achieves state-of-the-art performance on U-MRG-14K and demonstrates strong generalization to unseen clinical queries, underscoring the significant promise of reinforcement learning for interpretable medical grounding.