MedReason-R1: Learning to Reason for CT Diagnosis with Reinforcement Learning and Local Zoom

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
General-purpose vision-language models (VLMs) underperform in medical image diagnosis due to scarce high-quality medical visual question answering (VQA) data and insufficient modeling of the clinically essential “coarse-to-fine” diagnostic reasoning process. Method: We introduce CT-RATE-VQA, a large-scale medical CT VQA dataset comprising 84K question-answer pairs that jointly encode global anatomical localization and local lesion characteristics, explicitly supporting multi-level diagnostic reasoning. We further propose region-zoom embeddings and a GRPO-based reinforcement learning framework to optimize fine-grained vision–language alignment without manual pixel-level annotations. Contribution/Results: Our approach achieves state-of-the-art performance on CT disease diagnosis tasks, significantly outperforming both general-purpose and existing medical VLMs. It demonstrates strong generalization across diverse anatomical regions and pathological conditions, validating the efficacy of coarse-to-fine reasoning and annotation-efficient alignment in medical VLMs.

📝 Abstract
General-purpose large Vision-Language Models (VLMs) demonstrate strong capabilities in generating detailed descriptions for natural images. However, their performance in the medical domain remains suboptimal, even for relatively straightforward tasks, primarily due to the lack of large-scale, high-quality, specialized medical imaging datasets and the neglect of the diagnostic process that progresses from coarse to fine-grained. To address the first issue, we construct the CT-RATE-VQA dataset, comprising 84K QA pairs. For the second issue, we propose MedReason-R1, a medical VLM with an explicit reasoning process for disease diagnosis. MedReason-R1 incorporates a novel strategy that embeds zoomed-in disease region-of-interest areas into the image, highlighting the crucial role of both global localization and disease-specific details in enhancing the model's diagnostic performance. Furthermore, we introduce the GRPO reinforcement learning framework to MedReason-R1, which enables effective reasoning without relying on costly manual annotations. Compared to recent general-purpose and medical VLMs, MedReason-R1 achieves state-of-the-art performance in CT disease diagnosis while retaining generalization. The code, checkpoints, and dataset are available at: https://github.com/Leevan001/MedReason-R1
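The abstract's zoom-in strategy (embedding a magnified lesion region back into the CT slice so the model sees both global anatomy and local detail) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the bounding-box format `(y0, x0, y1, x1)`, the corner placement, and the nearest-neighbour upsampling are all assumptions made for the sketch.

```python
import numpy as np

def embed_zoomed_roi(image, bbox, zoom=2, corner="bottom_right"):
    """Crop a region of interest, magnify it, and paste it into a
    corner of the image, combining global context and lesion detail
    in a single input. bbox = (y0, x0, y1, x1); format is assumed."""
    y0, x0, y1, x1 = bbox
    roi = image[y0:y1, x0:x1]
    # Nearest-neighbour upsampling via np.repeat keeps the sketch
    # dependency-free; a real pipeline would use proper interpolation.
    zoomed = roi.repeat(zoom, axis=0).repeat(zoom, axis=1)
    h, w = zoomed.shape[:2]
    out = image.copy()
    if corner == "bottom_right":
        out[-h:, -w:] = zoomed
    else:  # top_left
        out[:h, :w] = zoomed
    return out

# Synthetic single-channel CT slice with a bright "lesion".
ct = np.zeros((512, 512), dtype=np.uint8)
ct[200:240, 300:350] = 255
augmented = embed_zoomed_roi(ct, (200, 300, 240, 350), zoom=3)
```

The pasted inset occupies the image corner, so the same tensor can be fed to the VLM without changing its input interface.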
Problem

Research questions and friction points this paper is trying to address.

Enhancing medical VLM diagnostic accuracy for CT images
Addressing coarse-to-fine diagnostic process neglect in medical imaging
Reducing reliance on costly manual annotations through reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding zoom-in disease areas into CT images
Using GRPO reinforcement learning for reasoning
Constructing CT-RATE-VQA dataset with 84K QA pairs
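The GRPO objective cited above scores each sampled response relative to its group, replacing a learned value critic with group-normalized rewards. A minimal sketch of that advantage computation, assuming scalar rewards per sampled diagnosis (the reward design itself is the paper's, not shown here):

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as used in GRPO: each sampled
    response's reward is normalized by the mean and std of its
    group, so no separate value network is required."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Rewards for four sampled answers to the same CT question
# (e.g. 1.0 for a correct diagnosis, 0.0 otherwise).
adv = grpo_advantages([1.0, 0.0, 1.0, 0.0])
```

Responses scoring above the group mean get positive advantages and are reinforced; below-mean responses are suppressed, which is what lets the framework train reasoning without per-step manual annotation.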
Yifan Li
School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230026, P.R. China; Center for Medical Imaging, Robotics, and Analytic Computing & LEarning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou 215123, P.R. China; Jiangsu Provincial Key Laboratory of Multimodal Digital Twin Technology, Suzhou, Jiangsu, 215123
Fenghe Tang
University of Science and Technology of China
Medical Image Analysis; Foundation Model
Yingtai Li
University of Science & Technology of China
Shaohua Kevin Zhou
Professor, USTC, FAIMBE, FIAMBE, FIEEE, FMICCAI, FNAI
Medical Image Computing; Computer Vision & Pattern Recognition; Machine & Deep Learning