MedVR: Annotation-Free Medical Visual Reasoning via Agentic Reinforcement Learning

📅 2026-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of current medical vision-language models, which rely on purely text-based reasoning paradigms and struggle to perform fine-grained analysis grounded in visual evidence, often leading to visual hallucinations. To overcome this, the authors propose a reinforcement learning framework that requires no human-annotated intermediate reasoning steps. The approach combines two mechanisms: Entropy-guided Visual Regrounding (EVR), which uses model uncertainty to direct visual exploration, and Consensus-based Credit Assignment (CCA), which derives pseudo-supervision from agreement across sampled rollouts. By integrating agentic reinforcement learning, uncertainty-guided exploration, and multi-trajectory consensus, the method achieves substantial gains over existing approaches on multiple public medical visual question answering benchmarks, while enhancing model robustness and interpretability.
📝 Abstract
Medical Vision-Language Models (VLMs) hold immense promise for complex clinical tasks, but their reasoning capabilities are often constrained by text-only paradigms that fail to ground inferences in visual evidence. This limitation not only curtails performance on tasks requiring fine-grained visual analysis but also introduces risks of visual hallucination in safety-critical applications. To this end, we introduce MedVR, a novel reinforcement learning framework that enables annotation-free visual reasoning for medical VLMs. Its core innovation lies in two synergistic mechanisms: Entropy-guided Visual Regrounding (EVR) uses model uncertainty to direct exploration, while Consensus-based Credit Assignment (CCA) distills pseudo-supervision from rollout agreement. Without any human annotations for intermediate steps, MedVR achieves state-of-the-art performance on diverse public medical VQA benchmarks, significantly outperforming existing models. By learning to reason directly with visual evidence, MedVR promotes the robustness and transparency essential for accelerating the clinical deployment of medical AI.
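The abstract only sketches the two mechanisms at a high level; the paper's actual formulations are not given here. As a rough illustration of the underlying ideas only, the following toy sketch shows (a) an entropy-based trigger in the spirit of EVR, where a high-entropy answer distribution signals uncertainty and prompts the agent to revisit the image, and (b) a majority-vote consensus reward in the spirit of CCA, where rollouts agreeing with the consensus answer receive pseudo-supervision credit. All function names, the entropy threshold, and the majority-vote rule are assumptions for illustration, not the authors' method.

```python
from collections import Counter

import numpy as np


def entropy(probs):
    """Shannon entropy (nats) of a probability distribution."""
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())


def needs_regrounding(step_probs, threshold=1.0):
    """EVR-style trigger (sketch, threshold is an assumption):
    a high-entropy answer distribution at a reasoning step marks
    uncertainty, directing the agent back to the image to gather
    more visual evidence before answering."""
    return entropy(step_probs) > threshold


def consensus_reward(rollout_answers):
    """CCA-style pseudo-supervision (sketch): reward each sampled
    rollout by its agreement with the majority answer across the
    group of trajectories, with no human labels involved."""
    majority, _ = Counter(rollout_answers).most_common(1)[0]
    return [1.0 if ans == majority else 0.0 for ans in rollout_answers]
```

For example, a near-uniform distribution over four answer options triggers regrounding, while a sharply peaked one does not; and in a group of rollouts answering ["A", "A", "B"], the two majority rollouts receive credit.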
Problem

Research questions and friction points this paper is trying to address.

Medical Vision-Language Models
Visual Reasoning
Visual Hallucination
Fine-grained Visual Analysis
Clinical Deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

annotation-free
visual reasoning
reinforcement learning
medical VLMs
pseudo-supervision
Authors

Zheng Jiang — Tsinghua University; DAMO Academy, Alibaba Group
Heng Guo — Alibaba
Chengyu Fang — Tsinghua University & Alibaba DAMO Academy (Computer Vision, Medical AI, Efficient MLLM)
Changchen Xiao — Zhejiang University
Xinyang Hu — Zhejiang University
Lifeng Sun — Tsinghua University; Key Laboratory of Pervasive Computing, Ministry of Education
Minfeng Xu — Alibaba