🤖 AI Summary
This work addresses a key limitation of current medical vision-language models: their reliance on purely text-based reasoning paradigms leaves them unable to ground fine-grained analysis in visual evidence, often producing visual hallucinations. To overcome this, the authors propose an agent-based reinforcement learning framework that requires no human-annotated intermediate reasoning steps. The approach combines Entropy-guided Visual Regrounding (EVR), which uses model uncertainty to direct visual exploration, with Consensus-based Credit Assignment (CCA), which distills pseudo-supervision from agreement across multiple rollout trajectories. The method achieves substantial performance gains over existing approaches on multiple public medical visual question answering benchmarks, while significantly enhancing model robustness and interpretability.
📝 Abstract
Medical Vision-Language Models (VLMs) hold immense promise for complex clinical tasks, but their reasoning capabilities are often constrained by text-only paradigms that fail to ground inferences in visual evidence. This limitation not only curtails performance on tasks requiring fine-grained visual analysis but also introduces risks of visual hallucination in safety-critical applications. To address this, we introduce MedVR, a novel reinforcement learning framework that enables annotation-free visual reasoning for medical VLMs. Its core innovation lies in two synergistic mechanisms: Entropy-guided Visual Regrounding (EVR) uses model uncertainty to direct exploration, while Consensus-based Credit Assignment (CCA) distills pseudo-supervision from rollout agreement. Without any human annotations for intermediate steps, MedVR achieves state-of-the-art performance on diverse public medical VQA benchmarks, significantly outperforming existing models. By learning to reason directly with visual evidence, MedVR promotes the robustness and transparency essential for accelerating the clinical deployment of medical AI.
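The abstract does not give formulas for EVR or CCA, but the two ideas can be sketched in miniature: an entropy trigger that flags when the model's predictive distribution is uncertain enough to warrant re-attending to the image, and a consensus reward that treats agreement across sampled rollouts as pseudo-supervision. The function names, threshold, and 0/1 reward scheme below are illustrative assumptions, not the paper's actual implementation.

```python
import math
from collections import Counter

def token_entropy(probs):
    """Shannon entropy (in nats) of a predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def needs_regrounding(probs, threshold=1.0):
    """EVR-style trigger (hypothetical): high uncertainty over the next
    token is taken as a signal to re-ground reasoning in the image."""
    return token_entropy(probs) > threshold

def consensus_rewards(rollout_answers):
    """CCA-style pseudo-supervision (hypothetical): rollouts whose final
    answer matches the majority vote get reward 1.0, others 0.0."""
    majority, _ = Counter(rollout_answers).most_common(1)[0]
    return [1.0 if a == majority else 0.0 for a in rollout_answers]

# A near-uniform distribution over four tokens is high-entropy and
# would trigger regrounding; a peaked one would not.
print(needs_regrounding([0.25, 0.25, 0.25, 0.25]))   # True
print(needs_regrounding([0.97, 0.01, 0.01, 0.01]))   # False
print(consensus_rewards(["A", "A", "B", "A"]))       # [1.0, 1.0, 0.0, 1.0]
```

In a full system, the consensus reward would feed a policy-gradient update over the reasoning trajectories, which is how rollout agreement substitutes for human-annotated intermediate steps.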