MedGRPO: Multi-Task Reinforcement Learning for Heterogeneous Medical Video Understanding

📅 2025-12-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large vision-language models (VLMs) face three key challenges in medical video understanding: imprecise spatial localization, weak temporal reasoning, and difficulty modeling clinical semantics; moreover, standard reinforcement learning suffers from training instability due to imbalanced reward scales across datasets. To address these issues, we first introduce MedVidBench, the first large-scale, multi-source medical video benchmark, curated with expert-guided prompting and dual-model validation, and then propose MedGRPO, a reinforcement learning framework built on Qwen2.5-VL-7B with two key components: (1) a cross-dataset reward normalization mechanism that stabilizes multi-task optimization by mapping each dataset's median performance to a common reward value, and (2) a medical LLM judge that evaluates caption quality on five clinically relevant dimensions via comparative similarity scoring. Experiments demonstrate that the MedVidBench-tuned model significantly outperforms GPT-4.1 and Gemini-2.5-Flash, and that MedGRPO achieves further gains over the supervised fine-tuning baseline on both grounding and captioning tasks, validating its effectiveness and robustness.

📝 Abstract
Large vision-language models struggle with medical video understanding, where spatial precision, temporal reasoning, and clinical semantics are critical. To address this, we first introduce MedVidBench, a large-scale benchmark of 531,850 video-instruction pairs across 8 medical sources spanning video, segment, and frame-level tasks, curated through a rigorous quality assurance pipeline with expert-guided prompting and dual-model validation. While supervised fine-tuning on MedVidBench yields noticeable gains, standard Reinforcement Learning (RL) fails due to imbalanced reward scales across datasets, which destabilizes optimization and leads to training collapse. To overcome this, we introduce MedGRPO, a novel RL framework for balanced multi-dataset training with two key innovations: (1) cross-dataset reward normalization that maps each dataset's median performance to a common reward value, ensuring fair optimization regardless of difficulty, and (2) a medical LLM judge that evaluates caption quality on five clinical dimensions through comparative similarity scoring. Supervised fine-tuning Qwen2.5-VL-7B on MedVidBench substantially outperforms GPT-4.1 and Gemini-2.5-Flash across all tasks, demonstrating MedVidBench's efficacy, while our MedGRPO framework further improves upon the SFT baseline across grounding and captioning tasks. Our work establishes a foundational benchmark and robust training methodology for advancing vision-language models in medical domains. Our project website is available at https://yuhaosu.github.io/MedGRPO/.
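The abstract states that cross-dataset reward normalization maps each dataset's median performance to a common reward value. A minimal sketch of one way this could work is shown below; the exact mapping (scaling rather than shifting), the target value of 0.5, and the function name are assumptions not taken from the paper.

```python
from statistics import median

def normalize_rewards(raw_rewards_by_dataset, target=0.5):
    """Map each dataset's median raw reward to a common target value.

    Hypothetical sketch: the paper only says the median is mapped to a
    shared reward value; multiplicative scaling is an assumption here.
    """
    normalized = {}
    for name, rewards in raw_rewards_by_dataset.items():
        med = median(rewards)
        if med > 0:
            # Scale so this dataset's median lands on the common target,
            # regardless of how easy or hard the dataset is.
            normalized[name] = [r * (target / med) for r in rewards]
        else:
            # Degenerate case (non-positive median): shift instead of scale.
            normalized[name] = [r - med + target for r in rewards]
    return normalized
```

Under this sketch, an "easy" dataset with median raw reward 0.8 and a "hard" one with median 0.2 both end up with median 0.5 after normalization, so neither dominates the policy gradient purely through reward scale.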
Problem

Research questions and friction points this paper is trying to address.

Medical video understanding requires spatial precision, temporal reasoning, and clinical semantics.
Standard reinforcement learning fails due to imbalanced reward scales across datasets.
Vision-language models struggle with heterogeneous medical video tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-dataset reward normalization for balanced optimization
Medical LLM judge for clinical caption evaluation
Large-scale benchmark MedVidBench with expert validation
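As the name suggests, MedGRPO builds on GRPO-style reinforcement learning, in which normalized per-sample rewards are converted into group-relative advantages. A minimal sketch of the standard GRPO advantage computation is given below; this is the generic formulation, not code from the paper, and the epsilon term is an assumption for numerical stability.

```python
def group_relative_advantages(rewards, eps=1e-8):
    """Standard GRPO-style advantages for one group of sampled responses.

    Each response's advantage is its reward relative to the group mean,
    scaled by the group's standard deviation.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    # eps guards against division by zero when all rewards are equal.
    return [(r - mean) / (std + eps) for r in rewards]
```

In a multi-dataset setting, feeding dataset-normalized rewards (rather than raw, dataset-specific scores) into this step is what keeps any one dataset's reward scale from destabilizing training.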