🤖 AI Summary
This work addresses the limited ability of existing vision-language models to perform fine-grained, traceable evidence localization and verification in long clinical videos, which hinders interpretable clinical reasoning. To this end, we propose MedScope, the first framework to integrate tool-augmented reasoning into clinical video understanding through a coarse-to-fine, evidence-driven paradigm. MedScope iteratively retrieves, temporally localizes, and verifies critical visual evidence to support transparent and explainable decision-making. Our contributions include the introduction of ClinVideoSuite, a novel clinical video dataset; the design of Grounding-Aware Group Relative Policy Optimization (GA-GRPO), a reinforcement learning algorithm that explicitly incorporates grounding signals; and state-of-the-art performance across multiple benchmarks, significantly improving accuracy and trustworthiness in both in-domain and out-of-domain clinical scenarios.
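The retrieve-localize-verify loop described above can be sketched as a simple control flow. This is a hypothetical illustration only: `retrieve`, `localize`, and `verify` stand in for the model's tool calls, and none of these names or signatures come from the paper.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    start: float      # span start, in seconds
    end: float        # span end, in seconds
    verified: bool    # did verification confirm the span supports an answer?

def coarse_to_fine_reason(question, video, retrieve, localize, verify, max_steps=4):
    """Illustrative sketch of iterative evidence seeking (not the paper's API):
    retrieve a coarse candidate segment, localize a fine-grained temporal span
    within it, verify the span, and stop once verification succeeds."""
    evidence = []
    for _ in range(max_steps):
        segment = retrieve(question, video, evidence)  # coarse: candidate window
        span = localize(question, segment)             # fine: (start, end) in seconds
        ok = verify(question, video, span)             # check span supports an answer
        evidence.append(Evidence(span[0], span[1], verified=ok))
        if ok:
            break
    return evidence
```

Accumulating the evidence list, rather than keeping only the last span, is what makes the final prediction traceable: every intermediate localization attempt is retained for inspection.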
📝 Abstract
Long-form clinical videos are central to visual evidence-based decision-making, with growing importance for applications such as surgical robotics. However, current multimodal large language models typically process videos with passive sampling or weakly grounded inspection, which limits their ability to iteratively locate, verify, and justify predictions with temporally targeted evidence. To close this gap, we propose MedScope, a tool-using clinical video reasoning model that performs coarse-to-fine evidence seeking over long-form procedures. By interleaving intermediate reasoning with targeted tool calls and verification on retrieved observations, MedScope produces more accurate and trustworthy predictions that are explicitly grounded in temporally localized visual evidence. To address the lack of high-fidelity supervision, we build ClinVideoSuite, an evidence-centric, fine-grained clinical video suite. We then optimize MedScope with Grounding-Aware Group Relative Policy Optimization (GA-GRPO), which directly reinforces tool use with grounding-aligned rewards and evidence-weighted advantages. On full and fine-grained video understanding benchmarks, MedScope achieves state-of-the-art performance in both in-domain and out-of-domain evaluations. Our approach illuminates a path toward medical AI agents that can genuinely "think with videos" through tool-integrated reasoning. We will release our code, models, and data.
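One plausible reading of "grounding-aligned rewards and evidence-weighted advantages" can be sketched on top of standard GRPO, where each rollout's advantage is its reward normalized against the group. In this sketch, the grounding reward is a temporal IoU against a reference evidence span, mixed into the task reward, and the group-relative advantage is then scaled by that IoU. Everything here — the IoU choice, the mixing weight `alpha`, the multiplicative weighting — is an assumption for illustration, not the paper's actual formulation.

```python
import statistics

def temporal_iou(pred, ref):
    """IoU between two (start, end) time spans in seconds."""
    inter = max(0.0, min(pred[1], ref[1]) - max(pred[0], ref[0]))
    union = (pred[1] - pred[0]) + (ref[1] - ref[0]) - inter
    return inter / union if union > 0 else 0.0

def ga_grpo_advantages(task_rewards, pred_spans, ref_span, alpha=0.5, eps=1e-6):
    """Hypothetical GA-GRPO-style advantages for one group of rollouts.
    `alpha` mixes task reward with a grounding (IoU) reward; the final
    group-relative advantage is weighted by how well each rollout's
    evidence span overlaps the reference span."""
    ious = [temporal_iou(p, ref_span) for p in pred_spans]
    # Grounding-aligned reward: blend answer correctness with localization quality.
    rewards = [(1 - alpha) * r + alpha * g for r, g in zip(task_rewards, ious)]
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    # Standard group-relative normalization, as in GRPO.
    base = [(r - mu) / (sigma + eps) for r in rewards]
    # Evidence weighting: rollouts grounded in the right span carry more weight.
    return [a * g for a, g in zip(base, ious)]
```

Under this sketch, a rollout with a correct answer but no overlap with the reference evidence contributes little gradient signal, which is one way a training objective could discourage right-for-the-wrong-reason behavior.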