🤖 AI Summary
Existing Surgical-VQLA models exhibit weak reasoning capabilities and poor interpretability in surgical scenarios, hindering clinical deployment. To address this, we propose Surgery-R1, the first multimodal large language model integrating Chain-of-Thought (CoT) reasoning with reinforcement learning for surgical visual question-answering and localization. We construct a dedicated dataset, Surgery-R1-54k, and design a multimodal consistency reward mechanism to suppress localization hallucinations. Our training paradigm comprises two stages, supervised fine-tuning and reinforcement fine-tuning, leveraging three complementary data types: Visual-QA pairs, Grounding-QA pairs, and CoT-annotated reasoning traces. On the Surgical-VQLA benchmark, Surgery-R1 achieves significant improvements in both reasoning accuracy and answer interpretability, demonstrating enhanced clinical applicability and technical advancement over prior approaches.
📝 Abstract
In recent years, significant progress has been made in the field of surgical scene understanding, particularly in the task of Visual Question Localized-Answering in robotic surgery (Surgical-VQLA). However, existing Surgical-VQLA models lack deep reasoning capabilities and interpretability in surgical scenes, which limits their reliability and development potential in clinical applications. To address this issue, inspired by the development of Reasoning Multimodal Large Language Models (MLLMs), we first build the Surgery-R1-54k dataset, which includes paired data for Visual-QA, Grounding-QA, and Chain-of-Thought (CoT) reasoning. We then propose the first Reasoning MLLM for Surgical-VQLA (Surgery-R1). In Surgery-R1, we design a two-stage fine-tuning mechanism that equips the base MLLM with complex reasoning abilities through supervised fine-tuning (SFT) followed by reinforcement fine-tuning (RFT). Furthermore, to build an efficient and high-quality rule-based reward system for RFT, we design a Multimodal Coherence reward mechanism that mitigates positional hallucinations, which may arise in surgical scenarios. Experimental results demonstrate that Surgery-R1 outperforms existing state-of-the-art (SOTA) models on the Surgical-VQLA task as well as widely-used MLLMs, while also validating its reasoning capabilities and the effectiveness of our approach. The code and dataset will be released at https://github.com/FiFi-HAO467/Surgery-R1.
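To make the rule-based reward idea concrete, the sketch below shows one plausible form of a multimodal consistency reward that penalizes positional hallucinations by combining answer correctness with bounding-box IoU. The function names, the weighted-sum formulation, and the `w_loc` weight are illustrative assumptions, not the paper's exact reward design.

```python
# Illustrative sketch (assumed, not the paper's exact formulation): a
# rule-based reward that scores both the textual answer and the predicted
# localization, so a box that drifts from the reference region (a
# "positional hallucination") receives a low reward even if the answer
# text is correct.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def consistency_reward(answer_correct, pred_box, ref_box, w_loc=0.5):
    """Weighted combination of an answer-correctness term and a
    localization term; w_loc (hypothetical) balances the two."""
    r_answer = 1.0 if answer_correct else 0.0
    r_loc = iou(pred_box, ref_box)
    return (1.0 - w_loc) * r_answer + w_loc * r_loc
```

In an RFT loop such as GRPO-style training, a scalar reward of this shape would be computed per sampled response and used to rank or advantage-weight the model's rollouts.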