Surgery-R1: Advancing Surgical-VQLA with Reasoning Multimodal Large Language Model via Reinforcement Learning

📅 2025-06-24
🤖 AI Summary
Existing Surgical-VQLA models exhibit weak reasoning capabilities and poor interpretability in surgical scenarios, hindering clinical deployment. To address this, we propose Surgery-R1, the first multimodal large language model that integrates Chain-of-Thought (CoT) reasoning with reinforcement learning for surgical visual question answering and localization. We construct a dedicated dataset, Surgery-R1-54k, and design a Multimodal Coherence reward mechanism to suppress localization hallucinations. Our training paradigm comprises two stages, supervised fine-tuning followed by reinforcement fine-tuning, and leverages three complementary data types: Visual-QA pairs, Grounding-QA pairs, and CoT reasoning annotations. On the Surgical-VQLA task, Surgery-R1 achieves significant improvements in both reasoning accuracy and answer interpretability, demonstrating enhanced clinical applicability over prior approaches.

📝 Abstract
In recent years, significant progress has been made in the field of surgical scene understanding, particularly in the task of Visual Question Localized-Answering in robotic surgery (Surgical-VQLA). However, existing Surgical-VQLA models lack deep reasoning capabilities and interpretability in surgical scenes, which limits their reliability and potential for development in clinical applications. To address this issue, inspired by the development of Reasoning Multimodal Large Language Models (MLLMs), we first build the Surgery-R1-54k dataset, including paired data for Visual-QA, Grounding-QA, and Chain-of-Thought (CoT). Then, we propose the first Reasoning MLLM for Surgical-VQLA (Surgery-R1). In our Surgery-R1, we design a two-stage fine-tuning mechanism to endow the base MLLM with complex reasoning abilities by utilizing supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT). Furthermore, for an efficient and high-quality rule-based reward system in our RFT, we design a Multimodal Coherence reward mechanism to mitigate positional hallucinations that may arise in surgical scenarios. Experiment results demonstrate that Surgery-R1 outperforms existing state-of-the-art (SOTA) models on the Surgical-VQLA task as well as widely used MLLMs, while also validating its reasoning capabilities and the effectiveness of our approach. The code and dataset will be made available at https://github.com/FiFi-HAO467/Surgery-R1.
Problem

Research questions and friction points this paper is trying to address.

Enhancing surgical scene understanding with reasoning capabilities
Addressing lack of interpretability in Surgical-VQLA models
Improving reliability for clinical applications via MLLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage fine-tuning with SFT and RFT
Multimodal Coherence reward mechanism
Surgery-R1-54k dataset for diverse QA tasks
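The paper does not publish the exact reward formulas, but RFT with a rule-based reward system of this kind is typically a weighted combination of a format check, an answer-accuracy check, and a localization term (here sketched as IoU between the predicted and ground-truth boxes, standing in loosely for the Multimodal Coherence mechanism). The sketch below is a minimal illustration under those assumptions; the tag names, box format, and weights are hypothetical, not taken from the paper.

```python
import re

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0

def rule_based_reward(completion, gt_answer, gt_box):
    """Hypothetical composite reward: format + answer accuracy + localization IoU."""
    reward = 0.0
    # Format reward: the rollout must contain a reasoning trace and a final answer.
    if re.search(r"<think>.*?</think>\s*<answer>.*?</answer>", completion, re.S):
        reward += 0.5
    # Answer reward: exact match against the ground-truth answer string.
    m = re.search(r"<answer>(.*?)</answer>", completion, re.S)
    if m and m.group(1).strip().lower() == gt_answer.lower():
        reward += 1.0
    # Localization reward: IoU between the predicted box (written as
    # "[x1, y1, x2, y2]" in the rollout) and the ground-truth box.
    b = re.search(r"\[([\d.]+),\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)\]", completion)
    if b:
        pred = tuple(float(v) for v in b.groups())
        reward += iou(pred, gt_box)
    return reward

completion = "<think>The grasper tip is visible at the left.</think> " \
             "<answer>grasper</answer> [10, 10, 50, 50]"
print(rule_based_reward(completion, "grasper", (10, 10, 50, 50)))  # → 2.5
```

In an RFT loop (e.g. GRPO-style policy optimization), a scalar reward like this is computed per sampled rollout and used to weight policy-gradient updates; the coherence idea is that the answer text and the predicted box must agree with each other, which penalizes rollouts whose localization contradicts the stated answer.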
Pengfei Hao
Hong Kong University of Science and Technology (Guangzhou), China
Shuaibo Li
The Hong Kong University of Science and Technology (Guangzhou)
Computer Vision, Media Forensics, Generative Model, Medical Image Analysis
Hongqiu Wang
Hong Kong University of Science and Technology (Guangzhou)
AI for Healthcare, Label-efficient Learning, Multi-modal Learning, Fairness, MLLM
Zhizhuo Kou
Hong Kong University of Science and Technology, Hong Kong SAR
Junhang Zhang
Department of Thoracic Surgery, the Seventh Affiliated Hospital, Sun Yat-sen University, China
Guang Yang
Bioengineering/Imperial-X, Imperial College London, UK
Lei Zhu
ROAS Thrust, Hong Kong University of Science and Technology (Guangzhou), China, and Department of Electronic and Computer Engineering, Hong Kong SAR, China