Surgical Action Planning with Large Language Models

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In robot-assisted minimally invasive surgery, the lack of intraoperative prospective action planning hinders real-time decision support. Method: This paper formalizes the Surgical Action Planning (SAP) task (predicting future surgical steps from streaming video) and proposes an end-to-end vision-language planning framework, LLM-SAP. It introduces a Near-History Focus Memory Module (NHF-MM) and a prompt factory to jointly model instrument-action mappings and track procedural progress, enabling zero-shot inference and efficient LoRA-based fine-tuning while preserving data privacy and deployment efficiency. The approach integrates Qwen2-VL's multimodal encoding, temporal memory mechanisms, and natural-language prompt engineering. Results: Evaluated on the newly constructed CholecT50-SAP dataset, the Qwen2.5-72B-SFT model achieves a 19.3% absolute accuracy improvement over zero-shot Qwen2.5-72B, providing empirical validation of large language models' efficacy and generalizability in prospective surgical decision-making.
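The summary highlights LoRA-based supervised fine-tuning as the adaptation strategy. The paper's actual adapter configuration is not given here; as a minimal, dependency-free sketch of the LoRA mechanism itself (a frozen base weight plus a learned low-rank update), with generic matrix shapes assumed for illustration:

```python
def matmul(X, Y):
    """Plain list-of-lists matrix product."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha=4.0):
    """LoRA freezes the base weight W (d_out x d_in) and trains only a
    low-rank update: A is (r x d_in), B is (d_out x r), and the
    effective weight is W + (alpha / r) * (B @ A).
    Only A and B are updated during fine-tuning."""
    r = len(A)  # rank of the low-rank update
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```

Because the rank r is far smaller than the weight dimensions, the trainable parameter count drops by orders of magnitude, which is what makes fine-tuning a 72B-parameter model practical on limited local hardware.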

📝 Abstract
In robot-assisted minimally invasive surgery, we introduce the Surgical Action Planning (SAP) task, which generates future action plans from visual inputs to address the absence of intraoperative predictive planning in current intelligent applications. SAP shows great potential for enhancing intraoperative guidance and automating procedures. However, it faces challenges such as understanding instrument-action relationships and tracking surgical progress. Large Language Models (LLMs) show promise in understanding surgical video content but remain underexplored for predictive decision-making in SAP, as they focus mainly on retrospective analysis. Challenges like data privacy, computational demands, and modality-specific constraints further highlight significant research gaps. To tackle these challenges, we introduce LLM-SAP, a Large Language Models-based Surgical Action Planning framework that predicts future actions and generates text responses by interpreting natural language prompts of surgical goals. The text responses potentially support surgical education, intraoperative decision-making, procedure documentation, and skill analysis. LLM-SAP integrates two novel modules: the Near-History Focus Memory Module (NHF-MM) for modeling historical states and the prompt factory for action planning. We evaluate LLM-SAP on our constructed CholecT50-SAP dataset using models like Qwen2.5 and Qwen2-VL, demonstrating its effectiveness in next-action prediction. Pre-trained LLMs are tested zero-shot, and supervised fine-tuning (SFT) with LoRA is implemented to address data privacy concerns. Our experiments show that Qwen2.5-72B-SFT surpasses Qwen2.5-72B with 19.3% higher accuracy.
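The abstract describes the NHF-MM only at a high level: modeling historical states with emphasis on the near history. As a hedged illustration (not the paper's implementation), a memory that keeps the most recent frame states at full weight while exponentially down-weighting and compressing older ones could look like this; the window size and decay rate are assumed parameters:

```python
import math

def near_history_weights(num_frames, focus_window=4, decay=0.5):
    """Assign each past frame a normalized weight: frames inside the
    recent focus window get full weight, and older frames decay
    exponentially with their distance from that window."""
    weights = []
    for i in range(num_frames):
        distance = (num_frames - focus_window) - i  # frames before the window
        if distance <= 0:
            weights.append(1.0)  # inside the near-history window
        else:
            weights.append(math.exp(-decay * distance))
    total = sum(weights)
    return [w / total for w in weights]

def summarize_history(frame_states, focus_window=4):
    """Keep recent frame states verbatim and collapse older ones into a
    single compressed placeholder line for the planning prompt."""
    recent = frame_states[-focus_window:]
    older = frame_states[:-focus_window]
    summary = []
    if older:
        summary.append(f"[compressed history of {len(older)} earlier steps]")
    summary.extend(recent)
    return summary
```

The intuition matches the module's name: the most recent states carry the strongest signal about the next action, while distant history only needs to be retained in compressed form to track overall procedural progress.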
Problem

Research questions and friction points this paper is trying to address.

Predicting future surgical actions from visual inputs.
Understanding instrument-action relationships in surgery.
Addressing data privacy and computational challenges in SAP.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-SAP framework predicts future surgical actions
NHF-MM models historical states for planning
LoRA fine-tuning addresses data privacy concerns
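The prompt factory's actual template is not reproduced in this summary. A minimal sketch, assuming it assembles the surgical goal, summarized history states, and currently visible instruments into one natural-language query (all field names below are illustrative, not the paper's):

```python
def build_planning_prompt(goal, history, instruments):
    """Assemble a next-action planning prompt from the surgical goal,
    summarized history states, and visible instruments."""
    lines = [
        f"Surgical goal: {goal}",
        "Observed history (oldest first):",
    ]
    lines += [f"  {i + 1}. {state}" for i, state in enumerate(history)]
    lines.append(f"Visible instruments: {', '.join(instruments)}")
    lines.append("Question: What is the most likely next surgical action?")
    return "\n".join(lines)
```

Structuring the query this way lets the same factory serve both zero-shot inference and SFT, since only the filled-in fields change between training examples.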
Authors

Mengya Xu, The Chinese University of Hong Kong (Vision-Language based Surgical Scene Understanding)
Zhongzhen Huang, Shanghai Jiao Tong University (Medical Image Analysis; Vision and Language)
Jie Zhang, Huazhong University of Science and Technology
Xiaofan Zhang, Shanghai Jiao Tong University, Shanghai AI Laboratory
Qi Dou, The Chinese University of Hong Kong, Hong Kong SAR, China