SAP-Bench: Benchmarking Multimodal Large Language Models in Surgical Action Planning

📅 2025-06-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) lack fine-grained, clinically validated benchmarks for surgical action planning (SAP), hindering rigorous evaluation of their capacity for atomic-level visual action recognition and long-horizon procedural reasoning. Method: We introduce SAP-Bench—the first dedicated multimodal benchmark for SAP—built upon 74 laparoscopic cholecystectomy videos, comprising 1,226 clinically verified action segments and 1,152 temporally aligned “current frame → next action” multimodal samples. We propose a novel dual-dimension evaluation paradigm (“atomic action recognition + long-horizon procedural coordination”) and design MLLM-SAP, a surgical-knowledge-infused inference framework. Contribution/Results: Unified evaluation across seven state-of-the-art MLLMs reveals an average next-action prediction accuracy of only 41.7%, exposing critical deficiencies in fine-grained visual understanding and clinical logic modeling. SAP-Bench establishes a foundational, trustworthy assessment standard for medical AI.

📝 Abstract
Effective evaluation is critical for driving advancements in MLLM research. The surgical action planning (SAP) task, which aims to generate future action sequences from visual inputs, demands precise and sophisticated analytical capabilities. Unlike mathematical reasoning, surgical decision-making operates in life-critical domains and requires meticulous, verifiable processes to ensure reliability and patient safety. The task demands the ability to distinguish between atomic visual actions and to coordinate complex, long-horizon procedures, capabilities that current benchmarks evaluate inadequately. To address this gap, we introduce SAP-Bench, a large-scale, high-quality dataset designed to enable multimodal large language models (MLLMs) to perform interpretable surgical action planning. SAP-Bench is derived from cholecystectomy procedures (mean duration: 1,137.5 s) and introduces temporally grounded surgical action annotations, comprising 1,226 clinically validated action clips (mean duration: 68.7 s) that capture five fundamental surgical actions across 74 procedures. The dataset provides 1,152 strategically sampled current frames, each paired with its corresponding next action as a multimodal analysis anchor. We propose the MLLM-SAP framework, which leverages MLLMs to generate next-action recommendations from the current surgical scene and natural language instructions, enhanced with injected surgical domain knowledge. To assess our dataset's effectiveness and the broader capabilities of current models, we evaluate seven state-of-the-art MLLMs (OpenAI-o1, GPT-4o, QwenVL2.5-72B, Claude-3.5-Sonnet, GeminiPro2.5, Step-1o, and GLM-4v) and reveal critical gaps in next-action prediction performance.
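The abstract describes MLLM-SAP as pairing the current surgical frame with natural-language instructions and injected domain knowledge to elicit a next-action prediction. A minimal sketch of that prompt-construction step is below; the action taxonomy, knowledge snippet, and function name are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of MLLM-SAP-style prompt assembly. The paper's actual
# five-action taxonomy and knowledge base are not specified in this summary,
# so the values below are placeholders.
SURGICAL_ACTIONS = ["grasp", "dissect", "clip", "cut", "retract"]  # placeholder set

DOMAIN_KNOWLEDGE = (
    "In laparoscopic cholecystectomy, the cystic duct and artery should be "
    "clipped before cutting to protect patient safety."
)  # illustrative injected knowledge, not quoted from the paper

def build_next_action_prompt(frame_description: str) -> str:
    """Compose a text prompt that pairs the current-frame description with
    injected domain knowledge and a closed set of candidate next actions."""
    options = ", ".join(SURGICAL_ACTIONS)
    return (
        f"Domain knowledge: {DOMAIN_KNOWLEDGE}\n"
        f"Current surgical scene: {frame_description}\n"
        f"Candidate next actions: {options}\n"
        "Predict the single most appropriate next action."
    )

# The resulting string would be sent to an MLLM together with the frame image.
prompt = build_next_action_prompt("Gallbladder exposed; cystic duct clipped twice.")
print(prompt)
```

Accuracy over the 1,152 frame/next-action pairs can then be computed by comparing each model's chosen action against the clinically verified label.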
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLMs for surgical action planning accuracy
Addressing gaps in current surgical action benchmarks
Enhancing MLLMs with domain-specific surgical knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces SAP-Bench for surgical action planning
Leverages MLLMs with domain knowledge injection
Evaluates seven state-of-the-art MLLMs
👥 Authors
Mengya Xu
The Chinese University of Hong Kong
Vision-Language based Surgical Scene Understanding
Zhongzhen Huang
Shanghai Jiao Tong University
Medical Image Analysis · Vision and Language
Dillan Imans
Sungkyunkwan University, Seoul, South Korea
Yiru Ye
The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
Xiaofan Zhang
Shanghai Jiao Tong University, Shanghai, China
Qi Dou
The Chinese University of Hong Kong, Hong Kong SAR, China