🤖 AI Summary
Research on multimodal large language models (MLLMs) lacks fine-grained, clinically validated benchmarks for surgical action planning (SAP), hindering rigorous evaluation of atomic-level visual action recognition and long-horizon procedural reasoning. Method: We introduce SAP-Bench, the first dedicated multimodal benchmark for SAP, built from 74 laparoscopic cholecystectomy videos and comprising 1,226 clinically verified action segments and 1,152 temporally aligned "current frame → next action" multimodal samples. We propose a dual-dimension evaluation paradigm (atomic action recognition plus long-horizon procedural coordination) and design MLLM-SAP, an inference framework infused with surgical domain knowledge. Contribution/Results: A unified evaluation of seven state-of-the-art MLLMs reveals an average next-action prediction accuracy of only 41.7%, exposing critical deficiencies in fine-grained visual understanding and clinical logic modeling. SAP-Bench establishes a foundational, trustworthy assessment standard for medical AI.
📝 Abstract
Effective evaluation is critical for driving advancements in multimodal large language model (MLLM) research. The surgical action planning (SAP) task, which aims to generate future action sequences from visual inputs, demands precise and sophisticated analytical capabilities. Unlike mathematical reasoning, surgical decision-making operates in a life-critical domain and requires meticulous, verifiable processes to ensure reliability and patient safety. The task demands the ability to distinguish atomic visual actions and to coordinate complex, long-horizon procedures, capabilities that current benchmarks evaluate inadequately. To address this gap, we introduce SAP-Bench, a large-scale, high-quality dataset designed to enable MLLMs to perform interpretable surgical action planning. Derived from cholecystectomy procedures with a mean duration of 1,137.5 s, SAP-Bench introduces temporally grounded surgical action annotations comprising 1,226 clinically validated action clips (mean duration: 68.7 s) that capture five fundamental surgical actions across 74 procedures. The dataset provides 1,152 strategically sampled current frames, each paired with its corresponding next action, as multimodal analysis anchors. We propose MLLM-SAP, a framework that leverages MLLMs to generate next-action recommendations from the current surgical scene and natural-language instructions, enhanced with injected surgical domain knowledge. To assess our dataset's effectiveness and the broader capabilities of current models, we evaluate seven state-of-the-art MLLMs (OpenAI-o1, GPT-4o, QwenVL2.5-72B, Claude-3.5-Sonnet, GeminiPro2.5, Step-1o, and GLM-4v) and reveal critical gaps in next-action prediction performance.
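To make the knowledge-infused next-action query concrete, here is a minimal sketch of what an MLLM-SAP-style evaluation call could look like, assuming the OpenAI Python SDK and GPT-4o (one of the evaluated models). The action labels, prompt wording, and injected knowledge snippet below are illustrative placeholders, not the paper's actual taxonomy or prompts.

```python
# Hedged sketch of a "current frame -> next action" query against an MLLM.
# Assumes the OpenAI Python SDK; labels and prompts are hypothetical.
import base64
from openai import OpenAI

client = OpenAI()

# Hypothetical stand-ins for the benchmark's five fundamental actions.
ACTION_LABELS = ["grasp", "dissect", "clip", "cut", "retract"]

# Illustrative surgical domain knowledge injected into the system prompt.
DOMAIN_KNOWLEDGE = (
    "Laparoscopic cholecystectomy proceeds through exposure of the "
    "hepatocystic triangle, clipping of the cystic duct and artery, and "
    "dissection of the gallbladder from the liver bed."
)

def predict_next_action(frame_path: str) -> str:
    """Ask the MLLM for the next surgical action given the current frame."""
    with open(frame_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a surgical assistant. " + DOMAIN_KNOWLEDGE},
            {"role": "user",
             "content": [
                 {"type": "text",
                  "text": ("Given the current laparoscopic frame, choose the "
                           f"next action from: {', '.join(ACTION_LABELS)}. "
                           "Answer with a single label.")},
                 {"type": "image_url",
                  "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
             ]},
        ],
        temperature=0,  # deterministic output for benchmark scoring
    )
    return response.choices[0].message.content.strip()

print(predict_next_action("current_frame.jpg"))
```

Run over all 1,152 sampled frames, comparing each returned label against the annotated next action would yield the per-model accuracy that the benchmark reports.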