🤖 AI Summary
Current surgical AI systems are often limited to single-modality perception, struggling to comprehensively capture the complex interplay among instruments, actions, and contextual cues in operating rooms. This work presents the first systematic evaluation of state-of-the-art vision-language models (VLMs), including Qwen2.5, LLaVA-1.5, and InternVL-3.5, for zero-shot and parameter-efficient fine-tuned (via LoRA) surgical instrument detection, benchmarked against the open-vocabulary detection baseline Grounding DINO. Results demonstrate that Qwen2.5 consistently achieves superior performance in both zero-shot and fine-tuned settings, exhibiting stronger instrument recognition accuracy and generalization capability, whereas Grounding DINO excels in localization precision. This study establishes a new paradigm and empirical foundation for multimodal perception in surgical AI.
📝 Abstract
Surgery is a highly complex process, and artificial intelligence has emerged as a transformative force in supporting surgical guidance and decision-making. However, the unimodal nature of most current AI systems limits their ability to achieve a holistic understanding of surgical workflows. This highlights the need for general-purpose surgical AI systems capable of comprehensively modeling the interrelated components of surgical scenes. Recent advances in large vision-language models (VLMs) that integrate multimodal data processing offer strong potential for modeling surgical tasks and providing human-like scene reasoning and understanding. Despite their promise, systematic investigations of VLMs in surgical applications remain limited. In this study, we evaluate the effectiveness of large VLMs on the fundamental surgical vision task of detecting surgical tools. Specifically, we investigate three state-of-the-art VLMs, Qwen2.5, LLaVA-1.5, and InternVL-3.5, on the GraSP robotic surgery dataset under both zero-shot and parameter-efficient LoRA fine-tuning settings. Our results demonstrate that among the evaluated VLMs, Qwen2.5 consistently achieves superior detection performance in both configurations. Furthermore, compared with the open-set detection baseline Grounding DINO, Qwen2.5 exhibits stronger zero-shot generalization and comparable fine-tuned performance. Notably, Qwen2.5 shows superior instrument recognition, while Grounding DINO demonstrates stronger localization.
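Benchmarking a VLM on detection requires converting its free-text answer into structured boxes and scoring overlap against ground truth. A minimal sketch of that pipeline, assuming (our illustration, not necessarily the paper's protocol) that the model is prompted to emit JSON-like `{"label": ..., "bbox": [x1, y1, x2, y2]}` entries:

```python
import json
import re

def parse_detections(text):
    """Extract {"label": ..., "bbox": [x1, y1, x2, y2]} objects from a VLM's free-text reply."""
    detections = []
    for match in re.finditer(r"\{[^{}]*\}", text):
        try:
            obj = json.loads(match.group())
        except json.JSONDecodeError:
            continue  # skip non-JSON braces in the reply
        if "label" in obj and isinstance(obj.get("bbox"), list) and len(obj["bbox"]) == 4:
            detections.append(obj)
    return detections

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Hypothetical model reply for one surgical frame
reply = ('I see two tools: {"label": "needle driver", "bbox": [10, 20, 110, 220]} '
         'and {"label": "forceps", "bbox": [300, 40, 380, 160]}.')
dets = parse_detections(reply)
print([d["label"] for d in dets])
print(round(iou(dets[0]["bbox"], [12, 25, 108, 210]), 3))
```

A prediction then counts as correct when the label matches and the IoU exceeds a threshold (commonly 0.5); this separation of label matching from box overlap is exactly what lets one model (Qwen2.5) lead on recognition while another (Grounding DINO) leads on localization.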