Evaluating Large Vision-Language Models for Surgical Tool Detection

πŸ“… 2026-01-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Current surgical AI systems are often limited to single-modality perception and struggle to capture the complex interplay among instruments, actions, and contextual cues in the operating room. This work presents the first systematic evaluation of state-of-the-art vision-language models (VLMs), including Qwen2.5, LLaVA-1.5, and InternVL-3.5, for surgical instrument detection in both zero-shot and parameter-efficient fine-tuned (LoRA) settings, benchmarked against the open-vocabulary detection baseline Grounding DINO. Results show that Qwen2.5 consistently achieves the strongest performance among the evaluated VLMs in both settings, with better instrument recognition accuracy and generalization, whereas Grounding DINO excels in localization precision. The study provides an empirical foundation for multimodal perception in surgical AI.
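
As a concrete illustration of the zero-shot setting above, a VLM can be prompted to return instrument boxes as structured JSON. The minimal sketch below uses the public Qwen2.5-VL instruct checkpoint; the prompt wording, instrument vocabulary, and output schema are illustrative assumptions, not the paper's exact protocol.

```python
import json
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Assumed checkpoint size; the paper does not pin a specific variant here.
MODEL_ID = "Qwen/Qwen2.5-VL-7B-Instruct"

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

# Hypothetical instrument vocabulary; GraSP defines its own class list.
TOOLS = ["bipolar forceps", "needle driver", "monopolar curved scissors"]

def detect_tools(image):
    """Prompt the VLM for surgical tools as JSON boxes (image: PIL.Image)."""
    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": (
                "Detect every surgical instrument in this frame. "
                f"Choose labels from: {', '.join(TOOLS)}. Reply with JSON: "
                '[{"bbox_2d": [x1, y1, x2, y2], "label": "..."}]')},
        ],
    }]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=256)
    reply = processor.batch_decode(
        out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
    # Models often wrap JSON in a markdown fence; keep only the bracketed part.
    start, end = reply.find("["), reply.rfind("]") + 1
    return json.loads(reply[start:end])
```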

πŸ“ Abstract
Surgery is a highly complex process, and artificial intelligence has emerged as a transformative force in supporting surgical guidance and decision-making. However, the unimodal nature of most current AI systems limits their ability to achieve a holistic understanding of surgical workflows. This highlights the need for general-purpose surgical AI systems capable of comprehensively modeling the interrelated components of surgical scenes. Recent advances in large vision-language models (VLMs) that integrate multimodal data processing offer strong potential for modeling surgical tasks and providing human-like scene reasoning and understanding. Despite their promise, systematic investigations of VLMs in surgical applications remain limited. In this study, we evaluate the effectiveness of large VLMs for the fundamental surgical vision task of detecting surgical tools. Specifically, we investigate three state-of-the-art VLMs, Qwen2.5, LLaVA-1.5, and InternVL-3.5, on the GraSP robotic surgery dataset under both zero-shot and parameter-efficient LoRA fine-tuning settings. Our results demonstrate that Qwen2.5 consistently achieves superior detection performance in both configurations among the evaluated VLMs. Furthermore, compared with the open-set detection baseline Grounding DINO, Qwen2.5 exhibits stronger zero-shot generalization and comparable fine-tuned performance. Notably, Qwen2.5 shows superior instrument recognition, while Grounding DINO demonstrates stronger localization.
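
The abstract's parameter-efficient LoRA setting can be sketched with the PEFT library; the rank, scaling, and target modules below are conventional defaults assumed for illustration, not the paper's reported hyperparameters.

```python
from peft import LoraConfig, get_peft_model
from transformers import Qwen2_5_VLForConditionalGeneration

# Same assumed checkpoint as the zero-shot sketch above.
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)

# Illustrative hyperparameters; the paper's actual LoRA settings may differ.
lora_config = LoraConfig(
    r=16,             # rank of the low-rank update matrices
    lora_alpha=32,    # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Freezing the base model and training only these low-rank adapters is what makes per-task fine-tuning of a multi-billion-parameter VLM tractable on modest hardware.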
Problem

Research questions and friction points this paper is trying to address.

surgical tool detection
vision-language models
multimodal understanding
surgical AI
zero-shot generalization
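
For comparison with the Grounding DINO baseline named in the abstract, open-set detection takes candidate class names as a free-text prompt rather than a fixed label head. A minimal inference sketch against the public Hugging Face checkpoint (the checkpoint choice is an assumption; default post-processing thresholds are used):

```python
import torch
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

# Public checkpoint; the paper may have used a different Grounding DINO variant.
MODEL_ID = "IDEA-Research/grounding-dino-base"

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForZeroShotObjectDetection.from_pretrained(MODEL_ID)

def detect(image, class_names):
    """Open-vocabulary detection; classes go in as a lowercase, dot-separated prompt."""
    prompt = ". ".join(class_names).lower() + "."
    inputs = processor(images=image, text=prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Returns a dict with "scores", "labels", and "boxes" in pixel coordinates.
    return processor.post_process_grounded_object_detection(
        outputs, inputs.input_ids, target_sizes=[image.size[::-1]]
    )[0]
```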
Innovation

Methods, ideas, or system contributions that make the work stand out.

large vision-language models
surgical tool detection
zero-shot learning
parameter-efficient fine-tuning
multimodal surgical AI
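
The recognition-versus-localization distinction drawn in the summary can be made concrete with the standard matching rule: a prediction counts only if its label matches the ground truth and the boxes overlap at IoU >= 0.5. A self-contained sketch (the threshold and greedy matching policy are conventional defaults, not necessarily the paper's evaluation protocol):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def classify_prediction(pred, ground_truths, iou_thresh=0.5):
    """Separate the two failure modes the summary contrasts."""
    best = max(ground_truths, key=lambda g: iou(pred["bbox"], g["bbox"]), default=None)
    if best is None:
        return "false_positive"
    if iou(pred["bbox"], best["bbox"]) < iou_thresh:
        return "localization_error"   # the axis where Grounding DINO is reported stronger
    if pred["label"] != best["label"]:
        return "recognition_error"    # the axis where Qwen2.5 is reported stronger
    return "true_positive"
```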
Nakul Poudel
Center for Imaging Science, Rochester Institute of Technology, Rochester, NY 14623
R. Simon
Biomedical Engineering, Rochester Institute of Technology, Rochester, NY 14623
Cristian A. Linte
Biomedical Engineering & Center for Imaging Science, Rochester Institute of Technology
Biomedical Imaging and Image Computing Β· Biomedical Modeling Β· Simulation and Visualization Β· Computer-assisted Interventions