🤖 AI Summary
Existing surgical instrument benchmarks support only category-level segmentation, which is insufficient for clinical applications requiring precise localization of specific tool instances based on function, spatial relationships, or anatomical interactions. This work proposes the first language-guided benchmark for instance-level surgical tool localization, introducing a novel language-conditioned instance segmentation task that spans multiple surgical procedures, imaging modalities, and complex operative scenarios. By pairing natural language descriptions with images and incorporating both bounding box and point-level anchor annotations, the benchmark jointly evaluates vision–language models’ capabilities in referential grounding and pixel-level localization within multi-instrument settings. Experiments reveal that current state-of-the-art models perform poorly on this task, underscoring the urgent need for surgical AI systems to develop robust vision–language reasoning grounded in clinical context.
📝 Abstract
Clinically reliable perception of surgical scenes is essential for advancing intelligent, context-aware intraoperative assistance such as instrument handoff guidance, collision avoidance, and workflow-aware robotic support. Existing surgical tool benchmarks primarily evaluate category-level segmentation, requiring models to detect all instances of predefined instrument classes. However, real-world clinical decisions often require resolving references to a specific instrument instance based on its functional role, spatial relation, or anatomical interaction, capabilities not captured by current evaluation paradigms. We introduce GroundedSurg, the first language-conditioned, instance-level surgical grounding benchmark. Each benchmark instance pairs a surgical image with a natural-language description targeting a single instrument, accompanied by structured spatial grounding annotations including bounding boxes and point-level anchors. The dataset spans ophthalmic, laparoscopic, robotic, and open procedures, encompassing diverse instrument types, imaging conditions, and operative complexities. By jointly evaluating linguistic reference resolution and pixel-level localization, GroundedSurg enables a systematic evaluation of vision-language models in clinically realistic multi-instrument scenes. Extensive experiments demonstrate substantial performance gaps across modern segmentation models and VLMs, highlighting the urgent need for clinically grounded vision-language reasoning in surgical AI systems. Code and data are publicly available at https://github.com/gaash-lab/GroundedSurg.
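To make the evaluation setup concrete, the sketch below shows a hypothetical annotation record (image + referring expression + bounding box + point anchor) and a simple grounding check combining box IoU with a point-inside-box test. All field names, thresholds, and the scoring rule are illustrative assumptions, not the dataset's actual schema or official metric.

```python
def box_iou(a, b):
    """IoU between two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def point_hit(point, box):
    """True if the point-level anchor falls inside the predicted box."""
    x, y = point
    return box[0] <= x <= box[2] and box[1] <= y <= box[3]

# Hypothetical record: one image, one language reference, one target instance.
record = {
    "image": "frame_000123.png",
    "expression": "the grasper retracting tissue on the left",
    "bbox": (40, 60, 180, 220),   # ground-truth box (x1, y1, x2, y2)
    "point": (110, 140),          # point anchor inside the instrument
}

pred_box = (50, 70, 190, 230)     # a model's predicted box for the expression
iou = box_iou(record["bbox"], pred_box)
# Illustrative success criterion: IoU >= 0.5 and the anchor is covered.
grounded = iou >= 0.5 and point_hit(record["point"], pred_box)
```

A full evaluation would aggregate such checks over all expressions and additionally score pixel-level masks; this sketch only illustrates the shape of the referential grounding task.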