Bridging Vision and Language for Robust Context-Aware Surgical Point Tracking: The VL-SurgPT Dataset and Benchmark

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Surgical smoke, specular reflections, and tissue deformation severely degrade the robustness of intraoperative point tracking, while existing datasets lack the semantic context needed to diagnose failure modes. To address this, we introduce VL-SurgPT, the first vision-language surgical point tracking dataset, which pairs keypoints with fine-grained textual descriptions of their states (e.g., “partially occluded by smoke”, “distorted by mirror-like reflection”) to explicitly encode failure semantics. We further propose TG-SurgPT, a text-guided, context-aware tracking paradigm that dynamically modulates visual features using linguistic priors. In benchmarks of eight state-of-the-art trackers on VL-SurgPT, TG-SurgPT improves mean Percentage of Correct Keypoints (mPCK) by 12.7% under smoke and reflection degradation, significantly outperforming vision-only baselines and demonstrating the value of multimodal semantic guidance for robust surgical navigation.
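The headline metric, mean Percentage of Correct Keypoints (mPCK), can be illustrated with a short sketch: PCK is the fraction of predicted points landing within a pixel threshold of ground truth, averaged over several thresholds. The threshold set below is an assumption (a common choice in point-tracking benchmarks), not taken from the paper.

```python
import numpy as np

def pck(pred, gt, threshold):
    """PCK: fraction of predicted points within `threshold` pixels
    of their ground-truth locations."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return float((dists <= threshold).mean())

def mean_pck(pred, gt, thresholds=(1, 2, 4, 8, 16)):
    """mPCK: PCK averaged over a set of pixel thresholds.
    The default thresholds are a hypothetical choice for illustration."""
    return float(np.mean([pck(pred, gt, t) for t in thresholds]))

# Toy example: three tracked points, errors of 1, 5, and 30 pixels.
gt = np.array([[10.0, 10.0], [50.0, 50.0], [100.0, 100.0]])
pred = np.array([[11.0, 10.0], [53.0, 54.0], [130.0, 100.0]])
print(mean_pck(pred, gt))  # ≈ 0.467
```

A point 5 pixels off counts as correct only at the 8- and 16-pixel thresholds, so averaging over thresholds rewards both coarse and fine localization.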

📝 Abstract
Accurate point tracking in surgical environments remains challenging due to complex visual conditions, including smoke occlusion, specular reflections, and tissue deformation. While existing surgical tracking datasets provide coordinate information, they lack the semantic context necessary to understand tracking failure mechanisms. We introduce VL-SurgPT, the first large-scale multimodal dataset that bridges visual tracking with textual descriptions of point status in surgical scenes. The dataset comprises 908 in vivo video clips, including 754 for tissue tracking (17,171 annotated points across five challenging scenarios) and 154 for instrument tracking (covering seven instrument types with detailed keypoint annotations). We establish comprehensive benchmarks using eight state-of-the-art tracking methods and propose TG-SurgPT, a text-guided tracking approach that leverages semantic descriptions to improve robustness in visually challenging conditions. Experimental results demonstrate that incorporating point status information significantly improves tracking accuracy and reliability, particularly in adverse visual scenarios where conventional vision-only methods struggle. By bridging visual and linguistic modalities, VL-SurgPT enables the development of context-aware tracking systems crucial for advancing computer-assisted surgery applications that can maintain performance even under challenging intraoperative conditions.
Problem

Research questions and friction points this paper is trying to address.

Tracking surgical points accurately under complex visual conditions like smoke and tissue deformation
Existing datasets lack semantic context to understand tracking failure mechanisms
Developing context-aware tracking systems that maintain performance in challenging surgical scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging semantic descriptions to enhance tracking robustness
Introducing multimodal dataset with visual and textual annotations
Proposing text-guided approach for adverse surgical conditions
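TG-SurgPT's internals are not detailed on this page; as an illustration of "dynamically modulating visual features using linguistic priors", one common mechanism is FiLM-style conditioning, where a text embedding predicts a per-channel scale and shift applied to visual features. Everything below (shapes, weights, names) is a hypothetical sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def film_modulate(visual_feat, text_emb, W_gamma, W_beta):
    """FiLM-style conditioning: the text embedding predicts a per-channel
    scale (gamma) and shift (beta) applied to the visual features."""
    gamma = text_emb @ W_gamma            # (C,) per-channel scale offsets
    beta = text_emb @ W_beta              # (C,) per-channel shifts
    return visual_feat * (1.0 + gamma) + beta

C, D = 8, 4                               # feature channels, text-embedding dim
visual = rng.normal(size=(16, C))         # features for 16 tracked points
text = rng.normal(size=(D,))              # embedding of e.g. "occluded by smoke"
W_gamma = rng.normal(size=(D, C)) * 0.1   # small init keeps modulation near identity
W_beta = rng.normal(size=(D, C)) * 0.1
out = film_modulate(visual, text, W_gamma, W_beta)
print(out.shape)  # (16, 8)
```

In a trained model the projection weights would be learned, so a description like "partially occluded by smoke" could, for instance, down-weight appearance channels and up-weight motion channels for the affected points.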
Rulin Zhou
The Chinese University of Hong Kong Shenzhen Research Institute
Deep Learning, Medical Image Processing
Wenlong He
Shenzhen University
An Wang
The Chinese University of Hong Kong
Jianhang Zhang
Shenzhen University
Xuanhui Zeng
Shenzhen University
Xi Zhang
Shenzhen University
Chaowei Zhu
Division of Gastrointestinal Surgery, Shenzhen People’s Hospital
Haijun Hu
Division of Gastrointestinal Surgery, Shenzhen People’s Hospital
Hongliang Ren
Chinese University of Hong Kong | National University of Singapore | JHU/Harvard(RF) | CUHK(PhD)
Biorobotics & intelligent systems, medical mechatronics, continuum/soft flexible robots/sensors, multisensory perception