Video-based Surgical Tool-tip and Keypoint Tracking using Multi-frame Context-driven Deep Learning Models

📅 2025-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Accurate temporal tracking of surgical instrument tips and critical anatomical landmarks in robotic surgery videos remains challenging due to motion blur, occlusion, and frame-to-frame inconsistency. Method: This paper proposes a multi-frame context-driven deep learning framework that overcomes the limitations of single-frame detection by explicitly modeling temporal continuity and spatiotemporal dependencies. Integrating CNNs and Transformers, the method employs a novel spatiotemporal attention mechanism to enable end-to-end multi-frame feature fusion and robust keypoint regression. Results: On the EndoVis 2015 dataset, the framework achieves 90% keypoint detection accuracy with a 5.27-pixel RMS localization error; on JIGSAWS it attains under 4.2-pixel RMS error, significantly outperforming state-of-the-art single-frame approaches. This work is the first to systematically incorporate multi-frame contextual modeling into surgical instrument keypoint tracking, establishing a high-precision temporal localization foundation for clinical applications such as surgical skill assessment and dynamic safety-zone delineation.
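To make the core idea concrete, here is a minimal, illustrative sketch of multi-frame keypoint regression with temporal self-attention. This is not the authors' actual architecture: the identity projections, the 64-dim features, the random linear head, and the function names (`temporal_attention_fuse`, `regress_keypoints`) are all assumptions for illustration; in the paper, per-frame features would come from a CNN backbone and all projections would be learned.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention_fuse(frame_feats):
    """Fuse per-frame feature vectors with scaled dot-product
    self-attention over the time axis, then average over frames.

    frame_feats: (T, D) array, one D-dim feature per frame.
    Toy version: q, k, v use identity projections; a real model
    would learn W_q, W_k, W_v.
    """
    T, D = frame_feats.shape
    q = k = v = frame_feats
    attn = softmax(q @ k.T / np.sqrt(D), axis=-1)  # (T, T) temporal weights
    fused = attn @ v                               # (T, D) context-enriched
    return fused.mean(axis=0)                      # (D,) clip-level feature

def regress_keypoints(feat, W, b):
    """Linear head mapping the fused feature to K (x, y) keypoints."""
    return (W @ feat + b).reshape(-1, 2)

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 64))        # 5 frames, 64-dim features each
W = rng.standard_normal((2 * 2, 64)) * 0.1  # hypothetical head for K=2 keypoints
b = np.zeros(4)
kps = regress_keypoints(temporal_attention_fuse(feats), W, b)
print(kps.shape)  # (2, 2): two keypoints, (x, y) each
```

The point of the attention step is that each frame's feature is re-expressed as a weighted mix of all frames in the window, so a blurred or occluded frame can borrow evidence from its neighbors before the keypoint head runs.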

📝 Abstract
Automated tracking of surgical tool keypoints in robotic surgery videos is an essential task for various downstream use cases such as skill assessment, expertise assessment, and the delineation of safety zones. In recent years, the explosion of deep learning for vision applications has led to many works on surgical instrument segmentation, while less focus has been placed on tracking specific tool keypoints, such as tool tips. In this work, we propose a novel, multi-frame context-driven deep learning framework to localize and track tool keypoints in surgical videos. We train and test our models on the annotated frames from the 2015 EndoVis Challenge dataset, achieving state-of-the-art performance. By leveraging sophisticated deep learning models and multi-frame context, we achieve 90% keypoint detection accuracy and a localization RMS error of 5.27 pixels. Results on a self-annotated JIGSAWS dataset with more challenging scenarios also show that the proposed multi-frame models can accurately track tool-tip and tool-base keypoints, with an overall RMS error under 4.2 pixels. Such a framework paves the way for accurately tracking surgical instrument keypoints, enabling further downstream use cases. Project and dataset webpage: https://tinyurl.com/mfc-tracker
Problem

Research questions and friction points this paper is trying to address.

Robotic Surgery
Tool Tip Tracking
Skill Assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-frame Enhancement
Deep Learning
Surgical Tool Tracking
Bhargav Ghanekar
PhD student, Rice University
Computational Imaging
Lianne R. Johnson
Rice University, Houston TX USA
Jacob L. Laughlin
Rice University, Houston TX USA
Marcia K. O'Malley
Rice University, Houston TX USA
Ashok Veeraraghavan
Professor, ECE, Rice University
Computational Imaging, Computational Photography, Computer Vision, mHealth, Bio-Imaging