SurgPose: a Dataset for Articulated Robotic Surgical Tool Pose Estimation and Tracking

📅 2025-02-17
📈 Citations: 0 (influential: 0)
🤖 AI Summary
To address the accuracy and efficiency bottlenecks in 6D pose estimation of surgical instruments, which stem from high calibration costs and the scarcity of public benchmarks, this paper introduces SurgPose: a large-scale surgical instrument pose estimation dataset with instance-level semantic keypoints and native stereo video. The authors propose a UV-fluorescent marker-based annotation method that enables non-intrusive, high-precision keypoint labeling. SurgPose comprises approximately 120K instrument instances across six instrument categories, each annotated with seven semantic keypoints; 3D pose ground truth is obtained by lifting the 2D keypoints to 3D using stereo-matching depth from synchronized stereo video. The dataset provides 80K training and 40K validation samples, along with baseline benchmarks using models such as HRNet. Baseline experiments on keypoint detection and tracking demonstrate the dataset's utility, positioning SurgPose as a foundational resource for augmented reality-guided navigation and learning-based autonomous surgical manipulation.

📝 Abstract
Accurate and efficient surgical robotic tool pose estimation is of fundamental significance to downstream applications such as augmented reality (AR) in surgical training and learning-based autonomous manipulation. While significant advancements have been made in pose estimation for humans and animals, it is still a challenge in surgical robotics due to the scarcity of published data. The relatively large absolute error of the da Vinci end effector kinematics and arduous calibration procedure make calibrated kinematics data collection expensive. Driven by this limitation, we collected a dataset, dubbed SurgPose, providing instance-aware semantic keypoints and skeletons for visual surgical tool pose estimation and tracking. By marking keypoints using ultraviolet (UV) reactive paint, which is invisible under white light and fluorescent under UV light, we execute the same trajectory under different lighting conditions to collect raw videos and keypoint annotations, respectively. The SurgPose dataset consists of approximately 120k surgical instrument instances (80k for training and 40k for validation) of 6 categories. Each instrument instance is labeled with 7 semantic keypoints. Since the videos are collected in stereo pairs, the 2D pose can be lifted to 3D based on stereo-matching depth. In addition to releasing the dataset, we test a few baseline approaches to surgical instrument tracking to demonstrate the utility of SurgPose. More details can be found at surgpose.github.io.
Problem

Research questions and friction points this paper is trying to address.

Estimating robotic surgical tool pose accurately.
Addressing data scarcity in surgical robotics.
Enhancing surgical training with augmented reality.
Innovation

Methods, ideas, or system contributions that make the work stand out.

UV reactive paint labeling
Stereo-matching depth estimation
Instance-aware semantic keypoints
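The stereo-based lifting described above (2D keypoints plus stereo-matching depth yielding 3D pose) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline; the function name and the intrinsic/baseline values in the usage comment are hypothetical placeholders, assuming a rectified stereo pair with known focal length `fx`, principal point `(cx, cy)`, and baseline `b`:

```python
def lift_keypoint_to_3d(u, v, disparity, fx, cx, cy, baseline):
    """Triangulate a 2D keypoint (u, v) from the left image into camera-frame 3D.

    For a rectified stereo pair, depth follows from disparity as
    z = fx * baseline / disparity, and (x, y) follow from the pinhole model.
    """
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    z = fx * baseline / disparity          # depth along the optical axis
    x = (u - cx) * z / fx                  # back-project horizontal pixel offset
    y = (v - cy) * z / fx                  # back-project vertical pixel offset
    return (x, y, z)


# Illustrative usage with placeholder intrinsics (fx=640 px, principal point
# at image center, 5 mm baseline): a keypoint at the principal point with a
# 32 px disparity lands on the optical axis at 0.1 m depth.
pt = lift_keypoint_to_3d(u=320, v=240, disparity=32,
                         fx=640, cx=320, cy=240, baseline=0.005)
```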
👥 Authors

Zijian Wu — Robotics and Control Laboratory (RCL), the University of British Columbia, Vancouver, Canada
Adam Schmidt — Intuitive Surgical, Sunnyvale, USA
Randy Moore — University of British Columbia (robotics)
Haoying Zhou — Worcester Polytechnic Institute, Worcester, USA
Alexandre Banks — Research Assistant, University of Oxford (medical robotics, ultrasound, machine learning, computer vision)
Peter Kazanzides — Johns Hopkins University (robotics, medical robotics, software engineering)
Septimiu E. Salcudean — Robotics and Control Laboratory (RCL), the University of British Columbia, Vancouver, Canada