SurgRIPE challenge: Benchmark of Surgical Robot Instrument Pose Estimation

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses two critical challenges in markerless surgical instrument pose estimation: low accuracy and the absence of standardized evaluation benchmarks. To this end, the authors introduce SurgRIPE—an open-source surgical video dataset featuring precise, frame-level 6-degree-of-freedom (6DoF) ground-truth instrument poses—and establish a standardized international benchmark for markerless instrument pose estimation, hosted as an official EndoVis challenge at MICCAI 2023. Participating methods combine techniques such as keypoint detection, direct 6DoF pose regression, and synthetic surgical video generation with accurate pose supervision. The top-performing algorithms achieve notable improvements in accuracy and robustness over existing baselines, demonstrating practical feasibility for real-world surgical robot deployment and advancing vision-driven autonomous surgical systems.

📝 Abstract
Accurate instrument pose estimation is a crucial step towards the future of robotic surgery, enabling applications such as autonomous surgical task execution. Vision-based methods for surgical instrument pose estimation provide a practical approach to tool tracking, but they often require markers to be attached to the instruments. Recently, more research has focused on the development of marker-less methods based on deep learning. However, acquiring realistic surgical data, with ground truth instrument poses, required for deep learning training, is challenging. To address the issues in surgical instrument pose estimation, we introduce the Surgical Robot Instrument Pose Estimation (SurgRIPE) challenge, hosted at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2023. The objectives of this challenge are: (1) to provide the surgical vision community with realistic surgical video data paired with ground truth instrument poses, and (2) to establish a benchmark for evaluating markerless pose estimation methods. The challenge led to the development of several novel algorithms that showcased improved accuracy and robustness over existing methods. The performance evaluation study on the SurgRIPE dataset highlights the potential of these advanced algorithms to be integrated into robotic surgery systems, paving the way for more precise and autonomous surgical procedures. The SurgRIPE challenge has successfully established a new benchmark for the field, encouraging further research and development in surgical robot instrument pose estimation.
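Benchmarks like SurgRIPE compare methods by how far a predicted 6DoF pose deviates from the ground truth. A widely used metric in this area is the Average Distance (ADD) error, which transforms the instrument's 3D model points by both poses and averages the point-wise distances. The sketch below is a generic illustration of that metric, not the challenge's official evaluation code, and the function and variable names are assumptions for this example.

```python
import numpy as np

def add_error(R_gt, t_gt, R_pred, t_pred, model_points):
    """Average Distance (ADD) metric for 6DoF pose estimation.

    Transforms the instrument's 3D model points (N, 3) by the ground-truth
    and predicted rigid poses, then returns the mean point-wise distance
    (in the same units as model_points, typically millimetres).
    """
    pts_gt = model_points @ R_gt.T + t_gt        # points under ground-truth pose
    pts_pred = model_points @ R_pred.T + t_pred  # points under predicted pose
    return np.linalg.norm(pts_gt - pts_pred, axis=1).mean()

# Identical poses give zero error; a pure 5 mm translation gives a 5 mm error.
model = np.random.rand(100, 3)
R, t = np.eye(3), np.zeros(3)
print(add_error(R, t, R, t, model))                            # 0.0
print(add_error(R, t, R, np.array([3.0, 4.0, 0.0]), model))    # 5.0
```

A pose is typically counted as correct when the ADD error falls below a fraction of the instrument model's diameter, which makes scores comparable across tools of different sizes.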
Problem

Research questions and friction points this paper is trying to address.

Algorithm Evaluation
Markerless Deep Learning
Surgical Robot Precision
Innovation

Methods, ideas, or system contributions that make the work stand out.

SurgRIPE Challenge
Instrument Pose Estimation
Markerless Surgical Instruments
Haozheng Xu
The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom
Alistair Weld
Imperial College London
Computer Vision · Artificial Intelligence · Machine Learning · Signal Processing · Medical Imaging
Chi Xu
The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom
Alfie Roddan
Imperial College
João Cartucho
PhD student, Imperial College
Computer Vision · Robotics · Media Art
Mert Asim Karaoglu
PhD Candidate, TU Munich | Senior Research Engineer, ImFusion GmbH
Surgical Computer Vision · 3D Vision · Deep Learning
A. Ladikos
ImFusion GmbH, Munich, Germany
Yangke Li
The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom
Yiping Li
Eindhoven University of Technology, Eindhoven, Netherlands
Daiyun Shen
PhD at National University of Singapore
Medical AI
Shoujie Yang
Department of Biomedical Engineering, National University of Singapore, Singapore
Geonhee Lee
Department of Transdisciplinary Medicine, Seoul National University Hospital, South Korea
Seyeon Park
Department of Transdisciplinary Medicine, Seoul National University Hospital, South Korea
Jongho Shin
Department of Transdisciplinary Medicine, Seoul National University Hospital, South Korea
Young-Gon Kim
Department of Transdisciplinary Medicine, Seoul National University Hospital, South Korea
Lucy Fothergill
School of Computing, University of Leeds, United Kingdom
Dominic Jones
STORMLab, University of Leeds, United Kingdom
Pietro Valdastri
Professor of Robotics and Autonomous Systems, University of Leeds
Capsule Robots · Robotic Surgery · Magnetic Manipulation
Duygu Sarikaya
University of Leeds, School of Computer Science
Computer Assisted Surgery · Medical Image Computing · Computer Vision
Stamatia Giannarou
The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom