Setup-Invariant Augmented Reality for Teaching by Demonstration with Surgical Robots

πŸ“… 2025-04-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Current AR-based surgical training systems require expert supervision and homogeneous robot configurations, hindering unsupervised, in-situ practice outside the operating room. To address this, we propose dV-STEAR, an open-source system enabling mentor-free, pose-agnostic AR demonstration playback for heterogeneous da Vinci platforms. Our method integrates dVRK-based joint estimation, rigid-body registration, and task-aligned AR rendering to reconstruct 3D procedural guidance without requiring initial alignment between instructor and trainee robot poses. The system achieves a mean registration error of 3.86 ± 2.01 mm. A user study (N=24) demonstrates statistically significant improvements: reduced ring-transfer task completion time (p=0.03), less collision time (p=0.01), a higher grasp-and-place success rate (p=0.004), more balanced bimanual coordination, and markedly lower perceived frustration. This work establishes the first framework for scalable, hardware-agnostic AR surgical skill transfer across diverse teleoperated robotic platforms.

πŸ“ Abstract
Augmented reality (AR) is an effective tool in robotic surgery education as it combines exploratory learning with three-dimensional guidance. However, existing AR systems require expert supervision and do not account for differences in the mentor and mentee robot configurations. To enable novices to train outside the operating room while receiving expert-informed guidance, we present dV-STEAR: an open-source system that plays back task-aligned expert demonstrations without assuming identical setup joint positions between expert and novice. Pose estimation was rigorously quantified, showing a registration error of 3.86 mm (SD = 2.01 mm). In a user study (N=24), dV-STEAR significantly improved novice performance on tasks from the Fundamentals of Laparoscopic Surgery. In a single-handed ring-over-wire task, dV-STEAR increased completion speed (p=0.03) and reduced collision time (p=0.01) compared to dry-lab training alone. During a pick-and-place task, it improved success rates (p=0.004). Across both tasks, participants using dV-STEAR exhibited significantly more balanced hand use and reported lower frustration levels. This work presents a novel educational tool implemented on the da Vinci Research Kit, demonstrates its effectiveness in teaching novices, and builds the foundation for further AR integration into robot-assisted surgery.
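The rigid-body registration step that underlies the reported 3.86 mm error can be illustrated with the standard Kabsch/Horn least-squares alignment of corresponding point sets. The sketch below is a generic implementation of that technique, not the paper's released code; the function names and the use of mean residual distance as the error metric are assumptions for illustration.

```python
import numpy as np

def rigid_register(source, target):
    """Estimate the rigid transform (R, t) that best maps the Nx3
    point set `source` onto `target` in the least-squares sense
    (Kabsch/Horn method via SVD of the cross-covariance)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation with det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

def registration_error(source, target, R, t):
    """Mean Euclidean residual after applying (R, t), in the
    units of the input points (e.g. mm)."""
    aligned = (R @ source.T).T + t
    return float(np.linalg.norm(aligned - target, axis=1).mean())
```

With exact correspondences the residual is zero up to numerical precision; in practice, noise in the estimated fiducial or joint positions produces a nonzero mean residual like the value reported above.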
Problem

Research questions and friction points this paper is trying to address.

Enables AR-guided surgical training without identical robot setups
Reduces expert supervision needs for novice surgical practice
Improves laparoscopic task performance metrics for trainees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Setup-invariant AR for surgical robot training
Open-source expert demonstration playback system
Pose estimation with low registration error
πŸ”Ž Similar Papers
No similar papers found.
Alexandre Banks
Research Assistant, University of Oxford
medical robotics, ultrasound, machine learning, computer vision
Richard Cook
UBC Department of Surgery, University of British Columbia (UBC), Vancouver, BC V6T 1Z4, Canada
S. Salcudean
UBC Electrical and Computer Engineering Department, University of British Columbia (UBC), Vancouver, BC V6T 1Z4, Canada