SurgiPose: Estimating Surgical Tool Kinematics from Monocular Video for Surgical Robot Learning

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the scarcity of ground-truth kinematic annotations in monocular surgical videos, a gap that hinders imitation learning (IL) for surgical robots. We propose an end-to-end, differentiable rendering-based pose estimation method that jointly optimizes 3D instrument trajectories and joint angles, the first application of differentiable rendering to monocular surgical instrument pose estimation. The approach requires no ground-truth kinematic labels, operating solely on standard clinical video frame sequences, and is integrated with the dVRK Si robotic platform. Evaluated on tissue lifting and needle pickup tasks, IL policies trained on the estimated kinematics achieve success rates comparable to those trained with ground-truth supervision, demonstrating the feasibility of leveraging large-scale, uncurated intraoperative videos for autonomous robot learning. The core contribution is removing the bottleneck that has kept clinical video data from being used directly for IL training, establishing a new paradigm for low-cost, scalable surgical robot learning.

📝 Abstract
Imitation learning (IL) has shown immense promise in enabling autonomous dexterous manipulation, including learning surgical tasks. To fully unlock the potential of IL for surgery, access to clinical datasets is needed; unfortunately, these lack the kinematic data required by current IL approaches. Monocular surgical videos available online are a promising source of large-scale surgical demonstrations, making monocular pose estimation a crucial step toward enabling large-scale robot learning. Toward this end, we propose SurgiPose, a differentiable rendering-based approach that estimates kinematic information from monocular surgical videos, eliminating the need for direct access to ground-truth kinematics. Our method infers tool trajectories and joint angles by optimizing tool pose parameters to minimize the discrepancy between rendered and real images. To evaluate the effectiveness of our approach, we conduct experiments on two robotic surgical tasks, tissue lifting and needle pickup, using the da Vinci Research Kit Si (dVRK Si). We train imitation learning policies with both ground-truth measured kinematics and kinematics estimated from video, and compare their performance. Our results show that policies trained on estimated kinematics achieve success rates comparable to those trained on ground-truth data, demonstrating the feasibility of using monocular video-based kinematic estimation for surgical robot learning. By enabling kinematic estimation from monocular surgical videos, our work lays the foundation for large-scale learning of autonomous surgical policies from online surgical data.
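The core mechanism described in the abstract is per-frame optimization of pose parameters against an image-space loss between a rendered and an observed view of the tool. The sketch below illustrates that idea in PyTorch under heavy simplifications and is not the paper's implementation: the instrument is reduced to a handful of keypoints, the differentiable renderer is a crude Gaussian-splat silhouette, the camera is an assumed pinhole model, and the target mask is synthesized from a hidden pose so the script is self-contained. All names and dimensions are illustrative assumptions.

```python
# Minimal sketch of differentiable-rendering pose estimation (illustrative only).
import torch

H, W, FOCAL = 64, 64, 80.0          # toy image size and focal length (assumed)

def rodrigues(rvec):
    """Axis-angle vector (3,) -> rotation matrix (3,3), differentiable."""
    theta = torch.sqrt((rvec * rvec).sum() + 1e-12)
    k = rvec / theta
    zero = torch.zeros((), dtype=rvec.dtype)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    return torch.eye(3) + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def tool_keypoints(joints):
    """Toy articulated tool: shaft base, shaft tip, two jaw tips opened by joints[0]."""
    jaw = joints[0]
    zero = torch.zeros(())
    return torch.stack([
        torch.tensor([0.0, 0.0, 0.0]),
        torch.tensor([0.0, 0.0, 0.10]),
        torch.stack([0.03 * torch.sin(jaw), zero, 0.10 + 0.03 * torch.cos(jaw)]),
        torch.stack([-0.03 * torch.sin(jaw), zero, 0.10 + 0.03 * torch.cos(jaw)]),
    ])

def render_silhouette(rvec, tvec, joints):
    """Soft 'silhouette': Gaussian splats at projected keypoints, a crude stand-in
    for a differentiable renderer of the full instrument model."""
    pts_cam = tool_keypoints(joints) @ rodrigues(rvec).T + tvec            # (N, 3)
    uv = FOCAL * pts_cam[:, :2] / pts_cam[:, 2:3] + torch.tensor([W / 2.0, H / 2.0])
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                                   # (H, W, 2)
    d2 = ((grid[None] - uv[:, None, None, :]) ** 2).sum(-1)                # (N, H, W)
    return torch.exp(-d2 / (2.0 * 2.0 ** 2)).max(dim=0).values             # (H, W)

# Observed tool mask for one frame (synthesized here from a hidden "true" pose).
with torch.no_grad():
    target = render_silhouette(torch.tensor([0.10, -0.20, 0.05]),
                               torch.tensor([0.02, -0.01, 0.35]),
                               torch.tensor([0.40]))

# Pose parameters to be recovered by gradient descent on the image loss.
rvec = torch.zeros(3, requires_grad=True)                      # rotation (axis-angle)
tvec = torch.tensor([0.0, 0.0, 0.30], requires_grad=True)      # translation
joints = torch.zeros(1, requires_grad=True)                    # articulation (jaw angle)
opt = torch.optim.Adam([rvec, tvec, joints], lr=1e-2)

for step in range(400):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render_silhouette(rvec, tvec, joints), target)
    loss.backward()
    opt.step()

print("estimated translation:", tvec.detach().tolist(), "jaw angle:", joints.item())
```

The design point the sketch preserves is the gradient path: the image loss differentiates through the renderer back to rotation, translation, and joint angles, so no kinematic labels are needed. Swapping the splat stand-in for a differentiable renderer of the full instrument model would give the higher-fidelity version of the same loop.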
Problem

Research questions and friction points this paper is trying to address.

Estimating surgical tool kinematics from monocular videos
Enabling imitation learning without ground truth kinematic data
Using video-based estimation for surgical robot policy training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable rendering estimates tool kinematics from video
Optimizes pose parameters to match rendered and real images
Enables imitation learning without ground truth kinematic data (see the sketch below)
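To make the imitation-learning point above concrete, the sketch below shows plain behavior cloning on per-frame tool kinematics of the kind SurgiPose estimates from video. The state layout, network size, delta-action parameterization, and the synthetic trajectory are all assumptions for illustration; the paper trains and evaluates its policies on the dVRK Si for tissue lifting and needle pickup.

```python
# Minimal behavior-cloning sketch on video-estimated kinematics (illustrative only).
import torch
import torch.nn as nn

STATE_DIM = 7    # e.g. tool position (3), orientation (3), jaw angle (1) -- assumed
T = 200          # demonstration length

# Stand-in "estimated kinematics": one random trajectory of tool states.
traj = torch.cumsum(0.01 * torch.randn(T, STATE_DIM), dim=0)
obs, actions = traj[:-1], traj[1:] - traj[:-1]        # action = next-state delta

policy = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                       nn.Linear(128, 128), nn.ReLU(),
                       nn.Linear(128, STATE_DIM))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(obs), actions)
    loss.backward()
    opt.step()

# At execution time the policy would be rolled out on the robot.
with torch.no_grad():
    state = obs[0]
    for _ in range(10):
        state = state + policy(state)     # apply predicted delta to current state
```

Because the supervision comes from video-estimated kinematics rather than robot encoders, the same recipe applies to demonstrations for which no ground-truth kinematics were ever recorded.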