Towards Suturing World Models: Learning Predictive Models for Robotic Surgical Tasks

📅 2025-03-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of spatiotemporal dynamic modeling for laparoscopic suturing procedures. We propose the first video diffusion generative framework specifically designed for world modeling in minimally invasive suturing. Methodologically, we adapt LTX-Video and HunyuanVideo—previously developed for general video generation—to surgical motion synthesis, employing both parameter-efficient LoRA fine-tuning and full-model optimization to model fine-grained needle-holder maneuvers (e.g., positioning, penetration, withdrawal) in high-resolution (≥768×512) and long-duration (≥49-frame) sequences. We introduce a novel annotation scheme distinguishing ideal versus suboptimal execution, endowing the model with intrinsic suturing quality discrimination capability. Evaluated on ~2K simulated surgical clips, our framework achieves high-fidelity motion generation and demonstrates statistically significant separability of operation quality. This work delivers the first deployable generative foundation model for surgical simulators, objective skill assessment, and autonomous suturing systems.

📝 Abstract
We introduce specialized diffusion-based generative models that capture the spatiotemporal dynamics of fine-grained robotic surgical sub-stitch actions through supervised learning on annotated laparoscopic surgery footage. The proposed models form a foundation for data-driven world models capable of simulating the biomechanical interactions and procedural dynamics of surgical suturing with high temporal fidelity. Annotating a dataset of ~2K clips extracted from simulation videos, we categorize surgical actions into fine-grained sub-stitch classes including ideal and non-ideal executions of needle positioning, targeting, driving, and withdrawal. We fine-tune two state-of-the-art video diffusion models, LTX-Video and HunyuanVideo, to generate high-fidelity surgical action sequences at ≥768×512 resolution and ≥49 frames. For training our models, we explore both Low-Rank Adaptation (LoRA) and full-model fine-tuning approaches. Our experimental results demonstrate that these world models can effectively capture the dynamics of suturing, potentially enabling improved training simulators, surgical skill assessment tools, and autonomous surgical systems. The models also display the capability to differentiate between ideal and non-ideal technique execution, providing a foundation for building surgical training and evaluation systems. We release our models for testing and as a foundation for future research. Project Page: https://mkturkcan.github.io/suturingmodels/
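The abstract mentions Low-Rank Adaptation (LoRA) as one of the two fine-tuning strategies. A minimal sketch of the idea, assuming a plain PyTorch linear layer rather than the actual LTX-Video or HunyuanVideo code: the base weights are frozen, and only a low-rank update `B @ A` (scaled by `alpha / r`) is trained, which shrinks the trainable parameter count by orders of magnitude.

```python
# Illustrative LoRA sketch (assumption: not the paper's actual training code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as identity update
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(1024, 1024), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")  # trainable: 16384 (~1.5% of total)
```

In practice, libraries such as Hugging Face `peft` apply this wrapping automatically to the attention projections of a diffusion transformer; the sketch above shows only the core mechanism.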
Problem

Research questions and friction points this paper is trying to address.

Develop generative models for robotic surgical suturing dynamics.
Simulate biomechanical interactions in surgical suturing with high fidelity.
Differentiate ideal and non-ideal surgical techniques for training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-based generative models for surgical dynamics
Fine-grained sub-stitch action classification in suturing
High-fidelity video generation for surgical training
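The annotation scheme pairs each clip with a fine-grained sub-stitch class and an ideal/non-ideal quality flag, which can then condition generation. A minimal sketch of such a labeling scheme, where the class names, field names, and prompt template are assumptions for illustration (the released dataset format may differ):

```python
# Hypothetical annotation schema sketch; names and prompt format are assumptions.
from dataclasses import dataclass

# Sub-stitch classes described in the abstract.
SUB_STITCH_CLASSES = (
    "needle_positioning",
    "needle_targeting",
    "needle_driving",
    "needle_withdrawal",
)

@dataclass(frozen=True)
class ClipAnnotation:
    clip_id: str
    sub_stitch: str   # one of SUB_STITCH_CLASSES
    ideal: bool       # True for ideal execution, False for non-ideal

    def prompt(self) -> str:
        """Build a text prompt for conditional video generation."""
        quality = "ideal" if self.ideal else "non-ideal"
        action = self.sub_stitch.replace("_", " ")
        return f"laparoscopic suturing, {action}, {quality} technique"

ann = ClipAnnotation("clip_0001", "needle_driving", ideal=False)
print(ann.prompt())  # laparoscopic suturing, needle driving, non-ideal technique
```

Conditioning the video diffusion model on prompts like these is one straightforward way to obtain the ideal/non-ideal discrimination the paper reports.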
Mehmet Kerem Turkcan
Columbia University, New York, NY, USA
Mattia Ballo
Northwell Health, Lenox Hill Hospital, New Hyde Park, NY, USA
Filippo Filicori
Northwell Health, Lenox Hill Hospital, New Hyde Park, NY, USA
Zoran Kostic
Professor of Electrical Engineering, Columbia University