🤖 AI Summary
This work addresses key challenges in surgical AI and simulation—namely data scarcity, difficulty in synthesizing rare events, and the sim-to-real gap—by proposing a lightweight, multimodal conditional video diffusion model. Unlike existing approaches that often rely on costly annotations and suffer from poor temporal consistency or realism, the proposed method, Surgical Action World (SAW), introduces 2D tool-tip trajectories as a geometric prior, ensuring anatomically plausible motion without requiring depth information at inference. Conditioned only on a language prompt, a reference image, a tissue affordance mask, and trajectory signals, the model generates high-fidelity, temporally coherent surgical action videos. After fine-tuning on a newly curated dataset of 12,044 laparoscopic clips, the approach achieves state-of-the-art temporal consistency (CD-FVD: 199.19) and substantially improves rare-action recognition (e.g., the clipping F1-score rises from 20.93% to 43.14%), enabling high-quality surgical simulation.
📝 Abstract
A surgical world model capable of generating realistic surgical action videos with precise control over tool-tissue interactions can address fundamental challenges in surgical AI and simulation -- from data scarcity and rare event synthesis to bridging the sim-to-real gap for surgical automation. However, current video generation methods, the very core of such surgical world models, require expensive annotations or complex structured intermediates as conditioning signals at inference, limiting their scalability. Other approaches exhibit limited temporal consistency across complex laparoscopic scenes and lack sufficient realism. We propose Surgical Action World (SAW) -- a step toward surgical action world modeling through video diffusion conditioned on four lightweight signals: language prompts encoding tool-action context, a reference surgical scene, a tissue affordance mask, and 2D tool-tip trajectories. We design a conditional video diffusion approach that reformulates video-to-video diffusion into trajectory-conditioned surgical action synthesis. The backbone diffusion model is fine-tuned on a custom-curated dataset of 12,044 laparoscopic clips with lightweight spatiotemporal conditioning signals, leveraging a depth consistency loss to enforce geometric plausibility without requiring depth at inference. SAW achieves state-of-the-art temporal consistency (CD-FVD: 199.19 vs. 546.82) and strong visual quality on held-out test data. Furthermore, we demonstrate its downstream utility for (a) surgical AI, where augmenting rare actions with SAW-generated videos improves action recognition (clipping F1-score: 20.93% to 43.14%; cutting: 0.00% to 8.33%) on real test data, and (b) surgical simulation, where rendering tool-tissue interaction videos from simulator-derived trajectories is a step toward a visually faithful simulation engine.
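The abstract's depth consistency loss -- supervising geometric plausibility during training while keeping inference depth-free -- can be sketched as follows. This is a minimal illustration, not the paper's implementation: `depth_fn` stands in for a frozen monocular depth estimator applied to both generated and reference frames, and all names are hypothetical.

```python
import numpy as np

def depth_consistency_loss(gen_frames, real_frames, depth_fn):
    """Hypothetical depth consistency loss: penalize disagreement between
    monocular depth estimates of generated and real frames. Depth is used
    only as a training-time signal; inference needs no depth input."""
    d_gen = np.stack([depth_fn(f) for f in gen_frames])    # (T, H, W)
    d_real = np.stack([depth_fn(f) for f in real_frames])  # (T, H, W)
    return float(np.mean((d_gen - d_real) ** 2))           # MSE over clip

# Toy stand-in for a frozen depth estimator: mean over color channels.
toy_depth = lambda frame: frame.mean(axis=-1)

clip = np.random.rand(8, 64, 64, 3)  # T x H x W x C generated clip
print(depth_consistency_loss(clip, clip, toy_depth))  # identical clips -> 0.0
```

In the real model this term would be added, weighted, to the standard diffusion denoising objective; the sketch only shows the geometric-plausibility comparison itself.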