🤖 AI Summary
To address severe class imbalance—particularly for rare categories—in surgical video datasets, which critically hinders model performance, this paper proposes a two-stage text-guided video diffusion generation framework. We introduce a novel spatio-temporal disentangled latent diffusion architecture that decouples 2D spatial modeling from temporal attention, enabling high-fidelity, temporally coherent synthesis of rare-class surgical videos under textual conditioning. Furthermore, we incorporate a semantic consistency–based rejection sampling strategy to enhance generation quality and improve fine-grained category controllability. Evaluated on surgical action recognition and intraoperative event prediction tasks, our method achieves substantial improvements—averaging +5.2% accuracy and +7.8% F1-score—effectively mitigating dataset bias. The implementation is publicly available.
📝 Abstract
Computer-assisted interventions can improve intra-operative guidance, particularly through deep learning methods that harness the spatiotemporal information in surgical videos. However, the severe data imbalance often found in surgical video datasets hinders the development of high-performing models. In this work, we aim to overcome the data imbalance by synthesizing surgical videos. We propose a unique two-stage, text-conditioned diffusion-based method to generate high-fidelity surgical videos for under-represented classes. Our approach conditions the generation process on text prompts and decouples spatial and temporal modeling by utilizing a 2D latent diffusion model to capture spatial content and then integrating temporal attention layers to ensure temporal consistency. Furthermore, we introduce a rejection sampling strategy to select the most suitable synthetic samples, effectively augmenting existing datasets to address class imbalance. We evaluate our method on two downstream tasks-surgical action recognition and intra-operative event prediction-demonstrating that incorporating synthetic videos from our approach substantially enhances model performance. We open-source our implementation at https://gitlab.com/nct_tso_public/surgvgen.