Mitigating Surgical Data Imbalance with Dual-Prediction Video Diffusion Model

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Rare action and instrument classes in surgical videos leave downstream models insufficiently robust. To address this few-shot setting, we propose a diffusion-based video generation method featuring a dual-prediction diffusion module that jointly models RGB frames and optical flow to strengthen motion representation. We also introduce a sparse visual encoder that enables controllable generation from coarse-grained supervision alone, such as action class labels or instrument presence indicators, removing the need for dense spatiotemporal annotations. Evaluated across multiple public surgical datasets, our method effectively mitigates data imbalance: generated videos improve performance by 10-20% on average for action recognition, surgical tool detection, and endoscopic motion prediction tasks, significantly outperforming existing video generation and data augmentation baselines.

📝 Abstract
Surgical video datasets are essential for scene understanding, enabling procedural modeling and intra-operative support. However, these datasets are often heavily imbalanced, with rare actions and tools under-represented, which limits the robustness of downstream models. We address this challenge with *SurgiFlowVid*, a sparse and controllable video diffusion framework for generating surgical videos of under-represented classes. Our approach introduces a dual-prediction diffusion module that jointly denoises RGB frames and optical flow, providing temporal inductive biases to improve motion modeling from limited samples. In addition, a sparse visual encoder conditions the generation process on lightweight signals (e.g., sparse segmentation masks or RGB frames), enabling controllability without dense annotations. We validate our approach on three surgical datasets across tasks including action recognition, tool presence detection, and laparoscope motion prediction. Synthetic data generated by our method yields consistent gains of 10-20% over competitive baselines, establishing *SurgiFlowVid* as a promising strategy to mitigate data imbalance and advance surgical video understanding methods.
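The dual-prediction objective described in the abstract can be sketched with a toy DDPM-style training step: noise is added to both the RGB clip and its optical-flow fields, and a single network is trained to predict the noise in both modalities jointly. This is a minimal numpy illustration, not the paper's architecture; the shapes, the noise schedule value, and the stand-in denoiser are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, alpha_bar):
    """DDPM-style forward process: x_t = sqrt(a_bar)*x_0 + sqrt(1-a_bar)*eps."""
    eps = rng.standard_normal(x.shape)
    x_t = np.sqrt(alpha_bar) * x + np.sqrt(1.0 - alpha_bar) * eps
    return x_t, eps

def toy_denoiser(rgb_t, flow_t):
    """Stand-in for the dual-prediction network: in the real model a shared
    trunk would jointly predict the noise added to both modalities.
    Here it just returns zeros of the right shapes."""
    return np.zeros_like(rgb_t), np.zeros_like(flow_t)

# Toy clip: T RGB frames of size H x W, and T-1 two-channel flow fields.
T, H, W = 4, 8, 8
rgb = rng.standard_normal((T, 3, H, W))
flow = rng.standard_normal((T - 1, 2, H, W))

alpha_bar = 0.7  # cumulative noise-schedule term at some timestep t (assumed)
rgb_t, eps_rgb = add_noise(rgb, alpha_bar)
flow_t, eps_flow = add_noise(flow, alpha_bar)

pred_rgb, pred_flow = toy_denoiser(rgb_t, flow_t)

# Joint objective: sum of per-modality noise-prediction MSEs.
# A relative weighting between the two terms would be a tunable choice.
loss = np.mean((pred_rgb - eps_rgb) ** 2) + np.mean((pred_flow - eps_flow) ** 2)
print(float(loss) > 0.0)
```

The point of the joint loss is that gradients from the flow term give the shared backbone an explicit motion signal, which is the temporal inductive bias the abstract credits for better motion modeling from few samples.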
Problem

Research questions and friction points this paper is trying to address.

Addressing surgical video dataset imbalance for rare actions
Generating underrepresented surgical tool videos via diffusion model
Improving downstream task robustness with synthetic surgical data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-prediction diffusion jointly denoises RGB and optical flow
Sparse visual encoder enables controllable generation with lightweight signals
Generates surgical videos for underrepresented classes to mitigate imbalance
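The "lightweight signals" idea above can be sketched as a sparse conditioning tensor: only a few keyframes carry guidance (e.g., a segmentation mask), all other frames are zeros, and an indicator channel tells the model which frames are actually conditioned. The two-channel layout and the frame choices below are illustrative assumptions, not the paper's encoder.

```python
import numpy as np

def sparse_condition(num_frames, keyframes, height, width):
    """Build a sparse conditioning tensor of shape (T, 2, H, W).

    Channel 0 holds the guidance signal (here a hypothetical binary mask)
    on conditioned frames and zeros elsewhere; channel 1 is a 0/1 indicator
    marking which frames carry guidance at all.
    """
    cond = np.zeros((num_frames, 2, height, width))
    for t, mask in keyframes.items():
        cond[t, 0] = mask   # the sparse guidance itself
        cond[t, 1] = 1.0    # indicator: this frame is conditioned
    return cond

T, H, W = 8, 4, 4
# Condition only the first and last frames; the model must fill in the rest.
keyframes = {0: np.ones((H, W)), 7: np.ones((H, W))}
cond = sparse_condition(T, keyframes, H, W)
print(cond[:, 1].reshape(T, -1).max(axis=1))  # per-frame indicator flags
```

Because unconditioned frames are explicitly flagged rather than faked, the generator can be trained with arbitrarily sparse annotations, which is what removes the dependence on dense spatiotemporal labels.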