🤖 AI Summary
Dense video captioning (DVC) for surgical videos faces critical challenges due to the scarcity of densely annotated data, the heterogeneity across procedures, and the absence of standardized, large-scale DVC benchmarks in surgery.
Method: We propose the first surgical-domain-specific DVC framework, built on a video-language joint representation model. The approach combines multimodal pretraining on educational videos (cross-modal alignment, temporal denoising, and caption generation) with a lightweight temporal modeling network and a parameter-efficient fine-tuning strategy that projects downstream annotations into the language domain and adapts the model with LoRA, enabling zero-shot and few-shot phase segmentation.
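To make the parameter-efficient fine-tuning concrete, here is a minimal LoRA sketch in PyTorch. It is an illustration only: the rank `r`, scaling `alpha`, and the choice of which projection to wrap are assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (LoRA)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # freeze pretrained weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)       # low-rank update starts at zero
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only lora_a and lora_b receive gradients: r * (d_in + d_out)
        # trainable parameters per adapted layer.
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# Example: adapt a hypothetical attention projection of the language decoder.
proj = LoRALinear(nn.Linear(768, 768), r=8)
out = proj(torch.randn(4, 768))                  # shape (4, 768)
```

Because the base weights stay frozen, only the small low-rank factors need to be trained and stored per downstream task.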
Results: Experiments demonstrate improvements of up to 7% in phase segmentation accuracy and 8% in zero-shot phase segmentation, with few-shot results on par with fully supervised baselines. Notably, the framework delivers the first end-to-end dense captioning of surgical videos, establishing a new paradigm for low-resource medical video understanding.
📝 Abstract
Automated surgical workflow analysis is crucial for education, research, and clinical decision-making, but the lack of annotated datasets hinders the development of accurate and comprehensive workflow analysis solutions. We introduce a novel approach to addressing the sparsity and heterogeneity of annotated training data, inspired by how humans learn by watching experts and understanding their explanations. Our method leverages a video-language model trained on alignment, denoising, and generative tasks to learn short-term spatio-temporal and multimodal representations; a task-specific temporal model then captures relationships across entire videos. To achieve comprehensive video-language understanding in the surgical domain, we introduce a data collection and filtering strategy that builds a large-scale pretraining dataset from educational YouTube videos. We then apply parameter-efficient fine-tuning by projecting downstream task annotations from publicly available surgical datasets into the language domain. Extensive experiments in two surgical domains demonstrate the effectiveness of our approach, with performance improvements of up to 7% in phase segmentation, 8% in zero-shot phase segmentation, and few-shot capabilities comparable to fully supervised models. Harnessing the model's long-range temporal localization and text generation capabilities, we present the first comprehensive solution for dense video captioning (DVC) of surgical videos, despite the absence of existing DVC datasets in the surgical domain. In sum, our approach combines large-scale video-language pretraining with optimized fine-tuning, improving on state-of-the-art techniques and enabling new downstream tasks for surgical video understanding.
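As a rough illustration of the annotation-projection and zero-shot ideas, the sketch below maps phase labels to language-domain prompts and labels each frame by text-video similarity in a joint embedding space. Everything here is an assumption for illustration: the prompts are hypothetical Cholec80-style phrasings, and the two linear layers are stand-ins for the pretrained video and text encoders.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
EMBED_DIM = 256

# Stand-ins for the jointly pretrained encoders; in practice these are the
# video and text towers of the pretrained video-language model.
video_encoder = torch.nn.Linear(512, EMBED_DIM)   # frame features -> joint space
text_encoder = torch.nn.Linear(300, EMBED_DIM)    # prompt features -> joint space

# Hypothetical projection of phase labels into the language domain
# (Cholec80-style labels; the paper's exact prompt wording is not given here).
phase_prompts = [
    "the surgeon prepares the operative field",          # Preparation
    "the surgeon dissects the calot triangle",           # CalotTriangleDissection
    "the surgeon clips and cuts the cystic structures",  # ClippingCutting
    "the surgeon dissects the gallbladder",              # GallbladderDissection
]

# Dummy inputs: per-frame visual features and per-prompt text features.
frame_feats = torch.randn(120, 512)                  # 120 frames of a clip
prompt_feats = torch.randn(len(phase_prompts), 300)  # one vector per prompt

# Zero-shot phase segmentation: cosine similarity in the joint space,
# then argmax over the phase prompts for every frame.
v = F.normalize(video_encoder(frame_feats), dim=-1)
t = F.normalize(text_encoder(prompt_feats), dim=-1)
phase_per_frame = (v @ t.T).argmax(dim=-1)           # (120,) phase indices
print(phase_per_frame[:10])
```

Because supervision lives entirely in the text prompts, the same scoring loop extends to new phases or new procedures without retraining the visual backbone.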