🤖 AI Summary
Existing surgical vision models rely solely on unimodal visual inputs, require labor-intensive manual annotations, and are constrained by fixed category definitions, resulting in poor generalization. To address these limitations, we propose SurgVLP, the first self-supervised vision-language pretraining paradigm tailored to surgical scenarios. It leverages publicly available online surgical course videos and their ASR-generated transcripts, eliminating the need for human annotation. Methodologically, SurgVLP overcomes domain-specific language challenges, including surgical terminology recognition and highly specialized contextual understanding, via contrastive learning for cross-modal alignment within a surgically customized pretraining framework. Experiments demonstrate that SurgVLP significantly outperforms unimodal baselines across multiple downstream tasks, enables zero-shot transfer and few-shot adaptation, and generalizes effectively to unseen surgical procedures. We release our code and pretrained weights to advance data-efficient surgical AI.
📝 Abstract
Recent advancements in surgical computer vision applications have been driven by vision-only models, which do not explicitly integrate the rich semantics of language into their design. These methods rely on manually annotated surgical videos to predict a fixed set of object categories, limiting their generalizability to unseen surgical procedures and downstream tasks. In this work, we put forward the idea that the surgical video lectures available through open surgical e-learning platforms can provide effective vision and language supervisory signals for multi-modal representation learning without relying on manual annotations. We address the surgery-specific linguistic challenges present in surgical video lectures by employing multiple complementary automatic speech recognition systems to generate text transcriptions. We then present a novel method, SurgVLP - Surgical Vision Language Pre-training, for multi-modal representation learning. Extensive experiments across diverse surgical procedures and tasks demonstrate that the multi-modal representations learned by SurgVLP exhibit strong transferability and adaptability in surgical video analysis. Furthermore, our zero-shot evaluations highlight SurgVLP's potential as a general-purpose foundation model for surgical workflow analysis, reducing the reliance on extensive manual annotations for downstream tasks, and facilitating adaptation methods such as few-shot learning to build a scalable and data-efficient solution for various downstream surgical applications. The [training code](https://github.com/CAMMA-public/SurgVLP) and [weights](https://github.com/CAMMA-public/PeskaVLP) are public.
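For intuition, the cross-modal contrastive alignment described above can be sketched as a symmetric InfoNCE objective over a batch of paired video-clip and transcript embeddings: matched pairs are pulled together, all other pairings in the batch act as negatives. The snippet below is a minimal NumPy illustration of this general technique, not the authors' implementation; the embedding dimension, temperature value, and toy data are assumptions for demonstration only.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def symmetric_infonce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired video/text embeddings.

    Matched (video_i, text_i) pairs are positives; every other pairing
    in the batch serves as a negative.
    """
    v = l2_normalize(video_emb)
    t = l2_normalize(text_emb)
    logits = v @ t.T / temperature      # (B, B) similarity matrix
    labels = np.arange(len(logits))     # positives lie on the diagonal

    def cross_entropy(lg, lb):
        # Log-softmax over each row, then pick out the positive entry.
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lb)), lb].mean()

    # Average the video-to-text and text-to-video directions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

# Toy batch: 4 clip/transcript pairs with 8-dim embeddings (hypothetical data).
rng = np.random.default_rng(0)
video = rng.normal(size=(4, 8))
text = video + 0.1 * rng.normal(size=(4, 8))  # nearly aligned positive pairs
loss = symmetric_infonce(video, text)
```

Because the toy text embeddings are small perturbations of the video embeddings, the loss is close to zero; shuffling or randomizing the text side raises it toward chance level, which is the signal that drives the learned cross-modal alignment.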