🤖 AI Summary
This work addresses zero-shot cross-procedure and cross-center surgical phase recognition. We propose a general video-language modeling framework supervised solely by natural language, eliminating reliance on manual surgical video annotations. Our method constructs a three-level video-text alignment hierarchy (clip-level actions, phase-level summaries, and video-level abstractions) and introduces a fine-to-coarse contrastive learning scheme. Crucially, we design a hierarchical, disentangled multi-granularity cross-modal embedding space that jointly models short-term procedural actions and long-term surgical concepts. The resulting model supports zero-shot prompt-based inference without fine-tuning. It achieves state-of-the-art performance across multiple public surgical datasets and, more importantly, transfers directly to unseen procedures and heterogeneous clinical centers, demonstrating, for the first time, generalizable semantic understanding of surgical workflows.
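Zero-shot prompt-based inference of the kind described above amounts to nearest-neighbor classification in the shared embedding space: each phase is described by a text prompt, and a clip is assigned to the phase whose prompt embedding it is most similar to. The sketch below illustrates this with plain NumPy vectors; the prompt texts are hypothetical examples (loosely modeled on cholecystectomy phases), not the paper's actual templates, and in HecVL the vectors would come from the pretrained video and text encoders.

```python
import numpy as np

# Hypothetical phase prompts; illustrative only, not HecVL's actual templates.
PHASE_PROMPTS = {
    "preparation": "In this phase, the surgeon prepares the site and inserts trocars.",
    "dissection": "In this phase, the surgeon dissects the hepatocystic triangle.",
    "clipping": "In this phase, the surgeon clips and cuts the cystic duct and artery.",
}

def zero_shot_phase(video_emb, prompt_embs):
    """Classify a clip by cosine similarity to each phase-prompt embedding.

    video_emb: 1-D array from the video encoder (here, a raw vector).
    prompt_embs: dict mapping phase name -> 1-D text embedding.
    """
    v = video_emb / np.linalg.norm(video_emb)
    best, best_sim = None, -np.inf
    for phase, t in prompt_embs.items():
        sim = float(v @ (t / np.linalg.norm(t)))
        if sim > best_sim:
            best, best_sim = phase, sim
    return best
```

Because classification reduces to comparing against prompt embeddings, new phases or procedures can be recognized simply by writing new prompts, with no fine-tuning of the model.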
📝 Abstract
Natural language could play an important role in developing generalist surgical models by providing a broad source of supervision from raw texts. This flexible form of supervision can enable a model's transferability across datasets and tasks, as natural language can be used to reference learned visual concepts or to describe new ones. In this work, we present HecVL, a novel hierarchical video-language pretraining approach for building a generalist surgical model. Specifically, we construct a hierarchical video-text paired dataset by pairing surgical lecture videos with three hierarchical levels of texts: at the clip level, atomic actions from transcribed audio; at the phase level, conceptual text summaries; and at the video level, an overall abstract of the surgical procedure. We then propose a novel fine-to-coarse contrastive learning framework that learns separate embedding spaces for the three video-text hierarchies with a single model. By disentangling the embedding spaces of the different hierarchical levels, the learned multi-modal representations encode both short-term and long-term surgical concepts in the same model. Thanks to the injected textual semantics, we demonstrate that HecVL enables zero-shot surgical phase recognition without any human annotation. Furthermore, we show that the same HecVL model for surgical phase recognition can be transferred across different surgical procedures and medical centers. The code is available at https://github.com/CAMMA-public/SurgVLP.
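A fine-to-coarse contrastive objective over disentangled embedding spaces can be sketched as a per-level symmetric InfoNCE loss summed across the clip, phase, and video hierarchies. The NumPy sketch below is a minimal illustration, not the paper's implementation: the 0.07 temperature, the equal level weights, and the function names are assumptions, and the embeddings at each level are presumed to come from that level's own projection head so the three spaces stay separate.

```python
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired video/text embeddings.

    Matched pairs sit on the diagonal of the similarity matrix; the loss
    is cross-entropy in both directions (video->text and text->video).
    """
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature            # (B, B) cosine similarities
    labels = np.arange(len(logits))           # true pairs on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # stabilize the softmax
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    return 0.5 * (xent(logits) + xent(logits.T))

def hierarchical_loss(pairs, weights=(1.0, 1.0, 1.0)):
    """Sum the per-level losses; `pairs` maps each hierarchy level to a
    (video_emb, text_emb) batch assumed to come from that level's own
    projection head, keeping the three embedding spaces disentangled."""
    total = 0.0
    for w, level in zip(weights, ("clip", "phase", "video")):
        v, t = pairs[level]
        total += w * info_nce(v, t)
    return total
```

Because each level contributes its own term over its own space, short-term action alignment (clip level) and long-term procedural alignment (phase and video levels) are optimized jointly by a single model.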