🤖 AI Summary
This study addresses key clinical prediction tasks in surgical data science (length of hospital stay, postoperative complications, and surgical phase recognition) by proposing a multimodal transfer-learning framework built upon the Video Joint-Embedding Predictive Architecture (V-JEPA). Methodologically, it employs V-JEPA as the visual backbone, integrated with a dedicated time-series encoder to jointly model heterogeneous intraoperative data (e.g., endoscopic video, physiological signals, instrument trajectories), and leverages domain adaptation and modular decision networks to combine unsupervised pretraining with supervised fine-tuning in a shared representation space. Its primary contribution is extending the JEPA paradigm to multimodal surgical modeling, enabling the joint use of unlabeled video and time-series data. Evaluated on a private hepatectomy dataset and the public HeiCo benchmark, the pretrained model is on par with the top-performing submissions of the EndoVis 2017 challenge; after fine-tuning, it improves performance across all downstream tasks, indicating strong generalizability and clinical applicability.
📝 Abstract
We investigate how both the adaptation of a generic foundation model via transfer learning and the integration of complementary modalities from the operating room (OR) can support surgical data science. To this end, we use V-JEPA as the single-modality foundation of a multimodal model for minimally invasive surgery support. We analyze how the model's downstream performance can benefit (a) from finetuning on unlabeled surgical video data and (b) from providing additional time-resolved data streams from the OR in a multimodal setup.
In an in-house dataset of liver surgery videos, we analyze the tasks of predicting hospital length of stay and postoperative complications. In videos of the public HeiCo dataset, we analyze the task of surgical phase recognition. As a baseline, we apply pretrained V-JEPA to all tasks. We then finetune it on unlabeled, held-out videos to investigate its change in performance after domain adaptation. Following the idea of modular decision support networks, we integrate additional data streams from the OR by training a separate encoder to form a shared representation space with V-JEPA's embeddings.
Our experiments show that finetuning on domain-specific data increases model performance. On the in-house data, integrating additional time-resolved data likewise benefits the model. On the HeiCo data, accuracy of the pretrained, video-only baseline is on par with the top-performing submissions of the EndoVis 2017 challenge, and finetuning on domain-specific data increases accuracy further. Our results thus demonstrate how surgical data science can leverage public, generic foundation models. Likewise, they indicate the potential of domain adaptation and of integrating suitable complementary data streams from the OR. To support further research, we release our code and model weights at https://github.com/DigitalSurgeryLab-Basel/ML-CDS-2025.
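The modular setup described in the abstract (frozen video foundation embeddings plus a separate encoder for time-resolved OR data, fused in a shared representation space) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the embedding dimension, the mean-pooling time-series encoder, the random stand-in for V-JEPA outputs, and the concatenation-based fusion are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D_SHARED = 16  # hypothetical shared embedding dimension

def l2_normalize(x, axis=-1):
    # Normalize embeddings so both modalities live on a comparable scale.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

# Hypothetical stand-in for frozen V-JEPA video embeddings (one per clip).
video_emb = l2_normalize(rng.normal(size=(4, D_SHARED)))

# Minimal time-series "encoder": mean-pool over time, then a linear
# projection into the shared space (weights random here; in practice
# they would be trained to align with the video embeddings).
def encode_timeseries(ts, W):
    pooled = ts.mean(axis=1)          # (batch, channels)
    return l2_normalize(pooled @ W)   # (batch, D_SHARED)

W = rng.normal(size=(8, D_SHARED)) * 0.1   # 8 hypothetical OR signal channels
ts = rng.normal(size=(4, 100, 8))          # 4 clips, 100 time steps, 8 channels
ts_emb = encode_timeseries(ts, W)

# Shared-space fusion: concatenate the two modality embeddings before a
# downstream decision head (e.g., length-of-stay or phase classifier).
fused = np.concatenate([video_emb, ts_emb], axis=1)
assert fused.shape == (4, 2 * D_SHARED)
```

The key design point this sketch mirrors is modularity: the video backbone can be swapped or finetuned independently of the time-series encoder, since both only interact through the shared embedding space.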