🤖 AI Summary
This work addresses a limitation of existing foundation models for surgical videos: their pixel-level training objectives spend model capacity on low-level visual artifacts such as smoke and specular reflections while failing to capture high-level semantic structure. To overcome this, the authors propose a video-native foundation model tailored to surgical videos, shifting the learning objective from pixel-level reconstruction to latent motion prediction. Built upon the V-JEPA architecture, the method introduces three key innovations: motion-guided latent prediction, spatiotemporal affinity-based self-distillation, and feature diversity regularization. The model is pretrained on UniSurg-15M, a large-scale surgical video dataset, and achieves state-of-the-art performance across 17 benchmark tasks, including surgical phase recognition, action triplet understanding, skill assessment, polyp segmentation, and depth estimation.
📝 Abstract
While foundation models have advanced surgical video analysis, current approaches rely predominantly on pixel-level reconstruction objectives that waste model capacity on low-level visual details (smoke, specular reflections, fluid motion) rather than the semantic structures essential for surgical understanding. We present UniSurg, a video-native foundation model that shifts the learning paradigm from pixel-level reconstruction to latent motion prediction. Built on the Video Joint Embedding Predictive Architecture (V-JEPA), UniSurg introduces three key technical innovations tailored to surgical videos: 1) motion-guided latent prediction to prioritize semantically meaningful regions, 2) spatiotemporal affinity self-distillation to enforce relational consistency, and 3) feature diversity regularization to prevent representation collapse in texture-sparse surgical scenes. To enable large-scale pretraining, we curate UniSurg-15M, the largest surgical video dataset to date, comprising 3,658 hours of video from 50 sources across 13 anatomical regions. Extensive experiments across 17 benchmarks demonstrate that UniSurg significantly outperforms state-of-the-art methods on surgical workflow recognition (+14.6% F1 on EgoSurgery, +10.3% on PitVis), action triplet recognition (39.54% mAP-IVT on CholecT50), skill assessment, polyp segmentation, and depth estimation. These results establish UniSurg as a new standard for universal, motion-oriented surgical video understanding.
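The abstract does not give the exact form of the feature diversity regularizer. One common way to discourage representation collapse (all tokens mapping to nearly the same embedding, a real risk in texture-sparse scenes) is a variance hinge penalty in the style of VICReg; the sketch below is an illustrative assumption, not the paper's actual loss, and the function name and threshold are hypothetical:

```python
import numpy as np

def diversity_regularizer(features: np.ndarray,
                          target_std: float = 1.0,
                          eps: float = 1e-4) -> float:
    """Hinge penalty on per-dimension feature standard deviation.

    features: (batch, dim) array of token embeddings.
    Dimensions whose std across the batch falls below `target_std`
    contribute to the loss, pushing embeddings to stay spread out.
    """
    std = np.sqrt(features.var(axis=0) + eps)            # (dim,) per-dimension std
    return float(np.mean(np.maximum(0.0, target_std - std)))

# Fully collapsed features are penalized far more than well-spread ones.
collapsed = np.ones((128, 16))                            # every token identical
spread = np.random.default_rng(0).normal(size=(128, 16))  # unit-variance features
print(diversity_regularizer(collapsed) > diversity_regularizer(spread))  # True
```

In practice such a term would be added to the latent prediction loss with a small weight, so it only activates when the encoder starts producing degenerate, low-variance features.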