UniSurgSAM: A Unified Promptable Model for Reliable Surgical Video Segmentation

📅 2026-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses critical limitations of existing surgical video segmentation methods, which are constrained by single-modality prompting and, because target initialization is tightly coupled with tracking, suffer from hallucinated predictions, mask drift, and a lack of failure recovery. To overcome these issues, the authors propose UniSurgSAM, the first unified promptable model supporting visual, textual, and audio prompts, built on a decoupled two-stage framework that separates target initialization from tracking refinement. Key innovations include a presence-aware decoding mechanism to suppress hallucinations, boundary-aware long-term tracking to mitigate mask drift, and an adaptive state-transition strategy enabling closed-loop stage coordination and robust failure recovery. Evaluated on a new benchmark built from four public surgical datasets, the method achieves state-of-the-art real-time performance across all prompt modalities and segmentation granularities, establishing a reliable foundation for computer-assisted surgery.
📝 Abstract
Surgical video segmentation is fundamental to computer-assisted surgery. In practice, surgeons need to dynamically specify targets throughout extended procedures, using heterogeneous cues such as visual selections, textual expressions, or audio instructions. However, existing Promptable Video Object Segmentation (PVOS) methods are typically restricted to a single prompt modality and rely on coupled frameworks that cause optimization interference between target initialization and tracking. Moreover, these methods produce hallucinated predictions when the target is absent and suffer from accumulated mask drift without failure recovery. To address these challenges, we present UniSurgSAM, a unified PVOS model enabling reliable surgical video segmentation through visual, textual, or audio prompts. Specifically, UniSurgSAM employs a decoupled two-stage framework that independently optimizes initialization and tracking to resolve the optimization interference. Within this framework, we introduce three key designs for reliability: presence-aware decoding that models target absence to suppress hallucinations; boundary-aware long-term tracking that prevents mask drift over extended sequences; and adaptive state transition that closes the loop between stages for failure recovery. Furthermore, we establish a multi-modal and multi-granular benchmark from four public surgical datasets with precise instance-level masklets. Extensive experiments demonstrate that UniSurgSAM achieves state-of-the-art performance in real time across all prompt modalities and granularities, providing a practical foundation for computer-assisted surgery. Code and datasets will be available at https://jinlab-imvr.github.io/UniSurgSAM.
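The abstract's presence-aware decoding models whether the target exists in a frame before emitting a mask. The paper's implementation is not shown here; a minimal sketch of the idea, assuming the decoder outputs per-frame mask logits plus a scalar presence logit and using a hypothetical threshold `tau`, could look like:

```python
# Illustrative sketch of presence-aware decoding (NOT the paper's code).
# Assumption: the decoder emits a mask logit map and a scalar presence
# logit per frame; masks are suppressed when the target is judged absent,
# which avoids hallucinated predictions on target-free frames.
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def presence_aware_decode(mask_logits, presence_logit, tau=0.5):
    """Return (binary mask, presence probability); empty mask if absent."""
    p_present = sigmoid(presence_logit)
    if p_present < tau:
        # Target judged absent: emit an empty mask instead of hallucinating.
        return [[0] * len(row) for row in mask_logits], p_present
    # Target judged present: threshold the logits into a binary mask.
    return [[1 if v > 0 else 0 for v in row] for row in mask_logits], p_present
```

For example, `presence_aware_decode([[2.0, -1.0], [0.5, 3.0]], presence_logit=-4.0)` yields an all-zero mask because the presence probability (about 0.02) falls below the threshold, whereas `presence_logit=4.0` lets the thresholded mask through.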
Problem

Research questions and friction points this paper is trying to address.

Promptable Video Object Segmentation
Surgical Video Segmentation
Mask Drift
Hallucination
Multi-modal Prompting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Promptable Video Object Segmentation
Decoupled Framework
Presence-aware Decoding
Boundary-aware Tracking
Multi-modal Prompting
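The adaptive state transition listed above closes the loop between the initialization and tracking stages so that a tracking failure triggers re-initialization. As a toy sketch only, assuming a scalar per-frame tracker confidence and a hypothetical threshold `reinit_below` (neither is specified by the paper), the stage schedule could be planned like this:

```python
# Toy sketch of closed-loop stage coordination (NOT the paper's code).
# Assumption: each frame yields a scalar tracking confidence; dropping
# below `reinit_below` routes the next frame back to the initialization
# stage, giving the pipeline a failure-recovery path.


def plan_stages(confidences, reinit_below=0.3):
    """Map per-frame confidences to the stage ("init" or "track") run on each frame."""
    stages = []
    stage = "init"  # the first frame always runs target initialization
    for conf in confidences:
        stages.append(stage)
        if stage == "init":
            stage = "track"  # hand off to the tracking stage
        elif conf < reinit_below:
            stage = "init"  # failure detected: re-initialize on the next frame
    return stages
```

For instance, `plan_stages([0.9, 0.8, 0.1, 0.9])` returns `["init", "track", "track", "init"]`: the confidence collapse on the third frame sends the fourth frame back through initialization.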