Surgical Video Understanding with Label Interpolation

📅 2025-09-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In robotic-assisted surgery (RAS) video understanding, a spatio-temporal supervision imbalance arises due to dense long-term annotations (e.g., surgical phases) contrasted with sparse short-term annotations (e.g., instrument segmentation, actions). To address this, we propose a flow-driven multi-task learning framework. Our method leverages optical flow estimation to temporally interpolate pixel-level segmentation labels from sparse keyframes, enabling reliable full-frame label propagation. It jointly models surgical phase recognition, instrument semantic segmentation, and action detection. With only a few keyframe annotations, the framework significantly improves segmentation and action recognition accuracy, achieving state-of-the-art performance across multiple RAS benchmarks while enhancing model generalizability and training efficiency. The core innovation lies in deeply integrating optical-flow-guided label interpolation into the multi-task architecture, effectively alleviating the spatio-temporal annotation imbalance bottleneck.

Technology Category

Application Category

📝 Abstract
Robot-assisted surgery (RAS) has become a critical paradigm in modern surgery, promoting patient recovery and reducing the burden on surgeons through minimally invasive approaches. To fully realize its potential, however, a precise understanding of the visual data generated during surgical procedures is essential. Previous studies have predominantly focused on single-task approaches, but real surgical scenes involve complex temporal dynamics and diverse instrument interactions that limit comprehensive understanding. Moreover, the effective application of multi-task learning (MTL) requires sufficient pixel-level segmentation data, which are difficult to obtain due to the high cost and expertise required for annotation. In particular, long-term annotations such as phases and steps are available for every frame, whereas short-term annotations such as surgical instrument segmentation and action detection are provided only for key frames, resulting in a significant temporal-spatial imbalance. To address these challenges, we propose a novel framework that combines optical flow-based segmentation label interpolation with multi-task learning. optical flow estimated from annotated key frames is used to propagate labels to adjacent unlabeled frames, thereby enriching sparse spatial supervision and balancing temporal and spatial information for training. This integration improves both the accuracy and efficiency of surgical scene understanding and, in turn, enhances the utility of RAS.
Problem

Research questions and friction points this paper is trying to address.

Surgical video understanding faces complex temporal dynamics and instrument interactions
Multi-task learning requires pixel-level segmentation data that is scarce and costly
Temporal-spatial imbalance exists between long-term and short-term surgical annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optical flow propagates labels to adjacent frames
Multi-task learning balances temporal and spatial information
Label interpolation enriches sparse spatial supervision
🔎 Similar Papers
No similar papers found.
G
Garam Kim
Korea Institute of Science and Technology, Seoul, South Korea
T
Tae Kyeong Jeong
Korea Institute of Science and Technology, Seoul, South Korea
Juyoun Park
Juyoun Park
Senior Researcher at Korea Institute of Science and Technology (KIST)
Machine LearningArtificial IntelligenceRobotics