Multi-Stage Boundary-Aware Transformer Network for Action Segmentation in Untrimmed Surgical Videos

📅 2025-04-26
🤖 AI Summary
To address inaccurate surgical action segmentation in untrimmed surgical videos—caused by ambiguous action boundaries, highly variable action durations, and subtle transitions—this paper proposes a multi-stage boundary-aware Transformer model. Our key contributions are: (1) a hierarchical sliding-window attention mechanism coupled with multi-stage feature fusion to enhance temporal modeling; (2) a context-aware voting strategy for fine-grained boundary localization, overcoming the limitations of conventional binary boundary detection; and (3) a unified classification-boundary joint loss function that enables collaborative optimization of action recognition and boundary regression. Evaluated on three public surgical datasets, our method achieves state-of-the-art performance in both F1@25% and F1@50% metrics, significantly mitigating over-segmentation and under-segmentation while simultaneously improving boundary localization accuracy and action classification performance.
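The hierarchical sliding-window attention with multi-stage fusion described in contribution (1) can be sketched minimally as below. This is an illustrative reconstruction, not the paper's implementation: the window sizes, single-head attention, and residual fusion are assumptions chosen for clarity.

```python
import numpy as np

def sliding_window_attention(x, window):
    """Local self-attention: each frame attends only to frames within
    `window` steps of it, keeping cost linear in sequence length."""
    T, d = x.shape
    out = np.zeros_like(x)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        scores = x[lo:hi] @ x[t] / np.sqrt(d)   # similarity to frame t
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                # softmax over the window
        out[t] = weights @ x[lo:hi]             # weighted local context
    return out

def hierarchical_stages(x, windows=(4, 8, 16)):
    """Stack stages with growing windows, fusing each stage's output
    with its input via a residual connection (a stand-in for the
    paper's multi-stage feature fusion)."""
    feats = x
    for w in windows:
        feats = feats + sliding_window_attention(feats, w)
    return feats
```

Growing the window per stage lets early stages model fine local transitions while later stages capture longer action context, which is the intuition behind hierarchical temporal modeling.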

📝 Abstract
Understanding actions within surgical workflows is essential for evaluating post-operative outcomes. However, capturing long sequences of actions performed in surgical settings poses challenges, as individual surgeons have their unique approaches shaped by their expertise, leading to significant variability. To tackle this complex problem, we focused on segmentation with precise boundaries, a demanding task due to the inherent variability in action durations and the subtle transitions often observed in untrimmed videos. These transitions, marked by ambiguous starting and ending points, complicate the segmentation process. Traditional models, such as MS-TCN, which depend on large receptive fields, frequently face challenges of over-segmentation (resulting in fragmented segments) or under-segmentation (merging distinct actions). Both of these issues negatively impact the quality of segmentation. To overcome these challenges, we present the Multi-Stage Boundary-Aware Transformer Network (MSBATN) with hierarchical sliding window attention, designed to enhance action segmentation. Our proposed approach incorporates a novel unified loss function that treats action classification and boundary detection as distinct yet interdependent tasks. Unlike traditional binary boundary detection methods, our boundary voting mechanism accurately identifies start and end points by leveraging contextual information. Extensive experiments using three challenging surgical datasets demonstrate the superior performance of the proposed method, achieving state-of-the-art results in F1 scores at thresholds of 25% and 50%, while also delivering comparable performance in other metrics.
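The boundary voting mechanism mentioned in the abstract can be approximated by the sketch below: instead of classifying each frame as boundary/non-boundary, every frame casts a vote for where it believes the nearest boundary lies, and peaks in the accumulated votes become boundary candidates. The offset-regression framing and the `min_votes` threshold are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def vote_boundaries(offsets, T, min_votes=3):
    """Each frame t votes for position t + offsets[t], its predicted
    displacement to the nearest action boundary.  Votes accumulate in
    a histogram; local peaks with at least `min_votes` support become
    boundary candidates, a softer criterion than per-frame binary
    boundary classification."""
    hist = np.zeros(T, dtype=int)
    for t, off in enumerate(offsets):
        pos = int(round(t + off))
        if 0 <= pos < T:
            hist[pos] += 1
    peaks = [p for p in range(1, T - 1)
             if hist[p] >= min_votes
             and hist[p] >= hist[p - 1] and hist[p] >= hist[p + 1]]
    return hist, peaks
```

Because many surrounding frames contribute to each boundary estimate, a single noisy frame cannot create or erase a boundary on its own, which is how contextual voting counters over- and under-segmentation.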
Problem

Research questions and friction points this paper is trying to address.

Segmenting actions in untrimmed surgical videos with precise boundaries
Addressing over-segmentation and under-segmentation in action recognition
Improving boundary detection in variable-duration surgical actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Stage Boundary-Aware Transformer Network for segmentation
Hierarchical sliding window attention enhances action segmentation
Unified loss function integrates classification and boundary detection
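The unified loss listed above treats classification and boundary detection as distinct but jointly optimized terms. A minimal sketch of such a joint objective is below; the cross-entropy/MSE pairing and the balancing weight `lam` are hypothetical choices, not the paper's exact formulation.

```python
import numpy as np

def joint_loss(cls_logits, cls_labels, boundary_scores, boundary_targets,
               lam=0.5):
    """Joint objective: frame-wise softmax cross-entropy for action
    classification plus an MSE term on boundary confidence, weighted
    by the (assumed) balancing factor `lam`."""
    # numerically stable log-softmax over classes
    z = cls_logits - cls_logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(cls_labels)), cls_labels].mean()
    # boundary regression term
    mse = ((boundary_scores - boundary_targets) ** 2).mean()
    return ce + lam * mse
```

Optimizing both terms through shared features is what lets boundary supervision sharpen the classifier's transitions, and vice versa.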
Rezowan Shuvo
Robert Gordon University, Garthdee House, Aberdeen, AB10 7AQ, United Kingdom
M S Mekala
Robert Gordon University, Garthdee House, Aberdeen, AB10 7AQ, United Kingdom
Eyad Elyan
Eyad Elyan, Professor, School of Computing, Robert Gordon University
Machine Vision · Machine Learning · Document Analysis · Condition Monitoring · Data Science