Efficient Frame Extraction: A Novel Approach Through Frame Similarity and Surgical Tool Tracking for Video Segmentation

πŸ“… 2025-01-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the inefficiency and automation challenges in AI-based analysis of lengthy, high-volume surgical videos, this paper proposes Kinematics Adaptive Frame Recognition (KAFR). KAFR integrates instrument-kinematics similarity modeling with an adaptive frame-sampling scheme, overcoming the limitations of conventional uniform sampling or unsupervised compression to identify semantically relevant frames. Technically, it combines YOLOv8-based instrument detection, modeling of tool position and velocity differences, and the X3D spatiotemporal convolutional network for segmentation and classification. Evaluated on two multi-year clinical datasets from two referral centers — Gastrojejunostomy (GJ, 2017–2021) and Pancreaticojejunostomy (PJ, 2011–2022) — KAFR achieves a tenfold frame reduction while improving accuracy by 4.32% (from 0.749 to 0.7814), enhancing both computational efficiency and analytical accuracy.

πŸ“ Abstract
Interest in leveraging Artificial Intelligence (AI) to automate the analysis of surgical procedures has surged in recent years. One of the primary tools for recording surgical procedures and conducting subsequent analyses, such as performance assessment, is video. However, these operative videos tend to be notably lengthy compared to those in other fields, spanning from thirty minutes to several hours, which makes it challenging for AI models to learn from them effectively. Despite this challenge, the foreseeable increase in the volume of such videos in the near future necessitates the development of innovative techniques to tackle this issue. In this article, we propose a novel technique called Kinematics Adaptive Frame Recognition (KAFR) that efficiently eliminates redundant frames to reduce dataset size and computation time while retaining informative frames to improve accuracy. Specifically, we compute the similarity between consecutive frames by tracking the movement of surgical tools. Our approach follows three steps: i) Tracking phase: a YOLOv8 model detects the tools present in the scene; ii) Similarity phase: similarities between consecutive frames are computed by estimating the variation in the spatial positions and velocities of the tools; iii) Classification phase: an X3D CNN is trained to classify the segments. We evaluate the effectiveness of our approach on datasets obtained through retrospective reviews of cases at two referral centers. The Gastrojejunostomy (GJ) dataset covers procedures performed from 2017 to 2021, while the Pancreaticojejunostomy (PJ) dataset spans 2011 to 2022 at the same centers. By adaptively selecting relevant frames, we achieve a tenfold reduction in the number of frames while improving accuracy by 4.32% (from 0.749 to 0.7814).
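The similarity phase described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function names, the threshold value, and the choice of bounding-box centroids as tool positions are all assumptions. It scores consecutive frames by how much the detected tools' positions and velocities change, and keeps a frame only when that kinematic change is large enough.

```python
# Hypothetical sketch of KAFR's similarity phase (assumed form, not the paper's code).
# Each frame's detections map a tool id to a bounding box (x1, y1, x2, y2),
# as a YOLOv8-style detector might produce.
import math


def tool_centroids(detections):
    """Map each tool id to the centroid of its bounding box."""
    return {tid: ((x1 + x2) / 2, (y1 + y2) / 2)
            for tid, (x1, y1, x2, y2) in detections.items()}


def kinematic_change(prev, curr, prev_vel):
    """Mean positional shift plus mean velocity change for tools in both frames."""
    shared = prev.keys() & curr.keys()
    if not shared:
        return float("inf"), {}  # tool set changed entirely: always keep the frame
    pos_diff, vel_diff, new_vel = 0.0, 0.0, {}
    for tid in shared:
        dx = curr[tid][0] - prev[tid][0]
        dy = curr[tid][1] - prev[tid][1]
        v = math.hypot(dx, dy)          # per-frame displacement ~ velocity
        new_vel[tid] = v
        pos_diff += v
        vel_diff += abs(v - prev_vel.get(tid, 0.0))
    n = len(shared)
    return pos_diff / n + vel_diff / n, new_vel


def select_keyframes(frames_detections, threshold=5.0):
    """Return indices of frames whose tool kinematics changed enough to keep."""
    keep = [0]
    prev = tool_centroids(frames_detections[0])
    vel = {}
    for i, det in enumerate(frames_detections[1:], start=1):
        curr = tool_centroids(det)
        change, vel = kinematic_change(prev, curr, vel)
        if change >= threshold:
            keep.append(i)
        prev = curr
    return keep
```

A static tool yields near-zero change and its frames are dropped, while a sudden move keeps the frame; the threshold controls the compression ratio (the paper reports a tenfold reduction).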
Problem

Research questions and friction points this paper is trying to address.

Surgical Video Analysis
Data Reduction
AI Efficiency

Innovation

Methods, ideas, or system contributions that make the work stand out.

Kinematic Adaptive Frame Recognition
Surgical Tool Dynamics
AI Efficiency Enhancement
Huu Phong Nguyen
Department of Surgery, University of Texas Southwestern Medical Center, Texas, USA

Shekhar Madhav Khairnar
Department of Surgery, University of Texas Southwestern Medical Center, Texas, USA

Sofia Garces Palacios
Department of Surgery, University of Texas Southwestern Medical Center, Texas, USA

Amr Al-Abbas
Department of Surgery, University of Texas Southwestern Medical Center, Texas, USA

Francisco Antunes
University of Coimbra, Coimbra, Portugal

Bernardete Ribeiro
Professor at Department of Informatics Engineering, CISUC / University of Coimbra
Machine Learning · Pattern Recognition · Responsible AI · Artificial Intelligence

Melissa E. Hogg
NorthShore University HealthSystem, Evanston, IL, USA

A. Zureikat
University of Pittsburgh Medical Center, Pittsburgh, PA, USA

Patricio M. Polanco
Department of Surgery, University of Texas Southwestern Medical Center, Texas, USA

Herbert J. Zeh
Department of Surgery, University of Texas Southwestern Medical Center, Texas, USA

Ganesh Sankaranarayanan
Associate Professor, Department of Surgery, The University of Texas Southwestern Medical Center
Haptics · Telerobotics · Telesurgery · Surgical Simulation · Deep Learning