AI Summary
To address the inefficiency and automation challenges in AI-based analysis of lengthy, high-volume surgical videos, this paper proposes Kinematics Adaptive Frame Recognition (KAFR). KAFR integrates instrument kinematic similarity modeling with an adaptive frame sampling paradigm, overcoming the limitations of conventional uniform sampling or unsupervised compression to precisely identify semantically relevant frames. Technically, it combines YOLOv8-based instrument detection, modeling of inter-frame position and velocity differences, and the X3D spatiotemporal convolutional network for step segmentation and classification. Evaluated on two multi-year, multi-institutional clinical datasets, GJ (2017–2021) and PJ (2011–2022), KAFR achieves a tenfold (10×) reduction in frames while improving segmentation accuracy by 4.32% (from 0.749 to 0.7814), enhancing both computational efficiency and analytical accuracy.
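To make the kinematic-similarity idea concrete, the following is a minimal NumPy sketch of an inter-frame motion score built from position and velocity differences. The cross-frame matching of tools, the weighting of the two terms, and the threshold are illustrative assumptions, not the paper's published formulation.

```python
import numpy as np

def motion_score(prev_pos, curr_pos, prev_vel, dt=1.0 / 30.0):
    """Score how much the tools moved between two consecutive frames.

    prev_pos, curr_pos: (n_tools, 2) tool centroids in pixels, assumed matched
        by tool class across the two frames (an assumption of this sketch).
    prev_vel: (n_tools, 2) velocities from the previous step, in pixels/s.
    Returns the scalar motion score and the updated velocities.
    """
    disp = curr_pos - prev_pos          # change in spatial position
    vel = disp / dt                     # finite-difference velocity
    dvel = vel - prev_vel               # change in velocity
    # Blend mean position and velocity changes; the 0.1 weight is illustrative.
    score = (np.linalg.norm(disp, axis=1).mean()
             + 0.1 * np.linalg.norm(dvel, axis=1).mean())
    return score, vel

# A frame whose score falls below a threshold is treated as redundant and dropped:
score, _ = motion_score(np.array([[100.0, 200.0]]),
                        np.array([[101.0, 200.0]]),
                        np.zeros((1, 2)))
redundant = score < 2.0  # the threshold is a placeholder
```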
Abstract
Interest in leveraging Artificial Intelligence (AI) to automate the analysis of surgical procedures has surged in recent years. Video is one of the primary means of recording surgical procedures and conducting subsequent analyses, such as performance assessment. However, operative videos tend to be notably long compared to videos in other fields, spanning from thirty minutes to several hours, which poses a challenge for AI models to learn from them effectively. Despite this challenge, the foreseeable increase in the volume of such videos in the near future necessitates the development and implementation of innovative techniques to tackle this issue effectively. In this article, we propose a novel technique called Kinematics Adaptive Frame Recognition (KAFR) that can efficiently eliminate redundant frames to reduce dataset size and computation time while retaining useful frames to improve accuracy. Specifically, we compute the similarity between consecutive frames by tracking the movement of surgical tools. Our approach follows these steps: i) Tracking phase: a YOLOv8 model is utilized to detect tools present in the scene; ii) Similarity phase: similarities between consecutive frames are computed by estimating the variation in the spatial positions and velocities of the tools; iii) Classification phase: an X3D CNN is trained to classify the retained frames into surgical steps, producing the segmentation. We evaluate the effectiveness of our approach by analyzing datasets obtained through retrospective reviews of cases at two referral centers. The Gastrojejunostomy (GJ) dataset covers procedures performed between 2017 and 2021, while the Pancreaticojejunostomy (PJ) dataset spans from 2011 to 2022 at the same centers. By adaptively selecting relevant frames, we achieve a tenfold reduction in the number of frames while improving accuracy by 4.32% (from 0.749 to 0.7814).
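The three phases can be sketched end to end roughly as follows. This is an illustrative Python example, not the authors' implementation: the `yolov8n.pt` weights are a stand-in for the paper's trained tool detector, detections are assumed to keep a consistent per-tool ordering across frames, and the numeric thresholds are placeholders.

```python
import cv2
import numpy as np
from ultralytics import YOLO  # tracking phase: YOLOv8 tool detector

detector = YOLO("yolov8n.pt")  # stand-in weights; the paper trains its own detector

def select_keyframes(video_path, pos_thresh=2.0, vel_thresh=30.0, fps=30.0):
    """Return indices of frames kept by an adaptive, kinematics-based sampler."""
    cap = cv2.VideoCapture(video_path)
    kept, prev_pos, prev_vel, idx = [], None, None, 0
    dt = 1.0 / fps
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes = detector(frame, verbose=False)[0].boxes
        pos = boxes.xywh[:, :2].cpu().numpy()  # (n_tools, 2) tool centroids
        if prev_pos is None or pos.shape != prev_pos.shape or len(pos) == 0:
            # First frame, no detections, or the set of visible tools changed:
            # keep the frame and reset the kinematic state (a simplification).
            kept.append(idx)
            prev_pos, prev_vel = pos, np.zeros_like(pos)
        else:
            disp = pos - prev_pos              # position difference
            vel = disp / dt                    # finite-difference velocity
            dvel = vel - prev_vel              # velocity difference
            moved = np.linalg.norm(disp, axis=1).mean() > pos_thresh
            accel = np.linalg.norm(dvel, axis=1).mean() > vel_thresh
            if moved or accel:
                kept.append(idx)               # enough motion: retain the frame
            prev_pos, prev_vel = pos, vel
        idx += 1
    cap.release()
    return kept  # indices of retained frames
```

In such a pipeline, the retained frame indices would then be used to assemble the downsampled clips on which the X3D step classifier is trained.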