🤖 AI Summary
Deep learning models for surgical phase recognition suffer from limited interpretability, which hinders clinical trust and model debugging. To address this, we propose SurgX, a novel framework that maps neurons in surgical phase recognition models to semantic concepts for surgical videos. SurgX combines representative sequence selection, a concept set tailored to surgical video data, neuron-concept association, and neuron importance analysis to semantically interpret the functional roles of critical neurons, and it integrates visualization and attribution techniques to make model decisions more transparent. Evaluated on two state-of-the-art surgical phase recognition models, SurgX identifies neurons strongly correlated with specific surgical phases, improving model interpretability and credibility. The framework provides reliable, human-understandable explanations to support intraoperative monitoring, surgical skill assessment, and workflow optimization.
📝 Abstract
Surgical phase recognition plays a crucial role in surgical workflow analysis, enabling applications such as surgical monitoring, skill assessment, and workflow optimization. Despite significant advancements in deep learning-based surgical phase recognition, these models remain inherently opaque, making it difficult to understand how they make decisions. This lack of interpretability hinders trust and makes the models challenging to debug. To address this challenge, we propose SurgX, a novel concept-based explanation framework that enhances the interpretability of surgical phase recognition models by associating neurons with relevant concepts. In this paper, we introduce the process of selecting representative example sequences for neurons, constructing a concept set tailored to the surgical video dataset, associating neurons with concepts, and identifying neurons crucial for predictions. Through extensive experiments on two surgical phase recognition models, we validate our method and analyze the explanations behind predictions. These results highlight the potential of our method for explaining surgical phase recognition. The code is available at https://github.com/ailab-kyunghee/SurgX
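The neuron-concept association step described above can be sketched in a simplified form: given one neuron's mean activation over a set of example sequences and binary annotations indicating which concepts appear in each sequence, rank concepts by how well their presence pattern correlates with the neuron's activations. This is an illustrative approximation only, not the paper's exact method; the function name, the toy data, and the concept labels ("clip applied", "gallbladder dissection") are hypothetical.

```python
import numpy as np

def associate_neuron_with_concepts(activations, concept_labels, concept_names, top_k=3):
    """Rank candidate concepts for one neuron by Pearson correlation.

    activations:    (n_sequences,) mean activation of the neuron per example sequence.
    concept_labels: (n_sequences, n_concepts) binary matrix, 1 if the concept is present.
    Returns the top_k (concept_name, correlation) pairs, highest first.
    """
    # z-score the neuron's activation pattern across sequences
    a = (activations - activations.mean()) / (activations.std() + 1e-8)
    scores = []
    for j, name in enumerate(concept_names):
        c = concept_labels[:, j].astype(float)
        c = (c - c.mean()) / (c.std() + 1e-8)        # z-score the concept indicator
        scores.append((name, float(np.dot(a, c) / len(a))))  # Pearson correlation
    scores.sort(key=lambda t: t[1], reverse=True)
    return scores[:top_k]

# Toy example: 6 example sequences, 2 candidate concepts (hypothetical labels).
acts = np.array([0.9, 0.8, 0.1, 0.2, 0.85, 0.15])
labels = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [1, 0], [0, 1]])
print(associate_neuron_with_concepts(acts, labels,
                                     ["clip applied", "gallbladder dissection"]))
```

In this toy setup the neuron fires on exactly the sequences labelled with the first concept, so "clip applied" comes out with a correlation near 1, mimicking how a phase-selective neuron would be labelled.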