A Knowledge Distillation-Based Approach to Enhance Transparency of Classifier Models

📅 2025-02-21
🤖 AI Summary
To address the dual challenges of limited interpretability and high computational cost in AI models for medical image analysis, this paper proposes a lightweight, interpretable framework that synergistically integrates knowledge distillation with hierarchical feature visualization. A compact student network is trained under the guidance of a CNN-based teacher model, while its multi-level feature maps are simultaneously leveraged to generate intuitive, layered decision explanations. This work pioneers the deep integration of knowledge distillation and interpretability modeling—achieving model compression (reduced depth) without sacrificing clinical transparency. Evaluated on three public benchmarks—brain tumor, retinal disease, and Alzheimer’s disease datasets—the method maintains high accuracy (mean accuracy ≥92.5%), reduces explanation generation time by 47%, and effectively unifies model efficiency with diagnostic interpretability.

📝 Abstract
With the rapid development of artificial intelligence (AI), especially in the medical field, the need for explainability has grown. In medical image analysis, a high degree of transparency and model interpretability helps clinicians better understand and trust the decision-making process of AI models. In this study, we propose a Knowledge Distillation (KD)-based approach that aims to enhance the transparency of AI models in medical image analysis. The first step is to train a traditional CNN as a teacher model; KD is then used to simplify the CNN architecture, retaining most of the features learned from the dataset while reducing the number of network layers. The feature maps of the student model are further used for hierarchical analysis to identify key features and decision-making processes, yielding intuitive visual explanations. We evaluated our method on three public medical datasets (brain tumor, eye disease, and Alzheimer's disease). The results show that even with fewer layers, our model achieves strong performance on the test sets and reduces the time required for interpretability analysis.
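The paper does not publish its training code, so the teacher-to-student transfer described in the abstract can only be sketched in the standard Hinton-style form: the student is trained against the teacher's temperature-softened output distribution plus the hard labels. The temperature `T` and mixing weight `alpha` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of a soft-target KL term and hard-label cross-entropy.

    T and alpha are illustrative hyperparameters, not taken from the paper.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 so its gradient magnitude
    # stays comparable to the hard-label term (as in Hinton et al.)
    soft = np.sum(
        p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)),
        axis=-1,
    ) * T**2
    hard = -np.log(
        softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12
    )
    return float(np.mean(alpha * soft + (1 - alpha) * hard))

# toy batch: 2 samples, 3 classes
student = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
teacher = np.array([[3.0, 0.2, 0.0], [0.1, 2.5, 0.2]])
loss = distillation_loss(student, teacher, labels=np.array([0, 1]))
```

In practice the same loss would be computed per mini-batch inside a deep-learning framework; the NumPy form above only shows the arithmetic.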
Problem

Research questions and friction points this paper is trying to address.

Enhance AI model transparency in medical image analysis
Simplify CNN architecture using Knowledge Distillation
Provide intuitive visual explanations for decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Distillation simplifies CNN architecture
Feature map enables hierarchical analysis
Model reduces interpretability analysis time
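The hierarchical feature-map analysis listed above is not specified in code by the paper; one plausible minimal reading is to collapse each student layer's activations into a normalized saliency heatmap at the input resolution, giving one "layer" of explanation per network depth. The channel-mean aggregation and nearest-neighbor upsampling below are my assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def layer_heatmap(feature_map, out_size):
    """Collapse a (C, H, W) feature map into a 2D saliency heatmap.

    Takes the channel-wise mean of absolute activations, normalizes it
    to [0, 1], and nearest-neighbor upsamples to the input resolution.
    Assumes out_size is an integer multiple of the feature-map size.
    """
    hm = np.abs(feature_map).mean(axis=0)                  # (H, W)
    hm = (hm - hm.min()) / (hm.max() - hm.min() + 1e-12)   # scale to [0, 1]
    h, w = hm.shape
    ry, rx = out_size[0] // h, out_size[1] // w
    return np.kron(hm, np.ones((ry, rx)))                  # upsample

# hierarchical view: one heatmap per student layer (shallow -> deep),
# using random tensors as stand-ins for real activations
rng = np.random.default_rng(0)
activations = [
    rng.standard_normal((8, 32, 32)),   # early layer: fine spatial detail
    rng.standard_normal((16, 8, 8)),    # deep layer: coarse semantics
]
heatmaps = [layer_heatmap(a, (64, 64)) for a in activations]
```

Overlaying these per-layer heatmaps on the input image would produce the layered visual explanations the summary describes, with shallow layers highlighting edges and textures and deep layers highlighting class-discriminative regions.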