Explaining Digital Pathology Models via Clustering Activations

📅 2025-11-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In digital pathology, the limited interpretability of CNNs hinders their clinical adoption. To address this, we propose a global interpretability method based on unsupervised clustering of convolutional activation features. Unlike conventional saliency maps—which provide only local, pixel-level explanations—our approach structurally groups deep-layer activation patterns to visualize and semantically decode the model’s holistic decision logic. Evaluated on prostate cancer detection, the method identifies discriminative activation clusters strongly associated with clinically relevant histopathological patterns (e.g., glandular architectural abnormalities), thereby enhancing decision transparency and clinical trustworthiness. This work introduces, for the first time, systematic activation clustering as a principled framework for interpreting digital pathology CNNs, establishing a novel paradigm for the trustworthy deployment of high-stakes medical AI systems.

📝 Abstract
We present a clustering-based explainability technique for digital pathology models based on convolutional neural networks. Unlike commonly used methods based on saliency maps, such as occlusion, GradCAM, or relevance propagation, which highlight regions that contribute the most to the prediction for a single slide, our method shows the global behaviour of the model under consideration, while also providing more fine-grained information. The resulting clusters can be visualised not only to understand the model, but also to increase confidence in its operation, leading to faster adoption in clinical practice. We also evaluate the performance of our technique on an existing model for detecting prostate cancer, demonstrating its usefulness.
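The core idea of the abstract, clustering deep-layer activation features without supervision, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer choice, global average pooling, and number of clusters `k = 5` are assumptions, and random data stands in for real CNN feature maps.

```python
# Hedged sketch: cluster pooled CNN activation vectors with k-means to
# surface global activation patterns. Random data stands in for real
# deep-layer feature maps; pooling and k are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for deep-layer activations of N image patches:
# each patch yields a (C, H, W) feature map from some conv layer.
N, C, H, W = 200, 64, 7, 7
activations = rng.normal(size=(N, C, H, W))

# Global average pooling turns each feature map into a C-dim descriptor.
descriptors = activations.mean(axis=(2, 3))          # shape (N, C)

# Unsupervised clustering of the descriptors; k is a hyperparameter.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(descriptors)             # one label per patch

# Patches sharing a label can then be visualised side by side to inspect
# which histological pattern each cluster responds to.
print(labels.shape)
```

In a real pipeline the descriptors would come from a trained pathology CNN (e.g. via a forward hook on a late convolutional layer), and each cluster would be inspected by displaying its member patches to a pathologist.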
Problem

Research questions and friction points this paper is trying to address.

Developing clustering-based explainability for digital pathology neural networks
Providing global model behavior insights beyond single-slide saliency methods
Enhancing clinical adoption confidence through interpretable prostate cancer detection clusters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Clustering activations to explain digital pathology models
Revealing global model behavior beyond single-slide saliency
Visualizing clusters to boost clinical adoption confidence
Adam Bajger
Faculty of Informatics, Masaryk University, Brno, Czech Republic
Jan Obdržálek
Faculty of Informatics, Masaryk University, Brno, Czech Republic
Vojtěch Kůr
Undergraduate student researcher
R
Rudolf Nenutil
Masaryk Memorial Cancer Institute, Brno, Czech Republic
Petr Holub
Institute of Computer Science, Masaryk University, Brno, Czech Republic
Vít Musil
Assistant Professor, Masaryk University, Brno, Czechia
functional analysis · machine learning · optimization
Tomáš Brázdil
Faculty of Informatics, Masaryk University, Brno, Czech Republic