SoftCAM: Making black-box models self-explainable for high-stakes decisions

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Convolutional neural networks (CNNs) suffer from limited trustworthiness in high-stakes domains like healthcare due to their opaque, black-box nature. Method: This paper proposes an intrinsically interpretable CNN architecture that achieves native interpretability through structural redesign, eliminating reliance on post-hoc attribution methods. Specifically, it removes global average pooling and replaces the fully connected classification layer with a convolution-based class-evidence layer, enabling direct generation of spatially faithful class activation maps (CAMs) that form the basis of the model's predictions. Contribution/Results: The architecture maintains classification performance across three medical imaging benchmarks while delivering superior qualitative and quantitative interpretability compared to gradient-based post-hoc methods (e.g., Grad-CAM). These results demonstrate both the feasibility and practical utility of self-explaining CNNs that do not compromise predictive performance.

📝 Abstract
Convolutional neural networks (CNNs) are widely used for high-stakes applications like medicine, often surpassing human performance. However, most explanation methods rely on post-hoc attribution, approximating the decision-making process of already trained black-box models. These methods are often sensitive, unreliable, and fail to reflect true model reasoning, limiting their trustworthiness in critical applications. In this work, we introduce SoftCAM, a straightforward yet effective approach that makes standard CNN architectures inherently interpretable. By removing the global average pooling layer and replacing the fully connected classification layer with a convolution-based class evidence layer, SoftCAM preserves spatial information and produces explicit class activation maps that form the basis of the model's predictions. Evaluated on three medical datasets, SoftCAM maintains classification performance while significantly improving both the qualitative and quantitative explanation compared to existing post-hoc methods. Our results demonstrate that CNNs can be inherently interpretable without compromising performance, advancing the development of self-explainable deep learning for high-stakes decision-making.
Problem

Research questions and friction points this paper is trying to address.

Making CNNs inherently interpretable for high-stakes decisions
Replacing post-hoc attribution with explanations intrinsic to the model's predictions
Preserving spatial information for trustworthy medical applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Removes global average pooling to preserve spatial information
Replaces the fully connected classifier with a convolution-based class-evidence layer
Produces explicit class activation maps as the basis of predictions
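The architectural change described above can be illustrated with a minimal sketch. The paper itself does not publish this exact code; the function name and shapes below are assumptions for illustration. The key idea: in a standard CNN head, global average pooling (GAP) followed by a fully connected layer collapses spatial information before classification. SoftCAM instead applies a 1×1 convolution (a per-pixel linear map) to the feature maps, producing one class-evidence map per class; spatially averaging each map then yields the logits, so the activation map itself is the evidence behind the prediction.

```python
import numpy as np

def softcam_head(features, weights, bias):
    """Hypothetical sketch of a SoftCAM-style classification head.

    features: (C, H, W) feature maps from the CNN backbone
    weights:  (K, C) 1x1-convolution kernels, one row per class
    bias:     (K,) per-class bias

    Returns (cams, logits): cams has shape (K, H, W), logits shape (K,).
    """
    # A 1x1 convolution is a matrix multiply over the channel axis at
    # every spatial location: (K, C) x (C, H, W) -> (K, H, W).
    cams = np.tensordot(weights, features, axes=([1], [0]))
    cams = cams + bias[:, None, None]
    # Spatially averaging each class-evidence map gives the class logits,
    # so the prediction is read directly off the maps.
    logits = cams.mean(axis=(1, 2))
    return cams, logits

# Toy usage: 8 feature channels on a 7x7 grid, 3 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 7, 7))
W = rng.normal(size=(3, 8))
b = np.zeros(3)
cams, logits = softcam_head(feats, W, b)
```

Note that this ordering (1×1 conv, then spatial averaging) is mathematically equivalent to GAP followed by a linear layer with the same weights; the difference is that the intermediate class-evidence maps are now explicit and spatially resolved rather than discarded by pooling.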
Kerol Djoumessi
Hertie Institute for AI in Brain Health, University of Tübingen, Germany; Tübingen AI Center, University of Tübingen, Germany
Philipp Berens
Hertie Institute for AI in Brain Health, University of Tübingen
Computational Neuroscience · Data Science · Machine Learning · Digital Medicine · Medical AI