MedicalPatchNet: A Patch-Based Self-Explainable AI Architecture for Chest X-ray Classification

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited interpretability of deep neural networks in chest X-ray classification—which hinders clinical trust—this paper proposes an intrinsically self-explanatory architecture. The method partitions input images into non-overlapping patches, independently encodes and classifies each patch using an EfficientNet-style backbone, and aggregates patch-level predictions. This design enables natural, post-hoc-free attribution of decisions to anatomical regions, mitigates shortcut learning, and enhances lesion localization. On CheXpert, the model achieves an AUROC of 0.907—comparable to EfficientNet-B0’s 0.908—while attaining a lesion localization hit rate of 0.485 on CheXlocalize, significantly surpassing the baseline (0.376). The core contribution is the unified realization of end-to-end interpretability and high performance, establishing a novel paradigm for clinically trustworthy AI.

📝 Abstract
Deep neural networks excel in radiological image classification but frequently suffer from poor interpretability, limiting clinical acceptance. We present MedicalPatchNet, an inherently self-explainable architecture for chest X-ray classification that transparently attributes decisions to distinct image regions. MedicalPatchNet splits images into non-overlapping patches, independently classifies each patch, and aggregates predictions, enabling intuitive visualization of each patch's diagnostic contribution without post-hoc techniques. Trained on the CheXpert dataset (223,414 images), MedicalPatchNet matches the classification performance of EfficientNet-B0 (AUROC 0.907 vs. 0.908) while substantially improving interpretability, achieving higher pathology localization accuracy (mean hit-rate 0.485 vs. 0.376 with Grad-CAM) on the CheXlocalize dataset. By providing explicit, reliable explanations accessible even to non-AI experts, MedicalPatchNet mitigates risks associated with shortcut learning, thus improving clinical trust. Our model is publicly available with reproducible training and inference scripts and contributes to safer, explainable AI-assisted diagnostics across medical imaging domains. We make the code publicly available: https://github.com/TruhnLab/MedicalPatchNet
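The pipeline described in the abstract (split into non-overlapping patches → classify each patch independently → aggregate to an image-level prediction) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `patch_logit_fn` is a hypothetical stand-in for the EfficientNet-style patch encoder, and mean aggregation over per-patch sigmoid probabilities is an assumption about the aggregation rule. The per-patch probabilities double as the attribution map, which is what makes the design self-explanatory without post-hoc techniques.

```python
import numpy as np

def split_into_patches(image, patch_size):
    """Split a 2D image (H, W) into non-overlapping square patches."""
    H, W = image.shape
    patches = [
        image[i:i + patch_size, j:j + patch_size]
        for i in range(0, H, patch_size)
        for j in range(0, W, patch_size)
    ]
    return np.stack(patches)  # (num_patches, patch_size, patch_size)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def patchnet_predict(image, patch_logit_fn, patch_size):
    """Classify each patch independently, then aggregate.

    patch_logit_fn maps one patch to a vector of per-class logits
    (here a placeholder for the shared patch encoder/classifier).
    Returns the image-level probabilities and the per-patch
    probabilities, which serve as the attribution map.
    """
    patches = split_into_patches(image, patch_size)
    logits = np.array([patch_logit_fn(p) for p in patches])
    patch_probs = sigmoid(logits)          # (num_patches, num_classes)
    image_probs = patch_probs.mean(axis=0)  # simple mean aggregation (assumed)
    return image_probs, patch_probs

# Toy usage: an 8x8 "image", 4x4 patches, and a dummy single-class scorer.
image = np.arange(64, dtype=float).reshape(8, 8)
dummy_scorer = lambda p: np.array([p.mean() / 100.0])  # hypothetical encoder
score, attribution = patchnet_predict(image, dummy_scorer, patch_size=4)
```

Because each patch is scored in isolation, reshaping `attribution` back onto the patch grid directly shows which anatomical regions drove the decision.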
Problem

Research questions and friction points this paper is trying to address.

Improving interpretability in chest X-ray classification
Providing transparent decision attributions to image regions
Mitigating shortcut learning risks for clinical trust
Innovation

Methods, ideas, or system contributions that make the work stand out.

Patch-based self-explainable AI architecture
Independent patch classification with aggregation
Visualization without post-hoc techniques
Patrick Wienholt
Department of Diagnostic and Interventional Radiology of the University Hospital Aachen, Aachen, Germany
Christiane Kuhl
Department of Diagnostic and Interventional Radiology of the University Hospital Aachen, Aachen, Germany
Jakob Nikolas Kather
Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
Sven Nebelung
Department of Diagnostic and Interventional Radiology, University Hospital Aachen
Advanced MRI Techniques, Functionality Assessment, Biomechanical Imaging, Cartilage, Artificial Intelligence
Daniel Truhn
Professor of Radiology, University Hospital Aachen
Machine Learning, Artificial Intelligence, Computer Vision, Medical Imaging