ECG-IMN: Interpretable Mesomorphic Neural Networks for 12-Lead Electrocardiogram Interpretation

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited clinical adoption of deep learning models in electrocardiogram (ECG) diagnosis due to their lack of interpretability. To this end, the authors propose ECG-IMN, an interpretable model based on a hypernetwork architecture that dynamically generates sample-specific linear weights via a deep convolutional backbone and produces high-resolution, instance-level feature attribution maps through a transitional decoder. By decoupling parameter generation from prediction, the method provides mathematically transparent and precise “white-box” decision rationales in an intrinsic manner. Evaluated on the PTB-XL dataset, ECG-IMN achieves diagnostic performance comparable to black-box models in terms of AUROC while generating stable, faithful explanations that accurately localize key pathological evidence, such as ST-segment elevation.

📝 Abstract
Deep learning has achieved expert-level performance in automated electrocardiogram (ECG) diagnosis, yet the “black-box” nature of these models hinders their clinical deployment. Trust in medical AI requires not just high accuracy but also transparency regarding the specific physiological features driving predictions. Existing explainability methods for ECGs typically rely on post-hoc approximations (e.g., Grad-CAM and SHAP), which can be unstable, computationally expensive, and unfaithful to the model's actual decision-making process. In this work, we propose the ECG-IMN, an Interpretable Mesomorphic Neural Network tailored for high-resolution 12-lead ECG classification. Unlike standard classifiers, the ECG-IMN functions as a hypernetwork: a deep convolutional backbone generates the parameters of a strictly linear model specific to each input sample. This architecture enforces intrinsic interpretability, as the decision logic is mathematically transparent and the generated weights (W) serve as exact, high-resolution feature attribution maps. We introduce a transition decoder that effectively maps latent features to sample-wise weights, enabling precise localization of pathological evidence (e.g., ST-elevation, T-wave inversion) in both time and lead dimensions. We evaluate our approach on the PTB-XL dataset for classification tasks, demonstrating that the ECG-IMN achieves competitive predictive performance (AUROC comparable to black-box baselines) while providing faithful, instance-specific explanations. By explicitly decoupling parameter generation from prediction execution, our framework bridges the gap between deep learning capability and clinical trustworthiness, offering a principled path toward “white-box” cardiac diagnostics.
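The core mechanism described in the abstract — a backbone that emits sample-specific linear weights, whose product with the input is the exact attribution map — can be illustrated with a minimal numpy sketch. Everything below (the `hypernetwork` function, latent dimension, and random projections) is a hypothetical stand-in for the paper's CNN backbone and transition decoder, shown only to make the "prediction is linear given W" property concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 12-lead ECG of shape (leads, timesteps); real PTB-XL records are 12 x 5000 at 500 Hz.
n_leads, n_steps, n_classes, latent_dim = 12, 1000, 5, 16
x = rng.standard_normal((n_leads, n_steps))

# Hypothetical parameters standing in for the learned backbone and transition decoder.
proj = rng.standard_normal((n_steps, latent_dim)) * 0.1
decoder = rng.standard_normal((latent_dim, n_classes, n_steps)) * 0.1

def hypernetwork(x):
    """Generate sample-specific linear weights W and biases b from the input itself."""
    z = np.tanh(x @ proj)                     # crude "backbone": per-lead latent features
    W = np.einsum("lk,kct->clt", z, decoder)  # "transition decoder": latent -> per-class weight maps
    b = W.mean(axis=(1, 2))                   # toy bias head, one bias per class
    return W, b

W, b = hypernetwork(x)                        # W: (classes, leads, timesteps)

# Given W, the prediction is strictly linear in x.
logits = np.einsum("clt,lt->c", W, x) + b

# W * x is an exact, high-resolution attribution map: each (lead, time) entry is that
# point's additive contribution to the class logit, so attributions sum back to the logit.
attribution = W * x                           # (classes, leads, timesteps)
assert np.allclose(attribution.sum(axis=(1, 2)) + b, logits)
```

Because the weights are regenerated per sample, the model remains nonlinear overall, yet every individual prediction decomposes exactly over leads and timesteps — which is what makes the explanation intrinsic rather than a post-hoc approximation like Grad-CAM or SHAP.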
Problem

Research questions and friction points this paper is trying to address.

ECG interpretation
model interpretability
black-box models
clinical trust
explainable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpretable AI
Hypernetwork
ECG interpretation
Intrinsic interpretability
Feature attribution