🤖 AI Summary
Noise and artifacts in single-lead ECGs induce feature conflicts across multiple views, undermining the robustness of ventricular arrhythmia classification. Method: We propose an uncertainty-aware dual-view deep learning framework that jointly models 1D temporal features (via a 1D CNN) and 2D image-space features (via a 2D CNN). To address inter-view inconsistency, we introduce Monte Carlo Dropout-based uncertainty estimation into multi-view fusion for the first time, and design a noise-adaptive, conflict-aware gated weighting mechanism that enables trustworthy collaborative decision-making between morphological and spatiotemporal features. Results: Evaluated on two real-world ECG datasets, our method surpasses state-of-the-art approaches, achieving a 2.3% absolute improvement in classification accuracy and demonstrating significantly enhanced robustness against baseline wander and electromyographic noise.
📝 Abstract
We propose a deep neural architecture that performs uncertainty-aware multi-view classification of arrhythmias from ECG. Our method learns two complementary views (1D and 2D) of single-lead ECG signals to capture different types of information. To resolve the conflicts between views caused by noise and artifacts in ECG data, we use an uncertainty-aware fusion technique that yields more reliable final predictions. Our framework consists of three modules: (1) a time-series learning module that extracts morphological features from the ECG; (2) an image-space learning module that extracts spatiotemporal features; and (3) an uncertainty-aware fusion module that combines the information from the two views. Experimental results on two real-world datasets demonstrate that our framework not only outperforms the state of the art on arrhythmia classification but also shows better robustness to the noise and artifacts present in ECG recordings.
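The fusion idea described above can be sketched in miniature. The snippet below is a minimal, illustrative sketch only, not the authors' implementation: it stands in for MC Dropout by drawing several perturbed "stochastic forward passes" per view, estimates each view's predictive mean and variance, and then gates the views with weights inversely proportional to their uncertainty, so a noise-corrupted view contributes less to the fused prediction. All function names, the Gaussian-perturbation stand-in for dropout sampling, and the inverse-variance gate are assumptions made for this sketch.

```python
import random
from statistics import mean, pvariance

def mc_dropout_passes(base_probs, n_passes=50, noise=0.05, rng=None):
    # Stand-in for T stochastic forward passes under MC Dropout:
    # each pass perturbs the view's class probabilities and renormalizes.
    rng = rng or random.Random(0)
    samples = []
    for _ in range(n_passes):
        perturbed = [max(p + rng.gauss(0.0, noise), 1e-6) for p in base_probs]
        total = sum(perturbed)
        samples.append([p / total for p in perturbed])
    return samples

def view_statistics(samples):
    # Mean prediction plus a scalar uncertainty: the per-class
    # variance across passes, averaged over classes.
    n_classes = len(samples[0])
    means = [mean(s[c] for s in samples) for c in range(n_classes)]
    uncertainty = mean(pvariance([s[c] for s in samples]) for c in range(n_classes))
    return means, uncertainty

def gated_fusion(view_stats, eps=1e-8):
    # Conflict-aware gate: weight each view by the inverse of its
    # estimated uncertainty, then fuse the mean predictions.
    weights = [1.0 / (var + eps) for _, var in view_stats]
    z = sum(weights)
    weights = [w / z for w in weights]
    n_classes = len(view_stats[0][0])
    fused = [sum(w * m[c] for w, (m, _) in zip(weights, view_stats))
             for c in range(n_classes)]
    return fused, weights

# Hypothetical example: a clean 1D (temporal) view and a noisier
# 2D (image-space) view over three arrhythmia classes.
clean_view = mc_dropout_passes([0.7, 0.2, 0.1], noise=0.02, rng=random.Random(1))
noisy_view = mc_dropout_passes([0.5, 0.3, 0.2], noise=0.20, rng=random.Random(2))
fused, weights = gated_fusion([view_statistics(clean_view),
                               view_statistics(noisy_view)])
```

In this toy setting the low-uncertainty view receives the larger gate weight, which mirrors the intended behavior of the fusion module: when noise inflates one view's predictive variance, its influence on the joint decision shrinks rather than corrupting the final prediction.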