Boosting Masked ECG-Text Auto-Encoders as Discriminative Learners

📅 2024-10-03
📈 Citations: 2
✨ Influential: 0
🤖 AI Summary
To address the challenges of strong modality heterogeneity and scarce annotated data in cross-modal understanding of ECG signals and clinical text, this paper proposes D-BETA, a contrastive representation-learning framework that integrates generative masked reconstruction with boosted discriminative learning. It introduces a negative-sampling strategy guided by cross-modal alignment and a dual-modality cooperative masking mechanism. By unifying contrastive masked auto-encoding, multimodal masked modeling, and specialized loss functions, D-BETA learns robust representations under few-shot and zero-shot settings. Evaluated via linear probing on five public datasets using only 1% of the labeled data, D-BETA achieves an average AUC improvement of 15% over state-of-the-art methods; under zero-shot evaluation, it yields a 2% AUC gain. These results mark a significant advance in cross-modal ECG-text representation learning with limited supervision.
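The combined objective described above, a contrastive (discriminative) term over paired ECG and text embeddings plus a masked-reconstruction (generative) term, can be sketched as follows. This is a minimal NumPy illustration, not the authors' released code: the function names, the symmetric InfoNCE formulation, the temperature, and the weighting factor `lam` are all assumptions for exposition.

```python
import numpy as np

def info_nce(ecg_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over paired ECG/text embeddings (row i of each
    matrix is a matched pair). Hypothetical stand-in for the contrastive term."""
    ecg = ecg_emb / np.linalg.norm(ecg_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = ecg @ txt.T / temperature            # (B, B) similarity matrix
    labels = np.arange(len(logits))               # positives on the diagonal

    def xent(l):                                  # row-wise cross-entropy
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # Average over ECG->text and text->ECG directions.
    return 0.5 * (xent(logits) + xent(logits.T))

def masked_recon_loss(signal, recon, mask):
    """MSE computed only on masked positions, as in masked auto-encoding."""
    return ((signal - recon) ** 2 * mask).sum() / mask.sum()

def total_loss(ecg_emb, txt_emb, signal, recon, mask, lam=1.0):
    """Generative + discriminative objective; lam is a hypothetical weight."""
    return info_nce(ecg_emb, txt_emb) + lam * masked_recon_loss(signal, recon, mask)
```

In practice the embeddings and reconstructions would come from the two modality encoders; here they are plain arrays so the loss arithmetic can be inspected in isolation.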

πŸ“ Abstract
The accurate interpretation of Electrocardiogram (ECG) signals is pivotal for diagnosing cardiovascular diseases. Integrating ECG signals with accompanying textual reports further holds immense potential to enhance clinical diagnostics by combining physiological data and qualitative insights. However, this integration faces significant challenges due to inherent modality disparities and the scarcity of labeled data for robust cross-modal learning. To address these obstacles, we propose D-BETA, a novel framework that pre-trains ECG and text data using a contrastive masked auto-encoder architecture. D-BETA uniquely combines the strengths of generative modeling with boosted discriminative capabilities to achieve robust cross-modal representations. This is accomplished through masked modality modeling, specialized loss functions, and an improved negative sampling strategy tailored for cross-modal alignment. Extensive experiments on five public datasets across diverse downstream tasks demonstrate that D-BETA significantly outperforms existing methods, achieving an average AUC improvement of 15% in linear probing with only one percent of the training data, and a 2% AUC gain in zero-shot evaluation without any training data, over state-of-the-art models. These results highlight the effectiveness of D-BETA, underscoring its potential to advance automated clinical diagnostics through multi-modal representations. Our sample code and checkpoint are made available at https://github.com/manhph2211/D-BETA.
Problem

Research questions and friction points this paper is trying to address.

Integrating ECG signals with textual reports for diagnostics
Overcoming modality disparities and scarce labeled data
Enhancing cross-modal learning with masked auto-encoder
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive masked auto-encoder for ECG-text pre-training
Discriminative capabilities boosted alongside generative reconstruction
Improved negative sampling for cross-modal alignment
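The alignment-guided negative sampling listed above can be illustrated with a small sketch: for each ECG anchor, prefer the text embeddings that are most similar under the current cross-modal alignment (excluding the true positive) as "hard" negatives, rather than sampling uniformly. The function name, the cosine-similarity criterion, and the parameter `k` are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def hard_negatives(ecg_emb, txt_emb, k=2):
    """For each ECG anchor (row), return indices of the k text embeddings,
    excluding its own positive, with the highest cosine similarity.
    These are the hard negatives an alignment-guided sampler would favor."""
    e = ecg_emb / np.linalg.norm(ecg_emb, axis=1, keepdims=True)
    t = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = e @ t.T                       # (B, B) cross-modal similarity
    np.fill_diagonal(sim, -np.inf)      # never sample the matched positive
    return np.argsort(-sim, axis=1)[:, :k]   # hardest negatives first
```

The intuition is that negatives the model already confuses with the positive carry the most discriminative training signal, which is what lets the contrastive term sharpen the cross-modal alignment.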