EnECG: Efficient Ensemble Learning for Electrocardiogram Multi-task Foundation Model

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing ECG multi-task models struggle to capture inter-abnormality correlations, while general-purpose large models lack ECG-specific pretraining and incur prohibitive computational costs under full-parameter fine-tuning. To address these limitations, we propose EnECG, a computationally efficient multi-task foundation-model framework based on Mixture of Experts (MoE) and Low-Rank Adaptation (LoRA). EnECG integrates multiple pre-trained, ECG-specialized foundation models and introduces lightweight adapter modules alongside task-specific output heads, enabling cross-task knowledge fusion without updating backbone parameters. Experiments demonstrate that EnECG fine-tunes fewer than 5% of total parameters (a reduction of over 95%) and significantly lowers GPU memory consumption, while achieving an average performance gain of 2.1% across six clinical ECG tasks, including arrhythmia classification and ST-segment abnormality detection. The framework thus balances high accuracy, computational efficiency, and practical clinical deployability.

📝 Abstract
Electrocardiogram (ECG) analysis plays a vital role in the early detection, monitoring, and management of various cardiovascular conditions. While existing models have achieved notable success in ECG interpretation, they fail to leverage the interrelated nature of various cardiac abnormalities. Conversely, developing a specific model capable of extracting all relevant features for multiple ECG tasks remains a significant challenge. Large-scale foundation models, though powerful, are not typically pretrained on ECG data, making full re-training or fine-tuning computationally expensive. To address these challenges, we propose EnECG (Mixture of Experts-based Ensemble Learning for ECG Multi-tasks), an ensemble-based framework that integrates multiple specialized foundation models, each excelling in different aspects of ECG interpretation. Instead of relying on a single model or single task, EnECG leverages the strengths of multiple specialized models to tackle a variety of ECG-based tasks. To mitigate the high computational cost of full re-training or fine-tuning, we introduce a lightweight adaptation strategy: attaching dedicated output layers to each foundation model and applying Low-Rank Adaptation (LoRA) only to these newly added parameters. We then adopt a Mixture of Experts (MoE) mechanism to learn ensemble weights, effectively combining the complementary expertise of individual models. Our experimental results demonstrate that by minimizing the scope of fine-tuning, EnECG can help reduce computational and memory costs while maintaining the strong representational power of foundation models. This framework not only enhances feature extraction and predictive performance but also ensures practical efficiency for real-world clinical applications. The code is available at https://github.com/yuhaoxu99/EnECG.git.
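The lightweight adaptation strategy described in the abstract, attaching a new output head and training only a low-rank update to it, can be sketched as follows. This is a minimal NumPy illustration; the dimensions, zero-initialization of one LoRA factor, and the scaling convention are common LoRA practice, not details taken from the paper.

```python
import numpy as np

# Hypothetical dimensions for one task-specific output head.
d_in, d_out, rank = 512, 16, 4
rng = np.random.default_rng(0)

# Frozen weight of a newly attached output head (backbone also stays frozen).
W = rng.normal(size=(d_out, d_in))

# LoRA factors: only A and B are trainable.
# Trainable parameters: rank * (d_in + d_out) = 2112, vs. d_in * d_out = 8192
# for full fine-tuning of this head, a ~74% reduction even at head scale.
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))  # zero-init so adaptation starts as a no-op
scaling = 1.0 / rank

def lora_forward(x):
    # Frozen path plus low-rank update: (W + scaling * B @ A) @ x
    return W @ x + scaling * (B @ (A @ x))

x = rng.normal(size=d_in)
y = lora_forward(x)
# With B zero-initialized, the adapted head initially matches the frozen head.
assert np.allclose(y, W @ x)
```

Because B starts at zero, training begins from the frozen model's behavior and only the small factors A and B receive gradients, which is what keeps the fine-tuned parameter count and optimizer memory low.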
Problem

Research questions and friction points this paper is trying to address.

Existing ECG models fail to exploit the interrelated nature of cardiac abnormalities, and building a single model that extracts all relevant features for multiple ECG tasks remains challenging.
Large-scale foundation models are not typically pretrained on ECG data, so full re-training or fine-tuning is computationally expensive.
Real-world clinical use demands both strong feature extraction and prediction across abnormalities and practical computational efficiency.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble learning integrates multiple specialized ECG foundation models.
Lightweight adaptation applies LoRA only to newly added output layers.
A Mixture of Experts mechanism learns the ensemble weights for combining expert outputs.
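The MoE-based combination in the last point above can be sketched as a learned gate that produces one weight per expert and mixes their outputs. This NumPy sketch is illustrative: the expert count, class count, feature dimension, and the linear gate are assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)

# Hypothetical setup: three frozen "expert" foundation models, each already
# mapping an ECG segment to logits for the same task.
n_experts, n_classes, feat_dim = 3, 5, 16
expert_logits = rng.normal(size=(n_experts, n_classes))

# Trainable gating parameters map a pooled input feature to one score per
# expert; a random vector stands in for the real ECG embedding here.
feature = rng.normal(size=feat_dim)
gate_W = rng.normal(size=(n_experts, feat_dim)) * 0.1

weights = softmax(gate_W @ feature)        # ensemble weights, sum to 1
ensemble_logits = weights @ expert_logits  # weighted combination of experts

assert np.isclose(weights.sum(), 1.0)
assert ensemble_logits.shape == (n_classes,)
```

Only the gate (and, in EnECG, the LoRA-adapted heads) would receive gradients; the experts themselves stay frozen, so the ensemble adds little trainable overhead.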
Yuhao Xu
Department of Computer Science, Emory University, Atlanta, GA, USA
Xiaoda Wang
Department of Computer Science, Emory University, Atlanta, GA, USA
Jiaying Lu
Research Assistant Professor, Center for Data Science, School of Nursing, Emory University
AI for Healthcare, Knowledge Graph, Multimodal Learning, Large Language Model
Sirui Ding
Bakar Computational Health Sciences Institute, UCSF, San Francisco, CA, USA
Defu Cao
Peking University; MBZUAI; University of Southern California; Caltech
Time Series, Foundation Model, Machine Learning, Causal Inference, LLM
Huaxiu Yao
Assistant Professor of Computer Science and Data Science, UNC Chapel Hill
Machine Learning, Foundation Models, AI Alignment, AI Agent, Robot Learning
Yan Liu
Department of Computer Science, USC, Los Angeles, CA, USA
Xiao Hu
Center for Data Science, School of Nursing, Emory University, Atlanta, GA, USA
Carl Yang
Waymo LLC, PhD at University of California, Davis
GPU Computing, Parallel Computing, Graph Processing