🤖 AI Summary
Existing ECG multi-task models struggle to capture correlations between cardiac abnormalities, while general-purpose large models lack ECG-specific pretraining and incur prohibitive computational costs under full-parameter fine-tuning. To address these limitations, we propose EnECG, a computationally efficient multi-task foundation-model framework based on Mixture of Experts (MoE) and Low-Rank Adaptation (LoRA). EnECG integrates multiple pre-trained, ECG-specialized foundation models and introduces lightweight adapter modules alongside task-specific output heads, enabling cross-task knowledge fusion without updating backbone parameters. Experiments demonstrate that EnECG reduces the number of fine-tuned parameters by over 95% (fewer than 5% of total parameters) and significantly lowers GPU memory consumption, while achieving an average performance gain of 2.1% across six clinical ECG tasks, including arrhythmia classification and ST-segment abnormality detection. The framework thus balances high accuracy, computational efficiency, and practical clinical deployability.
📝 Abstract
Electrocardiogram (ECG) analysis plays a vital role in the early detection, monitoring, and management of various cardiovascular conditions. While existing models have achieved notable success in ECG interpretation, they fail to leverage the interrelated nature of various cardiac abnormalities. Conversely, developing a single model capable of extracting all relevant features for multiple ECG tasks remains a significant challenge. Large-scale foundation models, though powerful, are not typically pretrained on ECG data, making full re-training or fine-tuning computationally expensive. To address these challenges, we propose EnECG (Mixture of Experts-based Ensemble Learning for ECG Multi-tasks), an ensemble-based framework that integrates multiple specialized foundation models, each excelling in different aspects of ECG interpretation. Instead of relying on a single model or a single task, EnECG leverages the strengths of multiple specialized models to tackle a variety of ECG-based tasks. To mitigate the high computational cost of full re-training or fine-tuning, we introduce a lightweight adaptation strategy: attaching dedicated output layers to each foundation model and applying Low-Rank Adaptation (LoRA) only to these newly added parameters. We then adopt a Mixture of Experts (MoE) mechanism to learn ensemble weights, effectively combining the complementary expertise of individual models. Our experimental results demonstrate that by minimizing the scope of fine-tuning, EnECG can reduce computational and memory costs while maintaining the strong representational power of foundation models. This framework not only enhances feature extraction and predictive performance but also ensures practical efficiency for real-world clinical applications. The code is available at https://github.com/yuhaoxu99/EnECG.git.
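To make the adaptation strategy concrete, the sketch below illustrates the general pattern the abstract describes: frozen expert backbones produce embeddings, a LoRA-style low-rank update adapts only a newly added output head, and a learned gate mixes the experts. All dimensions, variable names, and the single-layer gate are illustrative assumptions, not the authors' actual implementation (see the repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: 3 frozen expert backbones, each emitting a d-dim
# ECG embedding; a task head maps the fused embedding to n_classes.
n_experts, d, n_classes, r = 3, 16, 5, 4  # r = LoRA rank

# Stand-ins for the frozen foundation models' embeddings of one ECG
# record; the backbones themselves are never updated.
expert_embs = rng.normal(size=(n_experts, d))

# Task-specific head: frozen base weight W0 plus a trainable low-rank
# update B @ A (the LoRA pattern). Only A, B, and the gate would be
# fine-tuned in training.
W0 = rng.normal(size=(d, n_classes))         # frozen
A = rng.normal(size=(r, n_classes)) * 0.01   # trainable
B = rng.normal(size=(d, r)) * 0.01           # trainable
W_eff = W0 + B @ A                           # effective head weight

# MoE gate: produces ensemble weights over the experts from their
# own embeddings (a one-layer gate, chosen here for brevity).
W_gate = rng.normal(size=(d, 1))             # trainable
gate = softmax((expert_embs @ W_gate).ravel())  # shape (n_experts,)

# Gated fusion of expert embeddings, then the LoRA-adapted head.
fused = gate @ expert_embs                   # shape (d,)
probs = softmax(fused @ W_eff)               # shape (n_classes,)
```

Because gradients would flow only through `A`, `B`, and `W_gate`, the trainable-parameter count stays a small fraction of the frozen backbones', which is the source of the memory and compute savings the abstract claims.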