SHEFL: Resource-Aware Aggregation and Sparsification in Heterogeneous Ensemble Federated Learning

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address slow convergence, poor fairness, and the trade-off between communication and computational efficiency caused by data and system heterogeneity in federated learning, this paper proposes a resource-aware global ensemble federated learning framework. The method integrates dynamic sparsification, resource-proportional weighted aggregation, and bias-aware ensemble strategies: client-specific model complexity and aggregation weights are allocated dynamically based on local computational capacity to balance workload; a diversity-enhanced deep ensemble mechanism improves model robustness; and model-pruning principles reduce communication overhead. Experiments demonstrate that the proposed approach significantly accelerates convergence under heterogeneous device settings while improving overall accuracy and fairness, particularly for resource-constrained clients. It outperforms state-of-the-art baselines across multiple metrics, achieving superior end-to-end efficiency and equity.
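The resource-proportional weighted aggregation described above can be sketched in a few lines. This is an illustrative assumption about how capacity-weighted averaging might look (the function name, capacity inputs, and weighting rule are not taken from the paper):

```python
import numpy as np

def resource_weighted_aggregate(client_updates, capacities):
    """Average client model updates, weighting each client in proportion
    to its computational capacity. Hypothetical sketch: SHEFL's exact
    weighting rule is more involved (bias-aware, ensemble-based)."""
    weights = np.asarray(capacities, dtype=float)
    weights /= weights.sum()                     # normalize to sum to 1
    stacked = np.stack(client_updates)           # (n_clients, n_params)
    return np.einsum("c,cp->p", weights, stacked)

# Toy example: a client with 3x the capacity gets 3x the weight.
updates = [np.ones(4), 3.0 * np.ones(4)]
global_update = resource_weighted_aggregate(updates, capacities=[1.0, 3.0])
# weights = [0.25, 0.75], so each parameter becomes 0.25*1 + 0.75*3 = 2.5
```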

📝 Abstract
Federated learning enables distributed training with private data of clients, but its convergence is hindered by data and system heterogeneity in realistic communication scenarios. Most existing system-heterogeneous FL schemes utilize global pruning or ensemble distillation, yet they often overlook typical constraints required for communication efficiency. Meanwhile, deep ensembles can aggregate predictions from individually trained models to improve performance, but current ensemble-based FL methods fall short in fully capturing the diversity of model predictions. In this work, we propose SHEFL, a global ensemble-based federated learning framework suited for clients with diverse computational capacities. We allocate different numbers of global models to clients based on their available resources. We further introduce a novel aggregation scheme that accounts for bias between clients with different computational capabilities. To reduce the computational burden of training deep ensembles and mitigate data bias, we dynamically adjust the resource ratio across clients: aggressively reducing the influence of underpowered clients in constrained scenarios, while increasing their weight in the opposite case. Extensive experiments demonstrate that our method effectively addresses computational heterogeneity, significantly improving both fairness and overall performance compared to existing approaches.
Problem

Research questions and friction points this paper is trying to address.

Addresses convergence issues in federated learning due to data and system heterogeneity
Improves communication efficiency and model diversity in ensemble-based FL
Reduces computational burden and mitigates data bias in heterogeneous client scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Resource-aware global model allocation for clients
Bias-aware aggregation for computational diversity
Dynamic resource ratio adjustment for fairness
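The dynamic resource-ratio adjustment listed above can be sketched as a simple rescaling of aggregation weights. The rule below (shrink underpowered clients' weight by a factor when resources are constrained, boost it otherwise) and all parameter names are assumptions for illustration, not the paper's exact scheme:

```python
def adjust_client_weights(base_weights, is_underpowered, constrained, factor=0.5):
    """Rescale per-client aggregation weights (illustrative only).

    In a resource-constrained scenario, multiply each underpowered
    client's weight by `factor` (< 1); otherwise divide by it, boosting
    their influence. Weights are renormalized to sum to 1.
    """
    adjusted = []
    for w, weak in zip(base_weights, is_underpowered):
        if weak:
            w = w * factor if constrained else w / factor
        adjusted.append(w)
    total = sum(adjusted)
    return [w / total for w in adjusted]

# Constrained case: the underpowered client's share drops from 1/2 to 1/3.
weights = adjust_client_weights([0.5, 0.5], [True, False], constrained=True)
```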