🤖 AI Summary
To address two key challenges in federated learning—degraded model accuracy due to non-IID data distributions and slow convergence caused by resource-constrained edge devices—this paper proposes FedDHAD, a novel framework integrating edge-aware design. Methodologically, it introduces (i) a Dynamic Heterogeneous Aggregation mechanism (FedDH), which adaptively assigns client aggregation weights based on local data non-IIDness; and (ii) a neuron-level Adaptive Dropout mechanism (FedAD), enabling lightweight, on-the-fly pruning of redundant neurons during training to accelerate convergence and mitigate overfitting. Crucially, both components are designed to operate under strict edge constraints without incurring additional communication overhead. Extensive experiments on standard non-IID benchmarks demonstrate that FedDHAD achieves up to 6.7% higher accuracy, 2.02× faster training, and 15.0% lower computational cost compared to state-of-the-art methods.
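The paper does not give FedAD's exact neuron-selection rule here, but the idea of on-the-fly pruning of redundant neurons can be sketched generically. The snippet below is a minimal illustration, not the paper's algorithm: it assumes neurons with low recent average activation are "redundant" and drops them, using inverted-dropout scaling so the layer's expected output magnitude stays stable. The function names and the `keep_ratio` parameter are hypothetical.

```python
from typing import List

def adaptive_dropout_mask(activations: List[float], keep_ratio: float) -> List[float]:
    """Keep the top `keep_ratio` fraction of neurons ranked by mean
    absolute activation; drop the rest. A generic stand-in for the
    paper's neuron-adaptive rule, not its actual criterion."""
    n_keep = max(1, int(len(activations) * keep_ratio))
    ranked = sorted(range(len(activations)),
                    key=lambda i: activations[i], reverse=True)
    kept = set(ranked[:n_keep])
    return [1.0 if i in kept else 0.0 for i in range(len(activations))]

def apply_mask(layer_out: List[float], mask: List[float],
               keep_ratio: float) -> List[float]:
    """Inverted-dropout scaling: surviving neurons are scaled up by
    1/keep_ratio so the expected layer output is unchanged."""
    return [x * m / keep_ratio for x, m in zip(layer_out, mask)]
```

Because the mask is computed from statistics the device already tracks locally, a scheme like this adds no communication overhead, which matches the constraint the summary emphasizes.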
📝 Abstract
Federated Learning (FL) is a promising distributed machine learning approach that enables collaborative training of a global model across multiple edge devices. However, the data distributed among edge devices is highly heterogeneous, so FL faces the challenge of statistical heterogeneity: non-Independent and Identically Distributed (non-IID) data across devices can cause a significant accuracy drop. Furthermore, the limited computation and communication capabilities of edge devices increase the likelihood of stragglers, leading to slow model convergence. In this paper, we propose the FedDHAD FL framework, which comprises two novel methods: Dynamic Heterogeneous model aggregation (FedDH) and Adaptive Dropout (FedAD). FedDH dynamically adjusts the weight of each local model during aggregation based on the non-IID degree of its data, addressing statistical heterogeneity. FedAD performs neuron-adaptive operations tailored to heterogeneous devices, improving accuracy while remaining highly efficient. Together, these two methods enable FedDHAD to significantly outperform state-of-the-art solutions in accuracy (up to 6.7% higher), efficiency (up to 2.02 times faster), and computation cost (up to 15.0% lower).
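The abstract does not spell out FedDH's weighting formula, but the described behavior (aggregation weights adjusted by each client's non-IID degree) can be sketched as a FedAvg variant. The snippet below is an illustrative assumption, not the paper's method: it measures non-IIDness as the total-variation distance between a client's label distribution and the global one, and down-weights more skewed clients via a hypothetical `beta` knob.

```python
import math
from typing import Dict, List

def non_iid_degree(local_dist: Dict[int, float],
                   global_dist: Dict[int, float]) -> float:
    """Total-variation distance between local and global label
    distributions, used as a simple proxy for non-IIDness."""
    labels = set(local_dist) | set(global_dist)
    return 0.5 * sum(abs(local_dist.get(y, 0.0) - global_dist.get(y, 0.0))
                     for y in labels)

def aggregate(models: List[Dict[str, float]], sizes: List[int],
              degrees: List[float], beta: float = 1.0) -> Dict[str, float]:
    """FedAvg-style aggregation where each client's sample-count weight
    is scaled by exp(-beta * degree), so more non-IID clients contribute
    less. `beta` is a hypothetical tuning parameter, not from the paper."""
    raw = [n * math.exp(-beta * d) for n, d in zip(sizes, degrees)]
    total = sum(raw)
    weights = [r / total for r in raw]
    return {k: sum(w * m[k] for w, m in zip(weights, models))
            for k in models[0]}
```

With equal client sizes and equal non-IID degrees this reduces to plain FedAvg; the degrees can be computed once from label counts, so the scheme needs no extra communication beyond a one-off distribution summary.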