🤖 AI Summary
This work addresses the performance degradation of global models in federated learning caused by client data heterogeneity. To mitigate this issue, the authors propose Terraform, a novel approach that uniquely integrates gradient information with a deterministic client selection mechanism. By leveraging gradient updates to characterize client heterogeneity, Terraform enables precise participant selection, thereby enhancing both model accuracy and training efficiency. The method introduces a gradient-based heterogeneity metric coupled with a deterministic selection algorithm, yielding significant performance gains within the standard federated learning framework. Experimental results demonstrate that Terraform improves accuracy by up to 47% compared to existing methods, while ablation studies and training-time analyses further confirm its efficiency and robustness.
📝 Abstract
Federated Learning (FL) enables a distributed client-server architecture in which multiple clients collaboratively train a global Machine Learning (ML) model without sharing sensitive local data. However, FL often achieves lower accuracy than traditional ML algorithms due to statistical heterogeneity across clients. Prior works attempt to address this by using model updates, such as loss and bias, from client models to select participants that can improve the global model's accuracy. However, these updates do not accurately represent a client's heterogeneity, and the associated selection methods are non-deterministic. We mitigate these limitations by introducing Terraform, a novel client selection methodology that uses gradient updates and a deterministic selection algorithm to select heterogeneous clients for retraining. This two-pronged approach allows Terraform to achieve up to 47% higher accuracy than prior works. We further demonstrate its efficiency through comprehensive ablation studies and training-time analyses, providing strong justification for the robustness of Terraform.
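The abstract names two ingredients, a gradient-based heterogeneity metric and a deterministic selection rule, without specifying either. A minimal sketch of how such a pipeline could look follows; the cosine-distance metric, the top-k rule, and all function names are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def heterogeneity_scores(client_grads, global_grad):
    # Assumed metric (not necessarily Terraform's): cosine distance between
    # each client's gradient update and the aggregated global gradient.
    # A larger score means the client's data pulls the model further from
    # the global direction, i.e. the client is "more heterogeneous".
    scores = {}
    for cid, g in client_grads.items():
        cos = np.dot(g, global_grad) / (
            np.linalg.norm(g) * np.linalg.norm(global_grad)
        )
        scores[cid] = 1.0 - cos
    return scores

def select_clients(client_grads, k):
    # Deterministic selection: rank clients by heterogeneity score and take
    # the top-k, breaking ties by client id so repeated runs on the same
    # updates always return the same participants.
    global_grad = np.mean(list(client_grads.values()), axis=0)
    scores = heterogeneity_scores(client_grads, global_grad)
    return sorted(scores, key=lambda cid: (-scores[cid], cid))[:k]

# Toy round: client "c2" updates in roughly the opposite direction
# from the others, so it is selected first.
grads = {
    "c0": np.array([1.0, 0.0]),
    "c1": np.array([0.9, 0.1]),
    "c2": np.array([-1.0, 0.2]),
}
print(select_clients(grads, 1))  # ['c2']
```

Because the ranking is a pure function of the submitted gradients, the server needs no random sampling step, which is one plausible reading of the "deterministic selection algorithm" the abstract contrasts with prior non-deterministic methods.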