🤖 AI Summary
To address device and data heterogeneity, global energy constraints, and multi-objective co-optimization in federated learning (FL) over heterogeneous edge accelerators, this paper proposes the first client selection framework to incorporate an explicit energy budget constraint. We formulate a novel bi-level integer linear programming (ILP) model that jointly optimizes model accuracy, training latency, and energy consumption. To quantify client contribution under non-IID data, we introduce an approximation of the Shapley value; we further design a joint energy-time prediction model to enable dynamic, context-aware client selection. Evaluated under realistic edge configurations and non-IID data distributions, our method achieves a 15% improvement in model accuracy and a 48% reduction in training time compared to state-of-the-art approaches, while significantly improving the energy-efficiency-performance trade-off, thereby advancing sustainable AI deployment at the edge.
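The Shapley-value approximation mentioned above is not specified in detail here; a common way to approximate it is Monte Carlo permutation sampling, averaging each client's marginal contribution over random join orders. The sketch below illustrates that generic technique (the `utility` function, e.g. validation accuracy of a model trained on a coalition's data, is a hypothetical stand-in), not the paper's exact estimator.

```python
import random

def approx_shapley(clients, utility, num_samples=200, seed=0):
    """Monte Carlo estimate of each client's Shapley value.

    clients: list of client ids
    utility: maps a frozenset of client ids to a scalar payoff
             (e.g., accuracy of a model trained on that coalition)
    """
    rng = random.Random(seed)
    phi = {c: 0.0 for c in clients}
    for _ in range(num_samples):
        # Sample a random arrival order of clients.
        perm = list(clients)
        rng.shuffle(perm)
        coalition = set()
        prev = utility(frozenset(coalition))
        for c in perm:
            # Credit c with its marginal gain when it joins.
            coalition.add(c)
            cur = utility(frozenset(coalition))
            phi[c] += cur - prev
            prev = cur
    return {c: v / num_samples for c, v in phi.items()}
```

For an additive utility (coalition value = sum of individual values), every sampled marginal contribution equals the client's own value, so the estimate recovers the exact Shapley values; for non-additive utilities the estimate converges as `num_samples` grows.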
📝 Abstract
Federated Learning (FL) enables collaborative model training across distributed clients while preserving data privacy. However, optimizing both energy efficiency and model accuracy remains challenging given device and data heterogeneity. Moreover, sustainable AI through a global energy budget for FL has not been explored. We propose a novel optimization problem for client selection in FL that maximizes model accuracy within an overall energy limit while reducing training time. We solve it with a unique bi-level ILP formulation that leverages approximate Shapley values and energy-time prediction models for efficiency. Our FedJoule framework achieves superior training accuracy compared to SOTA and simple baselines across diverse energy budgets, non-IID distributions, and realistic experiment configurations, improving accuracy by 15% and training time by 48%. The results highlight the effectiveness of our method in achieving a viable trade-off between energy usage and performance in FL environments.
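At its core, the selection problem described above, maximizing aggregate client contribution subject to a round's energy budget, has a knapsack-like structure. The paper solves a bi-level ILP; as an illustrative stand-in only, the sketch below brute-forces the single-level budgeted selection for a small client pool, with contribution `scores` and per-client `energy` costs as assumed inputs.

```python
from itertools import combinations

def select_clients(scores, energy, budget):
    """Pick the client subset maximizing total contribution score
    subject to a total energy budget. Exhaustive search: a toy
    stand-in for an ILP solver, tractable only for small pools.

    scores: dict client -> contribution score (e.g., Shapley estimate)
    energy: dict client -> predicted energy cost of participating
    budget: total energy allowed this round
    """
    clients = list(scores)
    best, best_val = set(), 0.0
    for r in range(1, len(clients) + 1):
        for subset in combinations(clients, r):
            cost = sum(energy[c] for c in subset)
            val = sum(scores[c] for c in subset)
            if cost <= budget and val > best_val:
                best, best_val = set(subset), val
    return best, best_val
```

A production version would feed the scores and the energy-time predictions into an ILP solver rather than enumerating subsets, since the search space grows exponentially with the number of clients.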