Federated Learning within Global Energy Budget over Heterogeneous Edge Accelerators

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address device and data heterogeneity, global energy constraints, and multi-objective co-optimization in federated learning (FL) over heterogeneous edge accelerators, this paper proposes the first client selection framework to incorporate an explicit global energy budget. We formulate a novel bi-level integer linear programming (ILP) model that jointly optimizes model accuracy, training latency, and energy consumption. To quantify client contribution under non-IID data, we introduce a Shapley-value approximation, and we design a joint energy-time prediction model to enable dynamic, context-aware client selection. Evaluated under realistic edge configurations and non-IID data distributions, our method improves model accuracy by 15% and reduces training time by 48% compared to state-of-the-art approaches, while substantially improving the energy-efficiency versus performance trade-off and advancing sustainable AI deployment at the edge.
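The budget-constrained selection idea in the summary can be sketched as a toy stand-in: greedily pick clients by estimated contribution per joule until the global energy budget is spent. This is *not* the paper's bi-level ILP (which jointly optimizes accuracy, latency, and energy via an exact solver); the function name and all numbers below are hypothetical.

```python
# Simplified stand-in for energy-budgeted client selection: rank clients
# by (estimated contribution / predicted energy) and pick greedily while
# the global budget allows. The actual FedJoule formulation is a bi-level
# ILP; this greedy heuristic only illustrates the constraint.

def select_clients(contributions, energies, budget):
    """Return (chosen indices, energy used) under a global `budget`."""
    order = sorted(range(len(energies)),
                   key=lambda i: contributions[i] / energies[i],
                   reverse=True)
    chosen, used = [], 0.0
    for i in order:
        if used + energies[i] <= budget:   # skip clients that would overshoot
            chosen.append(i)
            used += energies[i]
    return chosen, used

# Example: 4 edge devices with contribution scores and per-round energy (J)
scores = [0.40, 0.25, 0.20, 0.15]
joules = [30.0, 10.0, 25.0, 5.0]
picked, spent = select_clients(scores, joules, budget=40.0)  # → [3, 1, 2], 40.0 J
```

A real solver would also weigh per-round latency and coalition effects, which is why the paper resorts to an ILP rather than a ratio heuristic.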

📝 Abstract
Federated Learning (FL) enables collaborative model training across distributed clients while preserving data privacy. However, jointly optimizing energy efficiency and model accuracy remains challenging given device and data heterogeneity, and sustainable AI through a global energy budget for FL has not been explored. We propose a novel client selection problem for FL that maximizes model accuracy within an overall energy limit while reducing training time, and solve it with a unique bi-level ILP formulation that leverages approximate Shapley values and energy-time prediction models. Our FedJoule framework achieves superior training accuracies compared to SOTA and simple baselines across diverse energy budgets, non-IID distributions, and realistic experiment configurations, performing 15% and 48% better on accuracy and time, respectively. The results highlight the effectiveness of our method in achieving a viable trade-off between energy usage and performance in FL environments.
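A minimal sketch of the energy-time prediction idea, assuming a client's per-round cost grows roughly linearly with its local workload: fit a least-squares line to past profiling measurements and extrapolate to the next round. The paper's joint energy-time model is richer; this univariate fit and its profiling numbers are only illustrative.

```python
# Hedged sketch of per-client cost prediction: ordinary least squares on
# (local batches, measured joules) pairs from earlier rounds, then a
# prediction for the next round. The same fit works for wall-clock time.

def fit_line(xs, ys):
    """Closed-form OLS for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical profiling data: (batches processed, joules consumed)
batches = [10, 20, 30, 40]
joules = [12.0, 22.0, 32.0, 42.0]      # ~1 J/batch plus 2 J fixed overhead
a, b = fit_line(batches, joules)       # a = 1.0, b = 2.0
predicted = a * 50 + b                 # expected energy for a 50-batch round
```

Feeding such per-client predictions into the selection step is what makes the budget constraint actionable before a round starts, rather than only measurable after it.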
Problem

Research questions and friction points this paper is trying to address.

Optimize energy efficiency and model accuracy in Federated Learning
Address device and data heterogeneity in FL training
Maximize FL model accuracy within global energy budget
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bi-level ILP formulation for client selection
Approximate Shapley values for optimization
Energy-time prediction models for efficiency
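Approximate Shapley values (the second bullet) are commonly estimated by permutation sampling: average each client's marginal contribution to a utility function over random orderings. The sketch below uses a toy additive utility, for which the estimate matches each client's score exactly; the paper's specific approximation scheme may differ.

```python
import random

# Generic permutation-sampling estimator for Shapley values, used here to
# illustrate "client contribution" scoring. The utility function is a toy
# additive stand-in for validation-accuracy gain; it is not the paper's.

def shapley_estimate(players, utility, samples=2000, seed=0):
    """Average marginal contribution of each player over random orderings."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(samples):
        perm = players[:]
        rng.shuffle(perm)
        coalition, prev = set(), utility(set())
        for p in perm:
            coalition.add(p)
            cur = utility(coalition)
            phi[p] += cur - prev       # marginal gain from adding p
            prev = cur
    return {p: v / samples for p, v in phi.items()}

# Toy utility: accuracy gain is the sum of per-client data-quality scores
quality = {"a": 0.5, "b": 0.3, "c": 0.2}
values = shapley_estimate(list(quality),
                          lambda s: sum(quality[p] for p in s))
```

For non-additive utilities (the realistic non-IID case), the estimate converges as the sample count grows, which is what makes sampling a practical substitute for the exponential exact computation.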
Roopkatha Banerjee
PhD Student, Indian Institute of Science (IISc)
Distributed Computing · Federated Learning · Quantum Computing · Systems for Machine Learning · Black Hole Astronomy
Tejus Chandrashekar
Department of Computational and Data Sciences (CDS), Indian Institute of Science (IISc), Bangalore 560012 India
Ananth Eswar
Department of Computational and Data Sciences (CDS), Indian Institute of Science (IISc), Bangalore 560012 India
Yogesh L. Simmhan
Department of Computational and Data Sciences (CDS), Indian Institute of Science (IISc), Bangalore 560012 India