🤖 AI Summary
To address the challenge of balancing energy efficiency, fairness, and model accuracy in wireless edge federated learning, this paper proposes FairEnergy, a holistic framework that jointly optimizes device selection, bandwidth allocation, and gradient compression ratios. Fairness is incorporated directly into the energy-minimization objective through a novel contribution scoring mechanism based on each device's local update magnitude and compression ratio. To tackle the resulting mixed-integer non-convex problem, FairEnergy relaxes the binary selection variables and applies Lagrangian decomposition to decouple the global bandwidth constraint, after which each device's subproblem is solved independently. Extensive experiments under non-IID data distributions demonstrate that FairEnergy reduces system energy consumption by up to 79% compared to baseline methods while also improving model accuracy, achieving a superior trade-off among energy efficiency, fairness, and learning performance.
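The decomposition step above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual formulation: the energy model `E_i(b) = p_i * d_i / (b * log2(1 + snr_i))` (transmit power times airtime), the toy contribution score (gradient norm times kept fraction after compression), and all numeric parameters are assumptions chosen to make the structure concrete. The key point it shows is how dualizing the shared bandwidth constraint turns the coupled problem into independent per-device subproblems, each with a closed-form bandwidth allocation.

```python
import math

# Hypothetical per-device parameters (NOT from the paper): transmit power p (W),
# compressed update size d (bits), channel SNR, local gradient norm, and the
# fraction rho of the gradient kept after compression.
devices = [
    {"p": 0.5, "d": 4e6, "snr": 10.0, "grad_norm": 1.2, "rho": 0.25},
    {"p": 0.3, "d": 4e6, "snr": 4.0,  "grad_norm": 0.8, "rho": 0.50},
    {"p": 0.8, "d": 4e6, "snr": 20.0, "grad_norm": 2.0, "rho": 0.10},
]
B_TOTAL = 10e6  # total bandwidth budget (Hz), shared constraint sum_i b_i <= B_TOTAL

def contribution_score(dev):
    """Toy stand-in for FairEnergy's contribution score: update magnitude
    scaled by how much of the gradient survives compression."""
    return dev["grad_norm"] * dev["rho"]

def energy_coeff(dev):
    # Assumed energy model E_i(b) = c_i / b, with c_i folding together
    # transmit power, payload size, and spectral efficiency log2(1 + SNR).
    return dev["p"] * dev["d"] / math.log2(1.0 + dev["snr"])

def solve_bandwidth(lam):
    """Per-device subproblem min_b c_i/b + lam*b after dualizing the
    bandwidth constraint with price lam; closed form b* = sqrt(c_i/lam)."""
    return [math.sqrt(energy_coeff(dev) / lam) for dev in devices]

# Find the bandwidth price lam that makes the decoupled allocations meet the
# budget.  sum_i b_i(lam) is decreasing in lam, so geometric bisection works.
lo, hi = 1e-20, 1e20
for _ in range(200):
    lam = math.sqrt(lo * hi)
    if sum(solve_bandwidth(lam)) > B_TOTAL:
        lo = lam  # allocations exceed the budget -> raise the price
    else:
        hi = lam

bw = solve_bandwidth(lam)
scores = [contribution_score(dev) for dev in devices]
total_energy = sum(energy_coeff(dev) / b for dev, b in zip(devices, bw))
```

In the full method the scores would additionally weight the (relaxed) device-selection variables in the objective; here they are only computed, and the sketch focuses on the bandwidth decoupling.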
📝 Abstract
Federated learning (FL) enables collaborative model training across distributed devices while preserving data privacy. However, balancing energy efficiency and fair participation while ensuring high model accuracy remains challenging in wireless edge systems due to heterogeneous resources, unequal client contributions, and limited communication capacity. To address these challenges, we propose FairEnergy, a fairness-aware energy minimization framework that integrates a contribution score capturing both the magnitude of updates and their compression ratio into the joint optimization of device selection, bandwidth allocation, and compression level. The resulting mixed-integer non-convex problem is solved by relaxing binary selection variables and applying Lagrangian decomposition to handle global bandwidth coupling, followed by per-device subproblem optimization. Experiments on non-IID data show that FairEnergy achieves higher accuracy while reducing energy consumption by up to 79% compared to baseline strategies.