🤖 AI Summary
Federated learning (FL) incurs substantial carbon emissions due to distributed training across geographically dispersed clients, especially when powered by carbon-intensive electricity grids.
Method: This paper proposes a carbon-aware client selection and training scheduling framework that integrates elastic time windows, α-fair carbon quota allocation, and a global fine-tuning phase. It jointly models the spatiotemporal heterogeneity of grid carbon intensity, client-specific deferred-training mechanisms, and bias-compensated convergence optimization.
Contribution/Results: To the best of our knowledge, this is the first work to co-model dynamic carbon intensity forecasting, elastic scheduling, and fair carbon resource allocation, balancing system efficiency and environmental equity. Experiments on real-world carbon intensity data demonstrate that, under diverse carbon budget constraints, the approach reduces the carbon footprint by up to 38.2% while improving model accuracy by 1.7–4.3% over time-rigid baselines; gains are particularly pronounced under stringent carbon budgets.
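The summary does not spell out how the α-fair carbon quota is computed. Under the standard α-fair utility $U_\alpha(x) = w\, x^{1-\alpha}/(1-\alpha)$, maximizing total utility subject to a fixed carbon budget has the closed form $x_i \propto w_i^{1/\alpha}$. A minimal sketch of that allocation (the function name `alpha_fair_quota` and the use of client weights are illustrative assumptions, not the paper's actual implementation) might look like:

```python
def alpha_fair_quota(weights, budget, alpha):
    """Split a total carbon budget across clients by alpha-fairness.

    Maximizing sum_i w_i * x_i^(1-alpha) / (1-alpha) subject to
    sum_i x_i = budget gives the closed form x_i proportional to
    w_i^(1/alpha). Intuition: alpha near 0 favors high-weight clients
    (utilitarian), alpha = 1 is proportional fairness, and large alpha
    drives the split toward equal (max-min fair) shares.
    Assumes weights > 0 and alpha > 0.
    """
    shares = [w ** (1.0 / alpha) for w in weights]
    total = sum(shares)
    return [budget * s / total for s in shares]
```

As α grows, the exponent 1/α flattens the weight differences, so clients with small quotas are protected at the cost of total efficiency, which is the efficiency–equity dial the summary refers to.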
📝 Abstract
Training large-scale machine learning models incurs substantial carbon emissions. Federated Learning (FL), by distributing computation across geographically dispersed clients, offers a natural framework to leverage regional and temporal variations in Carbon Intensity (CI). This paper investigates how to reduce emissions in FL through carbon-aware client selection and training scheduling. We first quantify the emission savings of a carbon-aware scheduling policy that leverages slack time -- permitting a modest extension of the training duration so that clients can defer local training rounds to lower-carbon periods. We then examine the performance trade-offs of such scheduling, which stem from statistical heterogeneity among clients, selection bias in participation, and temporal correlation in model updates. To navigate these trade-offs, we construct a carbon-aware scheduler that integrates slack time, $\alpha$-fair carbon allocation, and a global fine-tuning phase. Experiments on real-world CI data show that our scheduler outperforms slack-agnostic baselines, achieving higher model accuracy across a wide range of carbon budgets, with especially strong gains under tight carbon constraints.
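The core of the slack-time idea above is simple: instead of training at a fixed time, each client may shift its local round to the lowest-CI slot within an allowed window. A minimal sketch under assumed inputs (an hourly CI forecast as a list, integer slot indices; `schedule_with_slack` is a hypothetical name, not the paper's API) could be:

```python
def schedule_with_slack(ci_forecast, round_start, slack):
    """Return the training slot with the lowest forecast carbon
    intensity within [round_start, round_start + slack].

    ci_forecast: per-slot carbon intensity forecast (e.g. gCO2/kWh).
    slack: how many extra slots the round may be deferred; slack = 0
    recovers the time-rigid baseline.
    """
    window = ci_forecast[round_start : round_start + slack + 1]
    best_offset = min(range(len(window)), key=window.__getitem__)
    return round_start + best_offset
```

With zero slack this degenerates to the time-rigid baseline, which is exactly the comparison the experiments make: the emission savings come entirely from how much deferral the elastic window permits.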