Optimizing Federated Learning for Scalable Power-demand Forecasting in Microgrids

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address non-IID data bias and scalability limitations with thousands of clients in federated learning for microgrid and city-scale electricity demand forecasting, this paper proposes a lightweight federated optimization framework based on exponentially weighted loss. The method trains deep neural networks locally at edge devices and employs dynamic weighted aggregation to mitigate model bias induced by non-IID data, significantly reducing communication and computational overhead. Evaluated on the real-world OpenEIA dataset, it achieves 12.6–19.3% higher prediction accuracy than ARIMA and single-user DNN baselines, with 41% lower training time and support for over 1,000 heterogeneous clients. Experimental deployment on Raspberry Pi edge clusters and a pseudo-distributed environment demonstrates strong privacy preservation, high prediction accuracy, and exceptional scalability. This work establishes a practical, scalable paradigm for federated modeling in large-scale energy systems.
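The exponentially weighted loss described above emphasizes recent observations when fitting the forecasting model. A minimal sketch of one such loss, assuming a per-timestep decay factor `gamma` (a hypothetical hyperparameter; the paper's exact weighting scheme is not reproduced here):

```python
import numpy as np

def exp_weighted_mse(y_true, y_pred, gamma=0.9):
    """MSE with exponentially decaying weights over the horizon:
    the most recent timestep gets weight 1, earlier ones get
    gamma, gamma**2, ... (gamma is an assumed hyperparameter)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    T = len(y_true)
    w = gamma ** np.arange(T - 1, -1, -1)  # oldest timestep -> smallest weight
    err = (y_true - y_pred) ** 2
    return float(np.sum(w * err) / np.sum(w))
```

With `gamma=1.0` this reduces to the ordinary MSE; smaller values bias training toward recent demand, which is what makes the final model track current consumption patterns more closely.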

📝 Abstract
Real-time monitoring of power consumption in cities and micro-grids through the Internet of Things (IoT) can help forecast future demand and optimize grid operations. But moving all consumer-level usage data to the cloud for predictions and analysis at fine time scales can expose activity patterns. Federated Learning (FL) is a privacy-sensitive collaborative DNN training approach that retains data on edge devices, trains the models on private data locally, and aggregates the local models in the cloud. But key challenges exist: (i) clients can have non-independent and identically distributed (non-IID) data, and (ii) the learning should be computationally cheap while scaling to 1000s of (unseen) clients. In this paper, we develop and evaluate several optimizations to FL training across edge and cloud for time-series demand forecasting in micro-grids and city-scale utilities using DNNs to achieve a high prediction accuracy while minimizing the training cost. We showcase the benefit of using exponentially weighted loss while training and show that it further improves the prediction of the final model. Finally, we evaluate these strategies by validating over 1000s of clients for three states in the US from the OpenEIA corpus, and performing FL both in a pseudo-distributed setting and a Pi edge cluster. The results highlight the benefits of the proposed methods over baselines like ARIMA and DNNs trained for individual consumers, which are not scalable.
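The cloud-side aggregation step the abstract describes combines locally trained models into a global one. A generic FedAvg-style sketch, weighting each client's parameters by a per-client weight such as its local sample count (the paper's dynamic weighting rule is not reproduced here, so this is only an assumed baseline form):

```python
import numpy as np

def weighted_aggregate(client_params, client_weights):
    """Aggregate per-client parameter lists into a global model.

    client_params  : list of clients, each a list of per-layer arrays
    client_weights : one scalar per client (e.g. local sample count);
                     the weights are normalized before averaging.
    """
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()
    # zip(*client_params) groups the same layer across all clients
    return [sum(wi * layer for wi, layer in zip(w, layers))
            for layers in zip(*client_params)]
```

Re-weighting clients at each round (rather than fixing weights up front) is one way to counter the model bias that non-IID client data induces in plain uniform averaging.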
Problem

Research questions and friction points this paper is trying to address.

Optimizing federated learning for scalable power-demand forecasting
Addressing non-IID data and computational efficiency in FL
Improving prediction accuracy while minimizing training costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning for privacy-sensitive DNN training
Exponentially weighted loss improves prediction accuracy
Scalable optimization for non-IID data across edge-cloud
Roopkatha Banerjee
PhD Student, Indian Institute of Science (IISc)
Distributed Computing · Federated Learning · Quantum Computing · Systems for Machine Learning · Black Hole Astronomy
Sampath Koti
Department of Computational and Data Sciences, Indian Institute of Science (IISc), Bangalore, India
Gyanendra Singh
Department of Electrical Engineering, Indian Institute of Science (IISc), Bangalore, India
Anirban Chakraborty
Department of Computational and Data Sciences, Indian Institute of Science (IISc), Bangalore, India
Gurunath Gurrala
Department of Electrical Engineering, Indian Institute of Science (IISc), Bangalore, India
Bhushan Jagyasi
Accenture, India
Yogesh Simmhan
Associate Professor, Indian Institute of Science
Distributed Systems · Edge Accelerators · Graph Analytics · Cloud Computing · Federated Learning