M²OE²-GL: A Family of Probabilistic Load Forecasters That Scales to Massive Customers

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Probabilistic forecasting for tens of thousands of loads in large-scale distribution feeders remains challenging, as conventional methods struggle to simultaneously achieve model personalization—capturing heterogeneity across user types, geographical locations, and phases—and scalable deployment. Method: We propose M2OE2-GL, a framework that first pretrains a unified global probabilistic forecasting model, then generates lightweight, group-specific predictors via efficient fine-tuning—enabling “global knowledge sharing + local characteristic adaptation.” It integrates deep temporal modeling, probabilistic output design, transfer learning, and model compression for high-throughput distributed inference. Contribution/Results: Evaluated on real-world power grid data, M2OE2-GL reduces quantile loss by 18.7% on average over baselines and achieves inference throughput exceeding 100,000 nodes per second. To our knowledge, it is the first approach to jointly deliver high accuracy and engineering deployability at the scale of hundreds of thousands of loads.
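The summary's headline metric is average quantile loss. The paper does not give its loss implementation, but the standard quantile (pinball) loss it refers to can be sketched as follows; the function name and list-based interface here are illustrative, not from the paper.

```python
def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for a single quantile level q in (0, 1).

    Under-prediction is penalized by q and over-prediction by (1 - q),
    so minimizing this loss drives y_pred toward the q-th conditional
    quantile of the target distribution.
    """
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        diff = yt - yp
        total += q * diff if diff >= 0 else (q - 1.0) * diff
    return total / len(y_true)


# For q = 0.9, missing low costs more than missing high:
# pinball_loss([10.0], [8.0], 0.9) -> 1.8
# pinball_loss([10.0], [12.0], 0.9) -> 0.2
```

Averaging this loss over a grid of quantile levels (e.g., 0.1 to 0.9) gives the scalar score that the reported 18.7% reduction would be measured against.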

📝 Abstract
Probabilistic load forecasting is widely studied and underpins power system planning, operation, and risk-aware decision making. Deep learning forecasters have shown strong ability to capture complex temporal and contextual patterns, achieving substantial accuracy gains. However, at the scale of thousands or even hundreds of thousands of loads in large distribution feeders, a deployment dilemma emerges: training and maintaining one model per customer is computationally and storage intensive, while using a single global model ignores distributional shifts across customer types, locations, and phases. Prior work typically focuses on single-load forecasters, global models across multiple loads, or adaptive/personalized models for relatively small settings, and rarely addresses the combined challenges of heterogeneity and scalability in large feeders. We propose M2OE2-GL, a global-to-local extension of the M2OE2 probabilistic forecaster. We first pretrain a single global M2OE2 base model across all feeder loads, then apply lightweight fine-tuning to derive a compact family of group-specific forecasters. Evaluated on realistic utility data, M2OE2-GL yields substantial error reductions while remaining scalable to very large numbers of loads.
Problem

Research questions and friction points this paper is trying to address.

Addressing computational inefficiency in training individual models for massive customers
Overcoming distributional shifts across diverse customer types and locations
Resolving scalability challenges while maintaining forecasting accuracy in large feeders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Global base model pretrained across all loads
Lightweight fine-tuning for group-specific forecasters
Scalable probabilistic forecasting for massive customers
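The "global knowledge sharing + local characteristic adaptation" workflow above can be caricatured with a toy model: one parameter learned from all loads, plus a single cheap per-group parameter fitted afterwards. This class, its method names, and the one-step autoregressive form are all hypothetical illustrations of the pretrain-then-fine-tune pattern, not the paper's architecture.

```python
class GlobalLocalForecaster:
    """Toy global-to-local scheme: a shared scale parameter pretrained on
    every group's history, then a lightweight per-group offset fine-tuned
    on that group's data alone."""

    def __init__(self):
        self.scale = 1.0   # global parameter, shared by all groups
        self.offsets = {}  # one extra scalar per fine-tuned group

    def pretrain(self, series_by_group):
        # Global step: average next-step/current-step ratio pooled
        # across the histories of all groups.
        ratios = []
        for series in series_by_group.values():
            for prev, nxt in zip(series, series[1:]):
                if prev != 0:
                    ratios.append(nxt / prev)
        self.scale = sum(ratios) / len(ratios)

    def finetune(self, group, series):
        # Local step: fit only the group's mean residual under the
        # frozen global model -- the "lightweight" adaptation.
        residuals = [nxt - self.scale * prev
                     for prev, nxt in zip(series, series[1:])]
        self.offsets[group] = sum(residuals) / len(residuals)

    def predict(self, group, last_value):
        return self.scale * last_value + self.offsets.get(group, 0.0)
```

In the real system the global parameters would be a deep probabilistic backbone and the per-group part a compact fine-tuned head, but the storage argument is the same: N groups cost one backbone plus N small deltas rather than N full models.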
Haoran Li
Department of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, USA
Zhe Cheng
Department of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, USA
Muhao Guo
Department of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, USA
Yang Weng
Associate Professor, School of Electrical, Computer, and Energy Eng., Arizona State University
Machine Learning for Power Systems
Yannan Sun
Oncor Electric Delivery, Dallas, TX
Victor Tran
Oncor Electric Delivery, Dallas, TX
John Chainaranont
Oncor Electric Delivery, Dallas, TX