🤖 AI Summary
Probabilistic forecasting for the tens to hundreds of thousands of loads in large-scale distribution feeders remains challenging: conventional methods struggle to simultaneously achieve model personalization (capturing heterogeneity across user types, geographical locations, and phases) and scalable deployment. Method: We propose M2OE2-GL, a framework that first pretrains a unified global probabilistic forecasting model and then derives lightweight, group-specific predictors via efficient fine-tuning, enabling “global knowledge sharing + local characteristic adaptation.” It integrates deep temporal modeling, probabilistic output design, transfer learning, and model compression for high-throughput distributed inference. Contribution/Results: Evaluated on real-world power grid data, M2OE2-GL reduces quantile loss by 18.7% on average over baselines and sustains inference throughput exceeding 100,000 nodes per second. To our knowledge, it is the first approach to jointly deliver high accuracy and engineering deployability at the scale of hundreds of thousands of loads.
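The quantile loss used as the evaluation metric above can be made concrete with a minimal sketch; the function below is the standard pinball loss, and the quantile levels in the helper are illustrative, not taken from the paper:

```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for a single quantile level q in (0, 1).

    Penalizes under-prediction with weight q and over-prediction
    with weight (1 - q), so minimizing it recovers the q-th quantile.
    """
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

def mean_quantile_loss(y_true, preds_by_q):
    """Average pinball loss over several quantile levels, as is common
    in probabilistic load-forecasting benchmarks."""
    return float(np.mean([quantile_loss(y_true, p, q)
                          for q, p in preds_by_q.items()]))
```

For a point forecast at q = 0.5 this reduces to half the mean absolute error, which is why the median quantile is often reported alongside the full quantile set.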
📝 Abstract
Probabilistic load forecasting is widely studied and underpins power system planning, operation, and risk-aware decision making. Deep learning forecasters have shown strong ability to capture complex temporal and contextual patterns, achieving substantial accuracy gains. However, at the scale of thousands or even hundreds of thousands of loads in large distribution feeders, a deployment dilemma emerges: training and maintaining one model per customer is computationally and storage intensive, while using a single global model ignores distributional shifts across customer types, locations, and phases. Prior work typically focuses on single-load forecasters, global models across multiple loads, or adaptive/personalized models for relatively small settings, and rarely addresses the combined challenges of heterogeneity and scalability in large feeders. We propose M2OE2-GL, a global-to-local extension of the M2OE2 probabilistic forecaster. We first pretrain a single global M2OE2 base model across all feeder loads, then apply lightweight fine-tuning to derive a compact family of group-specific forecasters. Evaluated on realistic utility data, M2OE2-GL yields substantial error reductions while remaining scalable to very large numbers of loads.
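The global-to-local scheme described above (pretrain one shared model, then derive compact group-specific forecasters by lightweight fine-tuning) can be sketched with a deliberately simplified stand-in: a pooled least-squares model plays the role of the global M2OE2 base, and each group learns only a small ridge-shrunk correction to the frozen global weights. All data, group names, and the regularization strength are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pooled data from two heterogeneous load groups (hypothetical).
X = rng.normal(size=(200, 5))
w_a, w_b = rng.normal(size=5), rng.normal(size=5)
y = np.concatenate([X[:100] @ w_a, X[100:] @ w_b])
groups = {"A": slice(0, 100), "B": slice(100, 200)}

# Step 1: pretrain a single global model on all loads (least squares
# here stands in for the full probabilistic forecaster).
w_global, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: lightweight per-group fine-tuning -- fit only a small additive
# correction to the group's residuals, with ridge shrinkage keeping each
# local forecaster close to the frozen global weights.
def finetune(X_g, y_g, w_base, lam=1.0):
    resid = y_g - X_g @ w_base
    delta = np.linalg.solve(X_g.T @ X_g + lam * np.eye(X_g.shape[1]),
                            X_g.T @ resid)
    return w_base + delta

local = {g: finetune(X[s], y[s], w_global) for g, s in groups.items()}
```

Storage scales with one shared base plus one small correction per group rather than one full model per customer, which is the deployment trade-off the abstract targets.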