🤖 AI Summary
Recent learning-based methods for AC optimal power flow (AC-OPF) claim substantial speedups and high accuracy, yet their practical benefits remain inadequately validated against strong, interpretable baselines. Method: We conduct a systematic empirical evaluation of machine learning approaches for AC-OPF, introducing OPFormer-V, a lightweight Transformer-based model that predicts bus voltage magnitudes and angles, and rigorously comparing it against high-performance linear regression and other strong baselines. Contribution/Results: While OPFormer-V outperforms DeepOPF-V, the improvement is marginal; crucially, simple linear models achieve comparable accuracy across most test cases while running inference an order of magnitude faster. Our analysis shows that many existing learning-based AC-OPF methods lack sufficiently rigorous baseline comparisons, casting doubt on whether complex deep architectures are necessary. We advocate including interpretable, easily deployable strong baselines in evaluation protocols, offering both methodological reflection and concrete guidelines for future research in learning-based power system optimization.
📝 Abstract
Recent work has proposed machine learning (ML) approaches as fast surrogates for solving AC optimal power flow (AC-OPF), claiming significant speedups and high accuracy. In this paper, we revisit these claims through a systematic evaluation of ML models against a set of simple yet carefully designed linear baselines. We introduce OPFormer-V, a Transformer-based model for predicting bus voltages, and compare it to both the state-of-the-art DeepOPF-V model and simple linear methods. Our findings reveal that, while OPFormer-V improves over DeepOPF-V, the gains of the ML approaches considered are less pronounced than expected: simple linear baselines achieve comparable performance. These results highlight the importance of including strong linear baselines in future evaluations.
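To make the kind of linear baseline the abstract refers to concrete, here is a minimal sketch: fit ordinary least squares mapping a bus load vector to stacked voltage magnitudes and angles, so inference reduces to a single matrix multiply. All names, dimensions, and the synthetic data are illustrative assumptions, not the paper's actual setup; in practice the training pairs would come from a conventional AC-OPF solver.

```python
import numpy as np

# Illustrative sketch of a linear voltage-prediction baseline (assumed setup,
# not the paper's exact pipeline). Inputs: active/reactive load per bus.
# Targets: stacked voltage magnitudes and angles.
rng = np.random.default_rng(0)
n_bus, n_train, n_test = 30, 500, 50

# Synthetic stand-in data; real experiments would use solved AC-OPF instances.
loads = rng.uniform(0.8, 1.2, size=(n_train + n_test, 2 * n_bus))
true_map = rng.normal(scale=0.05, size=(2 * n_bus, 2 * n_bus))
voltages = 1.0 + loads @ true_map  # [|V|; theta] per sample (synthetic)

X_tr, X_te = loads[:n_train], loads[n_train:]
Y_tr, Y_te = voltages[:n_train], voltages[n_train:]

# Fit the linear surrogate with an intercept column via least squares.
A_tr = np.hstack([X_tr, np.ones((n_train, 1))])
W, *_ = np.linalg.lstsq(A_tr, Y_tr, rcond=None)

# Inference is one matrix multiply: the source of the claimed speedup.
A_te = np.hstack([X_te, np.ones((n_test, 1))])
pred = A_te @ W
mae = np.abs(pred - Y_te).mean()
print(f"mean absolute voltage error: {mae:.2e}")
```

Because the synthetic targets are exactly linear in the loads, the fit recovers them almost perfectly; on real AC-OPF data the map is only approximately linear around an operating point, which is why such baselines can still be surprisingly competitive.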