Impact of Data Poisoning Attacks on Feasibility and Optimality of Neural Power System Optimizers

📅 2025-02-09
📈 Citations: 0
Influential: 0

🤖 AI Summary
This study systematically investigates the feasibility and optimality impacts of data poisoning attacks on neural network–based surrogates for direct-current optimal power flow (DC-OPF). Addressing a critical gap, it evaluates three representative surrogate architectures—penalty-based, post-hoc correction, and direct mapping—within a unified analytical framework comprising poison sample generation, constraint feasibility verification, and robustness quantification. Results demonstrate that the post-hoc correction method exhibits the strongest robustness in preserving solution feasibility under attack; the direct mapping approach incurs the smallest optimality loss; and the penalty-based method performs worst overall. The analysis uncovers distinct vulnerability mechanisms across architectures, revealing how structural design choices govern susceptibility to data poisoning. These findings provide both theoretical insights and empirical evidence to guide the development of robust, security-aware machine learning–enabled OPF solvers for modern power systems.
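The analytical pipeline described above (poison sample generation, constraint feasibility verification, robustness quantification) could be sketched roughly as follows. This is an illustrative toy, not the paper's actual method: the additive-noise poisoning model, the function names, and the three-generator setup are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def poison_samples(loads, frac=0.1, eps=0.05, rng=rng):
    """Illustrative additive-noise poisoning: perturb a fraction
    `frac` of the training load vectors by up to `eps` (relative)."""
    poisoned = loads.copy()
    idx = rng.choice(len(loads), size=int(frac * len(loads)), replace=False)
    poisoned[idx] *= 1.0 + rng.uniform(-eps, eps, size=poisoned[idx].shape)
    return poisoned, idx

def feasibility_violation(dispatch, load, p_min, p_max):
    """Total constraint violation of a predicted DC-OPF dispatch:
    generator limit violations plus the system power-balance mismatch."""
    limit_viol = np.maximum(dispatch - p_max, 0) + np.maximum(p_min - dispatch, 0)
    balance_viol = abs(dispatch.sum() - load.sum())
    return limit_viol.sum() + balance_viol

# Toy three-generator system (hypothetical numbers).
p_min, p_max = np.zeros(3), np.array([1.0, 1.0, 0.5])
loads = rng.uniform(0.2, 0.6, size=(100, 3))
poisoned, idx = poison_samples(loads)

# Robustness quantification: violation incurred by a (pretend) proxy output.
dispatch = np.array([0.6, 0.6, 0.3])
violation = feasibility_violation(dispatch, loads[0], p_min, p_max)
```

In this framing, a surrogate trained on `poisoned` instead of `loads` would be scored by how much `feasibility_violation` grows on held-out inputs; the paper's three architectures differ in how they keep that quantity small.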

📝 Abstract
The increased integration of clean yet stochastic energy resources and the growing number of extreme weather events are narrowing the decision-making window of power grid operators. This time constraint is fueling a plethora of research on machine learning (ML)-based optimization proxies. While finding a fast solution is appealing, the inherent vulnerabilities of learning-based methods are hindering their adoption. One such vulnerability is the data poisoning attack, which adds perturbations to ML training data, leading to incorrect decisions. The impact of poisoning attacks on learning-based power system optimizers has not been thoroughly studied, which creates a critical vulnerability. In this paper, we examine the impact of data poisoning attacks on ML-based optimization proxies used to solve the DC Optimal Power Flow problem. Specifically, we compare the resilience of three different methods (a penalty-based method, a post-repair approach, and a direct mapping approach) against the adverse effects of poisoning attacks, using the optimality and feasibility of these proxies as performance metrics. The insights of this work establish a foundation for enhancing the resilience of neural power system optimizers.
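The two performance metrics named in the abstract, optimality and feasibility, could be quantified as in the sketch below. The exact definitions are assumptions on my part (the paper may normalize or aggregate differently); they are the conventional relative optimality gap and a tolerance-based feasibility rate.

```python
import numpy as np

def optimality_gap(cost_pred, cost_opt):
    """Relative optimality loss of a proxy's solution versus the
    true DC-OPF optimum (illustrative definition)."""
    return (cost_pred - cost_opt) / cost_opt

def feasibility_rate(violations, tol=1e-6):
    """Fraction of proxy solutions whose total constraint
    violation falls within a small tolerance."""
    return float(np.mean(np.asarray(violations) <= tol))
```

Under these definitions, the paper's headline findings would read as: the post-repair approach maximizes `feasibility_rate` under attack, while the direct mapping approach minimizes `optimality_gap`.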
Problem

Research questions and friction points this paper is trying to address.

Data poisoning attacks on ML optimizers
Impact on DC Optimal Power Flow
Resilience comparison of three methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data poisoning attack analysis
Resilience comparison methods
Neural optimizer performance metrics