🤖 AI Summary
Automatic Generation Control (AGC) systems in smart grids are vulnerable to False Data Injection Attacks (FDIAs), yet existing deep learning–based detection models lack interpretability, hindering operator trust. Method: This paper proposes a lightweight hybrid detection framework integrating machine learning with large language models (LLMs): LightGBM enables millisecond-scale real-time FDIA detection (95.13% accuracy, 4-ms latency), while GPT-4o mini generates natural-language explanations via few-shot prompting—identifying attack targets (93% accuracy), estimating attack magnitude (0.075-pu mean absolute error), and localizing attack onset time (2.19-s mean absolute error). Contribution/Results: To the best of our knowledge, this is the first framework unifying high-speed detection with actionable, human-understandable explanations for AGC cybersecurity. It significantly enhances detection transparency and human–AI collaborative decision-making, advancing trustworthy AI deployment in power system cyber-physical security.
📝 Abstract
The increasing digitization of smart grids has improved operational efficiency but also introduced new cybersecurity vulnerabilities, such as False Data Injection Attacks (FDIAs) targeting Automatic Generation Control (AGC) systems. While machine learning (ML) and deep learning (DL) models have shown promise in detecting such attacks, their opaque decision-making limits operator trust and real-world applicability. This paper proposes a hybrid framework that integrates lightweight ML-based attack detection with natural language explanations generated by Large Language Models (LLMs). Classifiers such as LightGBM achieve up to 95.13% attack detection accuracy with only 0.004 s inference latency. Upon detecting a cyberattack, the system invokes LLMs, including GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o mini, to generate human-readable explanations of the event. Evaluated on 100 test samples, GPT-4o mini with 20-shot prompting achieved 93% accuracy in identifying the attack target, a mean absolute error (MAE) of 0.075 pu in estimating attack magnitude, and an MAE of 2.19 seconds in estimating attack onset. These results demonstrate that the proposed framework effectively balances real-time detection with interpretable, high-fidelity explanations, addressing a critical need for actionable AI in smart grid cybersecurity.
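The detect-then-explain control flow described in the abstract can be sketched as follows. This is a minimal illustration only: the paper uses a trained LightGBM classifier and calls GPT models through an API, whereas here a hypothetical rule-based stand-in detector (`detect_fdia`) and a few-shot prompt builder (`build_fewshot_prompt`) show how the two stages would hand off. All function names, thresholds, and example values are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the hybrid pipeline: a fast detector gates an
# LLM explanation step. A real system would replace detect_fdia with a
# trained LightGBM model and send the prompt to GPT-4o mini.

def detect_fdia(window, threshold=0.5):
    """Stand-in detector: flag a telemetry window whose mean absolute
    value exceeds a threshold (illustrative only, not LightGBM)."""
    score = sum(abs(x) for x in window) / len(window)
    return score > threshold

def build_fewshot_prompt(examples, window):
    """Assemble a few-shot prompt asking the LLM to name the attack
    target, magnitude (pu), and onset time (s), as in the paper's setup."""
    lines = [
        "You are a power-grid security analyst. For the AGC telemetry",
        "below, identify the attack target, magnitude (pu), and onset (s).",
    ]
    for measurements, label in examples:
        lines.append(f"Measurements: {measurements}")
        lines.append(f"Answer: {label}")
    lines.append(f"Measurements: {window}")
    lines.append("Answer:")
    return "\n".join(lines)

# Example hand-off: only invoke the (costly) LLM when an attack is flagged.
window = [0.02, 0.03, 0.90, 1.10, 1.00]
if detect_fdia(window):
    prompt = build_fewshot_prompt(
        [([0.0, 0.8, 0.9], "target=tie-line flow, magnitude=0.8 pu, onset=1 s")],
        window,
    )
    # `prompt` would then be sent to the LLM via a chat completions request.
```

Gating the LLM call behind the lightweight detector is what keeps the real-time path at millisecond latency: the expensive explanation step runs only on flagged windows.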