Large Language Model-Based Framework for Explainable Cyberattack Detection in Automatic Generation Control Systems

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatic Generation Control (AGC) systems in smart grids are vulnerable to False Data Injection Attacks (FDIAs), yet existing deep learning–based detection models lack interpretability, hindering operator trust. Method: This paper proposes a lightweight hybrid detection framework integrating machine learning with large language models (LLMs): LightGBM enables millisecond-scale real-time FDIA detection (95.13% accuracy, 4-ms latency), while GPT-4o mini generates natural-language explanations via few-shot prompting—identifying attack targets (93% accuracy), estimating attack magnitude (0.075 pu error), and localizing attack onset time (2.19-s error). Contribution/Results: To the best of our knowledge, this is the first framework unifying high-speed detection with actionable, human-understandable explanations for AGC cybersecurity. It significantly enhances detection transparency and human–AI collaborative decision-making, advancing trustworthy AI deployment in power system cyber-physical security.

📝 Abstract
The increasing digitization of smart grids has improved operational efficiency but also introduced new cybersecurity vulnerabilities, such as False Data Injection Attacks (FDIAs) targeting Automatic Generation Control (AGC) systems. While machine learning (ML) and deep learning (DL) models have shown promise in detecting such attacks, their opaque decision-making limits operator trust and real-world applicability. This paper proposes a hybrid framework that integrates lightweight ML-based attack detection with natural language explanations generated by Large Language Models (LLMs). Classifiers such as LightGBM achieve up to 95.13% attack detection accuracy with only 0.004 s inference latency. Upon detecting a cyberattack, the system invokes LLMs, including GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o mini, to generate human-readable explanations of the event. Evaluated on 100 test samples, GPT-4o mini with 20-shot prompting achieved 93% accuracy in identifying the attack target, a mean absolute error (MAE) of 0.075 pu in estimating attack magnitude, and an MAE of 2.19 seconds in estimating attack onset time. These results demonstrate that the proposed framework effectively balances real-time detection with interpretable, high-fidelity explanations, addressing a critical need for actionable AI in smart grid cybersecurity.
Problem

Research questions and friction points this paper is trying to address.

Detect cyberattacks in smart grid AGC systems
Improve trust via explainable AI with LLMs
Balance real-time detection and interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid framework with ML and LLMs
LightGBM for fast attack detection
GPT models for explainable AI
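The explanation stage relies on few-shot prompting of a GPT model. A minimal sketch of how such a prompt might be assembled is shown below; the exemplar fields, wording, and helper name `build_prompt` are assumptions for illustration, not the paper's exact prompt, and no API call is made.

```python
# Hypothetical exemplars pairing a flagged AGC window with the three
# quantities the framework asks the LLM to explain: attack target,
# injected magnitude (pu), and onset time (s).
EXEMPLARS = [
    {"signal": "ACE step of +0.12 pu from t=30 s on tie-line power reading",
     "target": "tie-line power measurement",
     "magnitude_pu": 0.12, "onset_s": 30.0},
    {"signal": "frequency bias ramp from t=45 s on area-1 frequency reading",
     "target": "area-1 frequency measurement",
     "magnitude_pu": 0.08, "onset_s": 45.0},
]

def build_prompt(detected_window: str, shots=EXEMPLARS) -> str:
    """Assemble a few-shot prompt for the LLM explanation stage."""
    lines = [
        "You are a power-system security analyst. For the flagged AGC",
        "window below, identify the attacked signal, estimate the injected",
        "magnitude (pu), and estimate the attack onset time (s).", "",
    ]
    for ex in shots:
        lines += [
            f"Window: {ex['signal']}",
            f"Answer: target={ex['target']}, "
            f"magnitude={ex['magnitude_pu']} pu, onset={ex['onset_s']} s",
            "",
        ]
    lines += [f"Window: {detected_window}", "Answer:"]
    return "\n".join(lines)

prompt = build_prompt("ACE step of +0.09 pu from t=12 s on area-2 frequency")
print(prompt)
```

The trailing `Answer:` line cues the model to complete the structured explanation; the paper's reported results (93% target accuracy, 0.075 pu and 2.19 s MAE) come from 20-shot prompting of GPT-4o mini rather than the two illustrative shots here.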
Muhammad Sharshar
Department of Computer Science, College of Computing and Mathematical Sciences, Khalifa University, Abu Dhabi, UAE
Ahmad Mohammad Saber
University of Toronto
Smart Grids · Machine Learning · Cyber-Physical Security · Microgrids · Renewable Energy
Davor Svetinovic
Computer Science, Khalifa University
Blockchain · Cybersecurity · Software
Amr M. Youssef
Concordia Institute for Information Systems Engineering (CIISE), Concordia University, Montreal, QC, Canada
Deepa Kundur
Canada Research Chair in Cybersecurity of Intelligent Critical Infrastructure, University of Toronto
Cyber-Physical Security · Smart Grid · Smart Grid Security · Mental Health Informatics · Multimedia
Ehab F. El-Saadany
Department of Electrical Engineering, College of Engineering and Physical Sciences, Khalifa University, Abu Dhabi, UAE