Federated Learning for Anomaly Detection in Energy Consumption Data: Assessing the Vulnerability to Adversarial Attacks

📅 2025-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work has overlooked the adversarial robustness of federated learning (FL) models for anomaly detection in energy consumption time series. Method: This study systematically evaluates the vulnerability of LSTM and Transformer architectures under FL to white-box adversarial attacks—specifically FGSM and PGD—using real-world energy data. Contribution/Results: We provide the first empirical evidence in the energy domain that FL frameworks significantly exacerbate model susceptibility to iterative PGD attacks, causing an average accuracy drop exceeding 10% and yielding lower overall robustness than centralized learning. This reveals critical security vulnerabilities in FL-based anomaly detection when deployed in practical energy systems. The findings underscore an urgent need for attack-specific defense mechanisms and deliver essential empirical insights and strategic guidance for building trustworthy, privacy-preserving AI in intelligent energy systems.

📝 Abstract
Anomaly detection is crucial in the energy sector for identifying irregular patterns that indicate equipment failures, energy theft, or other issues. Machine learning techniques for anomaly detection have achieved great success, but they are typically centralized, requiring local data to be shared with a central server, which raises privacy and security concerns. Federated Learning (FL) has been gaining popularity because it enables distributed learning without sharing local data. However, FL depends on neural networks, which are vulnerable to adversarial attacks that manipulate inputs and lead models to make erroneous predictions. While adversarial attacks have been explored extensively in the image domain, they remain largely unexplored for time series problems, especially in the energy domain. Moreover, the effect of adversarial attacks in the FL setting is also mostly unknown. This paper assesses the vulnerability of FL-based anomaly detection on energy data to adversarial attacks. Specifically, two state-of-the-art models, Long Short-Term Memory (LSTM) and Transformers, are used to detect anomalies in an FL setting, and two white-box attack methods, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), are employed to perturb the data. The results show that FL is more sensitive to PGD attacks than to FGSM attacks, which is attributed to PGD's iterative nature: accuracy drops by more than 10% even under naive, weaker attacks. Moreover, FL is more affected by these attacks than centralized learning, highlighting the need for defense mechanisms in FL.
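The two white-box attacks differ mainly in how they use the loss gradient: FGSM takes a single step along the gradient's sign, while PGD iterates smaller steps and projects each result back into an ε-ball around the original input, which is what makes it stronger in the paper's experiments. Below is a minimal NumPy sketch of both perturbations; the toy squared-error "model", the weight vector `w`, and the ε/step-size values are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def fgsm(x, grad, eps):
    """One-step FGSM: move the input by eps along the sign of the loss gradient."""
    return x + eps * np.sign(grad)

def pgd(x, grad_fn, eps, alpha, steps):
    """Iterative PGD: repeated signed-gradient steps, each projected back
    into the L-infinity ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

# Toy differentiable surrogate: loss(x) = 0.5 * (w @ x - y)^2 on one window
w = np.array([0.5, -1.0, 2.0])  # illustrative model weights (assumption)
y = 1.0                          # target for the clean window

def loss(x):
    return 0.5 * (w @ x - y) ** 2

def grad_fn(x):
    return (w @ x - y) * w  # analytic gradient of the loss w.r.t. the input

x = np.array([1.0, 0.2, -0.3])          # a clean "window" of readings
x_fgsm = fgsm(x, grad_fn(x), eps=0.1)   # one-shot attack
x_pgd = pgd(x, grad_fn, eps=0.1, alpha=0.03, steps=10)  # iterative attack
```

Both attacks increase the surrogate loss, and the PGD output stays within the ε-ball by construction, mirroring the constraint that makes its perturbations harder to detect in practice.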
Problem

Research questions and friction points this paper is trying to address.

Assesses FL vulnerability to adversarial attacks.
Explores adversarial effects on energy data anomalies.
Compares FL and centralized learning attack impacts.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning for anomaly detection
LSTM and Transformers in FL
Evaluating PGD and FGSM attack impacts
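The FL setting evaluated here has clients train LSTM or Transformer detectors locally and share only model updates with a server, which aggregates them. A common aggregation rule is a FedAvg-style weighted average of client parameters; the sketch below shows that idea in isolation (the client weight vectors and sample counts are hypothetical, and the paper does not specify its aggregation rule here).

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAvg-style aggregation: average client parameter vectors,
    weighted by each client's local dataset size."""
    total = float(sum(client_sizes))
    return sum((n / total) * p for p, n in zip(client_params, client_sizes))

# Hypothetical flattened parameter vectors from two clients
params_a = np.array([1.0, 1.0])   # client A, 100 local samples
params_b = np.array([3.0, 3.0])   # client B, 300 local samples
global_params = fedavg([params_a, params_b], [100, 300])
```

Because every client's update feeds into this average, a perturbation that degrades even a few local models propagates into the shared global model, which is one intuition for why the paper finds FL more attack-sensitive than centralized training.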