🤖 AI Summary
To address Byzantine client-initiated poisoning attacks in federated learning, this paper proposes a two-tier robust defense framework. The first tier employs Symbolic Aggregate approXimation (SAX) combined with spectral clustering to detect anomalous clients with high accuracy. The second tier introduces a frequency-domain robust aggregation mechanism based on the Fast Fourier Transform (FFT), suppressing the impact of any malicious model updates that slip past detection. By enhancing the separability between benign and malicious model updates, the framework significantly improves system resilience against adversarial attacks. Extensive evaluations under five representative poisoning attack scenarios demonstrate that the approach outperforms existing state-of-the-art methods in both anomaly detection accuracy and final model performance. Notably, it maintains stable, high robustness even under severe adversarial conditions—e.g., with up to 50% malicious clients—making it practically viable for real-world federated learning deployments.
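The first tier described above—SAX symbolization of client updates followed by spectral clustering—can be sketched as below. This is a minimal illustration under assumptions not stated in the summary: each client update is treated as a flat 1-D vector, SAX distances are compared symbol-wise, and the smaller of two spectral clusters is flagged as suspicious (a hypothetical heuristic; the paper's actual detection rule may differ).

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def sax(series, n_segments=8, alphabet_size=4):
    """Symbolic Aggregate approXimation of a 1-D series: z-normalize,
    reduce with Piecewise Aggregate Approximation (PAA), then discretize
    using Gaussian breakpoints into integer symbols."""
    s = (series - series.mean()) / (series.std() + 1e-8)
    usable = n_segments * (len(s) // n_segments)
    paa = s[:usable].reshape(n_segments, -1).mean(axis=1)
    breakpoints = {3: [-0.43, 0.43],
                   4: [-0.67, 0.0, 0.67],
                   5: [-0.84, -0.25, 0.25, 0.84]}[alphabet_size]
    return np.searchsorted(breakpoints, paa)  # symbols in 0..alphabet_size-1

def detect_byzantine(updates, n_segments=8):
    """Cluster SAX words of client updates into two groups via spectral
    clustering; flag the minority cluster as suspected Byzantine clients."""
    words = np.array([sax(u, n_segments) for u in updates])
    # Symbol-wise (Hamming-style) distance between SAX words
    dist = np.array([[np.sum(a != b) for b in words] for a in words], dtype=float)
    affinity = np.exp(-dist / (dist.max() + 1e-8))  # distance -> similarity
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    minority = np.argmin(np.bincount(labels))
    return labels == minority  # boolean mask over clients
```

For example, six benign updates following one trend and two inverted (poisoned) updates separate cleanly, because inversion flips the SAX symbol sequence and the symbolic distance amplifies that difference.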
📝 Abstract
Federated Learning (FL) has become a powerful technique for training Machine Learning (ML) models in a decentralized manner, preserving the privacy of the training datasets involved. However, the decentralized nature of FL limits the visibility of the training process, which relies heavily on the honesty of participating clients. This assumption opens the door to malicious third parties, known as Byzantine clients, who can poison the training process by submitting false model updates—manipulating either the training data or the model parameters to induce misclassification. In response, this study introduces FLAegis, a two-stage defensive framework designed to identify Byzantine clients and improve the robustness of FL systems. Our approach leverages Symbolic Aggregate approXimation (SAX), a symbolic time-series transformation, to amplify the differences between benign and malicious models, and spectral clustering, which enables accurate detection of adversarial behavior. Furthermore, we incorporate a robust FFT-based aggregation function as a final layer to mitigate the impact of those Byzantine clients that manage to evade the prior defenses. We rigorously evaluate our method against five poisoning attacks, ranging from simple label flipping to adaptive optimization-based strategies. Notably, our approach outperforms state-of-the-art defenses in both detection precision and final model accuracy, maintaining consistently high performance even under strong adversarial conditions.
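The FFT-based aggregation layer can be read as a frequency-domain robust statistic. A minimal sketch, assuming the server takes an element-wise median of the clients' update spectra (real and imaginary parts separately) before inverting back to parameter space—the paper's exact aggregation rule is not specified in this abstract, so the median choice here is an illustrative assumption:

```python
import numpy as np

def fft_median_aggregate(updates):
    """Hypothetical frequency-domain robust aggregation: FFT each flattened
    client update, take the element-wise median of the spectra across clients
    (real and imaginary parts separately), then inverse-FFT the result."""
    spectra = np.fft.rfft(np.stack(updates), axis=1)   # one spectrum per client
    robust = (np.median(spectra.real, axis=0)
              + 1j * np.median(spectra.imag, axis=0))  # outliers cannot shift a median
    return np.fft.irfft(robust, n=len(updates[0]))     # back to parameter space
```

As long as benign clients form a majority, a few extreme updates cannot move the per-frequency median, so the aggregate stays close to the benign consensus even when some Byzantine clients evade detection.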