Mitigating Backdoor Attacks in Federated Learning Using PPA and MiniMax Game Theory

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the threat of backdoor attacks launched by malicious clients that compromise the integrity of global models in federated learning. To counter this, the authors propose FedBBA, a novel framework that uniquely integrates Projection Pursuit Analysis (PPA) with minimax game theory to establish a behavior-aware, dynamic defense mechanism. By synergistically combining reputation evaluation and incentive mechanisms, FedBBA enables real-time detection and suppression of malicious client influence. Experimental results on the GTSRB and BTSC datasets demonstrate that FedBBA reduces backdoor attack success rates to 1.1%–11%, substantially outperforming existing defenses such as RDFL and RoPE, while maintaining high main-task accuracy of 95%–98%. This approach thus achieves a robust balance between security and utility in federated learning systems.
📝 Abstract
Federated Learning (FL) is witnessing wider adoption due to its ability to benefit from large amounts of scattered data while preserving privacy. However, despite its advantages, federated learning suffers from several setbacks that directly impact the accuracy and integrity of the global model it produces. One of these setbacks is the presence of malicious clients who actively try to harm the global model by injecting backdoor data into their local models while trying to evade detection. The objective of such clients is to trick the global model into making false predictions during inference, thereby compromising the integrity and trustworthiness of the global model on which honest stakeholders rely. To mitigate such malicious behavior, we propose FedBBA (Federated Backdoor and Behavior Analysis). The proposed model aims to dampen the effect of such clients on the final accuracy, creating more resilient federated learning environments. We engineer our approach through the combination of (1) a reputation system to evaluate and track client behavior, (2) an incentive mechanism to reward honest participation and penalize malicious behavior, and (3) game-theoretic models with projection pursuit analysis (PPA) to dynamically identify and minimize the impact of malicious clients on the global model. Extensive simulations on the German Traffic Sign Recognition Benchmark (GTSRB) and Belgium Traffic Sign Classification (BTSC) datasets demonstrate that FedBBA reduces the backdoor attack success rate to approximately 1.1%--11% across various attack scenarios, significantly outperforming state-of-the-art defenses like RDFL and RoPE, which yielded attack success rates between 23% and 76%, while maintaining high normal-task accuracy (~95%--98%).
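The abstract does not spell out FedBBA's full pipeline (reputation tracking, incentives, and minimax weighting). As a rough illustration only, the sketch below shows one way a PPA-style step could flag anomalous client updates before aggregation: the variance projection index, the MAD-based threshold `k`, and the plain FedAvg over surviving clients are all assumptions made here for illustration, not the authors' actual method.

```python
import numpy as np

def ppa_filter(updates, k=2.5):
    """Score client update vectors along the direction of maximum
    variance (a simple projection-pursuit index) and flag outliers
    with a robust median/MAD rule.

    Returns (honest_mask, scores): honest_mask[i] is True when
    client i's update looks benign under this heuristic.
    """
    X = np.asarray(updates, dtype=float)
    Xc = X - X.mean(axis=0)
    # First right singular vector = projection direction of maximum
    # variance; backdoored updates tend to stand out along it.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]
    med = np.median(proj)
    mad = np.median(np.abs(proj - med)) + 1e-12  # avoid division by zero
    scores = np.abs(proj - med) / mad
    return scores < k, scores

def aggregate(updates, honest_mask):
    """Plain FedAvg over the clients kept by the filter."""
    X = np.asarray(updates, dtype=float)
    return X[honest_mask].mean(axis=0)
```

In a full defense along the paper's lines, the per-round scores would feed a reputation system (repeated offenders lose aggregation weight) rather than a hard one-shot cutoff, and the threshold would be set adversarially via the minimax game rather than fixed.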
Problem

Research questions and friction points this paper is trying to address.

Backdoor Attacks
Federated Learning
Malicious Clients
Model Integrity
Privacy-Preserving Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Backdoor Attack Defense
Projection Pursuit Analysis
MiniMax Game Theory
Reputation System