Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing poisoning attacks in federated learning often cause uncontrolled denial-of-service effects, with no precise or stealthy control over how far global model performance degrades. This paper proposes FedSA, the first poisoning attack framework built on Sliding Mode Control (SMC) theory, which supports preset objectives (e.g., an exact 10% reduction in global accuracy), asymptotic convergence, and rigorous error-bound guarantees. With only 5% malicious clients, FedSA achieves fine-grained control of the degradation (mean absolute error < 0.3%) on benchmark datasets such as CIFAR-10 while remaining highly stealthy against mainstream detection mechanisms. The core innovation lies in unifying robust nonlinear control principles with a perturbation model of client updates, yielding the first closed-loop poisoning attack that is configurable, verifiable, and stealthy.
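As a rough discrete-time sketch of the SMC framing (the notation below is ours, not taken from the paper): the attacker can define a sliding surface over the gap between the current global accuracy and the preset target, then apply a reaching law that contracts that gap each round:

```latex
s_t = A(\theta_t) - A^{*}, \qquad
s_{t+1} = s_t - k \,\operatorname{sat}\!\left(\frac{s_t}{\varepsilon}\right),
\qquad 0 < k \le \varepsilon
```

Under such a law the trajectory enters and then stays inside the quasi-sliding band |s_t| ≤ ε, which is the kind of explicit steady-state error bound that a figure like "mean absolute error < 0.3%" would correspond to.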

📝 Abstract
Manipulation of local training data and local updates, i.e., the poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Most existing poisoning attacks aim to manipulate local data/models in a way that causes denial-of-service (DoS) issues. In this paper, we introduce a novel attack method, the Federated Learning Sliding Attack (FedSA) scheme, which aims to introduce a precise extent of poisoning in a subtle, controlled manner. It operates with a predefined objective, such as reducing the global model's prediction accuracy by 10%. FedSA integrates the robust nonlinear control technique of Sliding Mode Control (SMC) with model poisoning attacks. It manipulates the updates from malicious clients to drive the global model towards a compromised state at a controlled and inconspicuous rate. Additionally, the robust control properties of FedSA allow precise control over the convergence bounds, enabling the attacker to set the global accuracy of the poisoned model to any desired level. Experimental results demonstrate that FedSA can accurately achieve a predefined global accuracy with fewer malicious clients while maintaining a high level of stealth and adjustable learning rates.
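To make the closed loop concrete, here is a minimal Python sketch under our own assumptions (scalar accuracy feedback, a saturated sign term, toy dynamics); the function name and control law are illustrative, not the paper's actual algorithm:

```python
import numpy as np

def smc_poisoned_update(benign_update, global_acc, target_acc, k=0.5, eps=0.01):
    """Scale a malicious client's update with a sliding-mode control law.

    benign_update : np.ndarray, the honest update the client would send
    global_acc    : float, observed accuracy of the current global model
    target_acc    : float, attacker's preset accuracy for the poisoned model
    k             : float, control gain (attack rate)
    eps           : float, boundary-layer width; saturating sign(s) inside
                    the layer suppresses chattering around the target
    """
    s = global_acc - target_acc           # sliding surface s = A_t - A*
    sat = np.clip(s / eps, -1.0, 1.0)     # smoothed sign(s)
    # Amplify the update while accuracy sits above the target; back off as
    # the trajectory enters the boundary layer |s| <= eps.
    return benign_update * (1.0 + k * sat)

if __name__ == "__main__":
    # Toy closed loop: pretend each round's accuracy drops in proportion to
    # the control term, just to show the quasi-sliding behaviour.
    acc, target, k, eps = 0.90, 0.80, 0.3, 0.01
    for rnd in range(60):
        s = acc - target
        acc -= k * eps * np.clip(s / eps, -1.0, 1.0)
    print(f"after {rnd + 1} rounds: accuracy {acc:.4f} (target {target})")
```

The saturation (boundary layer) is a standard SMC device against chattering; here it is what would hold the steady-state accuracy error inside a fixed band rather than letting it oscillate past the target.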
Problem

Research questions and friction points this paper is trying to address.

Precisely control the extent of poisoning in federated learning
Achieve a predefined reduction in global model accuracy
Stealthily manipulate client updates to compromise the global model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sliding Mode Control for precise poisoning
Controlled manipulation of model updates
Adjustable convergence bounds on global accuracy