Model Poisoning Attacks to Federated Learning via Multi-Round Consistency

📅 2024-04-24
🏛️ Computer Vision and Pattern Recognition
📈 Citations: 12
Influential: 0
🤖 AI Summary
Existing model poisoning attacks on federated learning suffer from two key limitations: low attack efficacy when defenses are deployed, and reliance on knowledge of genuine clients' model updates or local data. This paper proposes the first mechanism that enforces cross-round consistency among the malicious clients' model updates, operating in a fully black-box setting without any priors on client models or data, thereby improving both stealth and destructive power. Methodologically, it models the malicious update sequence across rounds with consistency-constrained regularization to overcome the self-cancellation bottleneck inherent in single-round attacks, and designs a dynamic strategy for setting the malicious updates' magnitudes. Evaluated on five benchmark datasets, the attack breaks eight state-of-the-art defense mechanisms, significantly outperforms seven existing poisoning attacks, and can be rapidly adapted to break newly tailored defenses.

📝 Abstract
Model poisoning attacks are critical security threats to Federated Learning (FL). Existing model poisoning attacks suffer from two key limitations: 1) they achieve suboptimal effectiveness when defenses are deployed, and/or 2) they require knowledge of the model updates or local training data on genuine clients. In this work, we make a key observation that their suboptimal effectiveness arises from only leveraging model-update consistency among malicious clients within individual training rounds, making the attack effect self-cancel across training rounds. In light of this observation, we propose PoisonedFL, which enforces multi-round consistency among the malicious clients' model updates while not requiring any knowledge about the genuine clients. Our empirical evaluation on five benchmark datasets shows that PoisonedFL breaks eight state-of-the-art defenses and outperforms seven existing model poisoning attacks. Moreover, we also explore new defenses that are tailored to PoisonedFL, but our results show that we can still adapt PoisonedFL to break them. Our study shows that FL systems are considerably less robust than previously thought, underlining the urgency for the development of new defense mechanisms.
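The core idea in the abstract can be illustrated with a toy sketch. Below is a minimal, hypothetical model of the multi-round consistency mechanism (not the paper's actual algorithm): the malicious clients fix a single random sign vector once and reuse it every round, so their perturbations push the global model in one consistent direction instead of cancelling across rounds. The names `malicious_update` and the magnitude schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10  # toy model dimension

# Assumption for illustration: the attacker fixes one random sign
# vector at the start and reuses it in every round, so malicious
# contributions accumulate in a consistent direction.
sign = rng.choice([-1.0, 1.0], size=dim)

def malicious_update(magnitude):
    # Every malicious client submits the same signed update;
    # only the per-round magnitude varies (chosen by the attacker).
    return sign * magnitude

model = np.zeros(dim)
for rnd in range(100):
    # Toy server that simply adds the (averaged) malicious update.
    model += malicious_update(magnitude=0.1)

# With a fixed sign vector, the drift grows linearly in the number
# of rounds: here roughly 100 rounds * 0.1 * 10 coordinates ≈ 100.
drift = np.abs(model).sum()
print(drift)
```

The point of the sketch is that each coordinate of the model moves monotonically in the attacker's chosen direction, which is exactly what single-round attacks with fresh per-round directions fail to achieve.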
Problem

Research questions and friction points this paper is trying to address.

Overcoming limitations of existing model poisoning attacks in Federated Learning
Enforcing multi-round consistency among malicious clients without genuine client knowledge
Breaking state-of-the-art defenses and demonstrating FL system vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-round consistency enforcement among malicious clients
No genuine client knowledge required for attack
Breaks eight state-of-the-art defense mechanisms
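To see why multi-round consistency matters, a small numerical comparison (a sketch under simplified assumptions, not the paper's evaluation) contrasts fresh random attack directions each round, whose effects self-cancel like a random walk and grow only on the order of the square root of the number of rounds, with one fixed sign vector, whose effect grows linearly:

```python
import numpy as np

rng = np.random.default_rng(42)
dim, rounds = 1000, 400

# Single-round attack: a fresh random sign vector each round.
# Directions self-cancel, so net drift grows like sqrt(rounds).
random_drift = np.zeros(dim)
for _ in range(rounds):
    random_drift += rng.choice([-1.0, 1.0], size=dim)

# Multi-round-consistent attack: one fixed sign vector reused every
# round, so net drift grows linearly with the number of rounds.
fixed_sign = rng.choice([-1.0, 1.0], size=dim)
consistent_drift = fixed_sign * rounds

print(np.linalg.norm(random_drift))      # roughly sqrt(rounds * dim)
print(np.linalg.norm(consistent_drift))  # exactly rounds * sqrt(dim)
```

With these toy numbers the consistent attack's drift is over an order of magnitude larger, matching the paper's observation that per-round-only consistency makes the attack effect self-cancel across rounds.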