DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences

📅 2025-02-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the research gap in model poisoning attacks against decentralized federated learning (DFL). We propose DMPA—the first differential-driven poisoning attack framework designed for coordinated multi-malicious-client scenarios in DFL. Unlike centralized FL, where poisoning typically targets a single server, DMPA introduces gradient/parameter discrepancy modeling among malicious clients, enabling collaborative optimization and gradient alignment. It enforces L2-norm constraints and iterative objective optimization to achieve precise, stealthy poisoning. Evaluated on multiple benchmark datasets, DMPA reduces global model accuracy by an average of 32.7%, significantly outperforming existing state-of-the-art methods. Our results expose a previously unrecognized security vulnerability in DFL—namely, its susceptibility to coordinated adversarial manipulation in the absence of a central orchestrating server.
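The differential-driven idea described above (model the discrepancy among colluding clients' parameters, then push a shared update in an adversarial direction while an L2-norm budget keeps it stealthy) can be sketched in a toy form. This is an illustrative NumPy sketch under assumed inputs, not the paper's actual algorithm; the function name and the `malicious_params`, `benign_estimate`, and `norm_budget` parameters are hypothetical.

```python
import numpy as np

def craft_poisoned_update(malicious_params, benign_estimate, norm_budget):
    """Toy sketch of a difference-driven, norm-constrained poisoning step.

    malicious_params: list of parameter vectors held by colluding clients
    benign_estimate:  estimated mean of benign clients' parameters
    norm_budget:      L2-norm bound that keeps the poisoned update stealthy
    """
    # Model the discrepancies among the colluding clients' parameters
    # via their consensus (mean) vector.
    stacked = np.stack(malicious_params)            # shape (k, d)
    consensus = stacked.mean(axis=0)

    # Direction separating the colluders' consensus from the benign estimate.
    direction = consensus - benign_estimate
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        return benign_estimate.copy()

    # Push away from the benign estimate, scaled exactly to the L2 budget
    # so the deviation stays within the stealth constraint.
    return benign_estimate - norm_budget * direction / norm
```

The returned update always deviates from the benign estimate by exactly `norm_budget` in L2 norm, which is the stealth constraint the summary attributes to DMPA; the real framework additionally iterates this objective and aligns gradients across the colluding clients.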

📝 Abstract
Federated learning (FL) has garnered significant attention as a prominent privacy-preserving Machine Learning (ML) paradigm. Decentralized FL (DFL) eschews traditional FL's centralized server architecture, enhancing the system's robustness and scalability. However, these advantages also create new openings for malicious participants to mount adversarial attacks, especially model poisoning attacks, in which malicious participants degrade the performance of benign models by crafting and disseminating compromised models. Existing research on model poisoning has concentrated predominantly on undermining global models in the Centralized FL (CFL) paradigm, while DFL remains under-studied. To fill this gap, this paper proposes an innovative model poisoning attack called DMPA. The attack computes the differential characteristics of multiple malicious client models to derive the most effective poisoning strategy, thereby orchestrating a collusive attack by multiple participants. Its effectiveness is validated across multiple datasets, with results indicating that DMPA consistently surpasses existing state-of-the-art FL model poisoning attack strategies.
Problem

Research questions and friction points this paper is trying to address.

Addresses vulnerabilities in Decentralized Federated Learning
Proposes DMPA for effective model poisoning attacks
Validates DMPA’s superiority over existing attack strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized Federated Learning
Model Poisoning Attacks
Differential Characteristics Calculation
👥 Authors
Chao Feng (University of Zurich)
Yunlong Li (Communication Systems Group, Department of Informatics, University of Zürich, Binzmühlestrasse 14, CH-8050 Zürich, Switzerland)
Yuanzhe Gao (Communication Systems Group, Department of Informatics, University of Zürich, Binzmühlestrasse 14, CH-8050 Zürich, Switzerland)
Alberto Huertas Celdrán (Communication Systems Group, Department of Informatics, University of Zürich, Binzmühlestrasse 14, CH-8050 Zürich, Switzerland)
Jan von der Assen (University of Zurich)
Gérôme Bovet (armasuisse, Cyber-Defence Campus)
Burkhard Stiller (Communication Systems Group, Department of Informatics, University of Zürich, Binzmühlestrasse 14, CH-8050 Zürich, Switzerland)