🤖 AI Summary
This work addresses a research gap in model poisoning attacks against decentralized federated learning (DFL). We propose DMPA, the first differential-driven poisoning attack framework designed for coordinated multi-malicious-client scenarios in DFL. Unlike centralized FL, where poisoning typically targets a single server, DMPA models the gradient/parameter discrepancies among malicious clients, enabling collaborative optimization and gradient alignment. It enforces L2-norm constraints and iterative objective optimization to achieve precise, stealthy poisoning. Evaluated on multiple benchmark datasets, DMPA reduces global model accuracy by an average of 32.7%, significantly outperforming existing state-of-the-art methods. Our results expose a previously unrecognized security vulnerability in DFL: its susceptibility to coordinated adversarial manipulation in the absence of a central orchestrating server.
📝 Abstract
Federated learning (FL) has garnered significant attention as a prominent privacy-preserving machine learning (ML) paradigm. Decentralized FL (DFL) eschews traditional FL's centralized server architecture, enhancing the system's robustness and scalability. However, these advantages also create new opportunities for malicious participants to execute adversarial attacks, especially model poisoning attacks. In a model poisoning attack, malicious participants aim to degrade the performance of benign models by crafting and disseminating compromised models. Existing research on model poisoning attacks has predominantly concentrated on undermining global models within the Centralized FL (CFL) paradigm, whereas such attacks in DFL remain largely unexplored. To fill this gap, this paper proposes an innovative model poisoning attack called DMPA. The attack derives an effective poisoning strategy from the differential characteristics of multiple malicious clients' models, thereby orchestrating a collusive attack by multiple participants. The effectiveness of this attack is validated across multiple datasets, with results indicating that DMPA consistently surpasses existing state-of-the-art FL model poisoning attack strategies.
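The core idea described above, extracting a differential characteristic from several colluding clients' parameters and crafting a shared, L2-constrained poisoned update, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual algorithm: the function name, the use of the principal component of inter-client parameter differences as the attack direction, and the `epsilon` budget are all illustrative assumptions.

```python
import numpy as np

def craft_poisoned_update(client_params, epsilon=0.5):
    """Sketch of a differential-driven collusive poisoning step (hypothetical).

    client_params : list of 1-D arrays, one local parameter vector per
                    malicious client.
    epsilon       : L2-norm budget on the deviation from the benign-looking
                    mean, keeping the poisoned update hard to detect.
    """
    stacked = np.stack(client_params)   # shape: (num_malicious, dim)
    mean = stacked.mean(axis=0)         # benign-looking reference point
    diffs = stacked - mean              # per-client differential characteristics
    # Align the colluders on one attack direction: the principal component
    # of their parameter differences (an illustrative choice).
    _, s, vt = np.linalg.svd(diffs, full_matrices=False)
    if s[0] < 1e-12:                    # identical clients: no usable differential
        return mean
    direction = vt[0]                   # unit-norm principal deviation
    # Push the shared update away from the consensus, capped by the L2 budget.
    return mean - epsilon * direction

# Usage: three colluding clients pool their local parameter vectors.
params = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.8, 2.2])]
poisoned = craft_poisoned_update(params, epsilon=0.5)
# The deviation from the honest mean stays within the L2 budget.
print(round(float(np.linalg.norm(poisoned - np.stack(params).mean(axis=0))), 6))
```

In a real DFL attack each malicious client would then broadcast this shared update to its neighbors; the L2 cap is what keeps the poison below typical norm-based anomaly thresholds.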