Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning

📅 2025-01-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In decentralized federated learning (DFL), malicious clients can inject poisoned gradients that compromise model integrity and convergence. To address this, we propose Gradient Purification Defense (GPD), a novel defense mechanism that pioneers a “purify-not-discard” paradigm for gradient processing. Unlike existing approaches that rely on global rejection or aggregation restarts, GPD maintains per-neighbor gradient history, performs fine-grained historical consistency checks, and conducts dynamic trustworthiness assessment—enabling precise identification and mitigation of poisoned gradients while preserving valuable information in benign updates. Theoretically, we prove that GPD preserves algorithmic convergence under standard assumptions. Empirically, GPD demonstrates strong robustness against diverse poisoning attacks—including label-flipping, feature-space, and model-replacement attacks—under both IID and Non-IID data distributions. It consistently outperforms state-of-the-art defenses in classification accuracy, achieving up to 12.3% absolute improvement in challenging Non-IID settings.

📝 Abstract
Decentralized federated learning (DFL) is inherently vulnerable to poisoning attacks, as malicious clients can transmit manipulated model gradients to neighboring clients. Existing defense methods either reject suspicious gradients per iteration or restart DFL aggregation after detecting all malicious clients, overlooking the potential accuracy benefit in the discarded malicious gradients. In this paper, we propose a novel gradient purification defense, named GPD, that integrates seamlessly with existing DFL aggregation to defend against poisoning attacks. It aims to mitigate the harm in model gradients while retaining their benefit in model weights, thereby enhancing accuracy. In GPD, each benign client maintains a recording variable per neighbor that tracks the gradients historically aggregated from that neighbor. This allows benign clients to precisely detect malicious neighbors and swiftly mitigate the aggregated malicious gradients via historical consistency checks. After mitigation, GPD optimizes model weights by aggregating gradients solely from benign clients. This retains the previously beneficial portions contributed by malicious clients and exploits the contributions from benign clients, significantly enhancing model accuracy. We analyze the convergence of GPD, as well as its ability to attain high accuracy. Extensive experiments over three datasets demonstrate that GPD is capable of mitigating poisoning attacks under both IID and non-IID data distributions, and that it significantly outperforms state-of-the-art defenses in terms of accuracy against various poisoning attacks.
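The abstract's "purify-not-discard" mechanism can be sketched as follows. This is an illustrative reconstruction from the abstract alone, not the paper's implementation: the class name, the cosine-style consistency criterion, the threshold value, and the mitigation-by-subtraction step are all assumptions.

```python
import numpy as np

class GPDClient:
    """Hypothetical sketch of one benign client's gradient-purification step.

    Per the abstract: a recording variable per neighbor tracks historically
    aggregated gradients; a historical consistency check flags malicious
    neighbors; flagged contributions are mitigated rather than the whole
    aggregation being restarted. All concrete details below are assumed.
    """

    def __init__(self, neighbor_ids, dim, threshold=1.5):
        self.threshold = threshold
        # Recording variable: running sum of gradients aggregated from each neighbor.
        self.history = {n: np.zeros(dim) for n in neighbor_ids}
        self.trusted = {n: True for n in neighbor_ids}

    def consistency_score(self, neighbor, grad):
        """Deviation of the new gradient from the neighbor's historical
        direction (a simplified, assumed criterion based on cosine similarity)."""
        hist = self.history[neighbor]
        if np.linalg.norm(hist) == 0.0:
            return 0.0  # no history yet, nothing to check against
        cos = np.dot(hist, grad) / (
            np.linalg.norm(hist) * np.linalg.norm(grad) + 1e-12
        )
        return 1.0 - cos  # large when the update flips direction

    def purify_and_aggregate(self, incoming):
        """incoming: dict mapping neighbor_id -> gradient vector."""
        contributions = {}
        for n, g in incoming.items():
            if not self.trusted[n] or self.consistency_score(n, g) > self.threshold:
                self.trusted[n] = False
                # Mitigate: cancel the previously aggregated (now suspect)
                # contribution instead of discarding all progress.
                contributions[n] = -self.history[n]
                self.history[n] = np.zeros_like(self.history[n])
            else:
                self.history[n] += g
                contributions[n] = g
        # Aggregate only purified contributions.
        return sum(contributions.values()) / max(len(contributions), 1)
```

In this sketch, a neighbor whose update reverses its own historical direction is flagged, and its earlier aggregated contribution is subtracted out, while benign neighbors' gradients continue to accumulate and be averaged.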
Problem

Research questions and friction points this paper is trying to address.

Decentralized Federated Learning
Data Tampering Defense
Information Utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPD Method
Decentralized Federated Learning
Attack Resilience
Bin Li
Center for Data Science, Zhejiang University, Hangzhou, China
Xiaoye Miao
Zhejiang University
Database
Yongheng Shang
Hainan Institute of Zhejiang University, Hainan, China; Advanced Technology Institute, Zhejiang University, Hangzhou, China
Xinkui Zhao
College of Software, Zhejiang University, Ningbo, China
Shuiguang Deng
College of Computer Science, Zhejiang University, Hangzhou, China
Jianwei Yin
Professor of Computer Science and Technology, Zhejiang University
Service Computing · Computer Architecture · Distributed Computing · AI