Buffer is All You Need: Defending Federated Learning against Backdoor Attacks under Non-iids via Buffering

📅 2025-03-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the failure of existing backdoor defense methods in non-iid federated learning (FL) settings, this paper proposes FLBuff—a novel defense framework. It is the first to model non-iid data as isotropic expansion in the representation space and backdoor triggers as unidirectional shifts; leveraging this insight, FLBuff introduces a learnable supervised contrastive learning buffer layer at the penultimate network layer. This buffer enables geometric separation between malicious and benign client updates prior to federated aggregation, effectively overcoming the sharp decline in discriminative capability exhibited by state-of-the-art methods under non-iid conditions. Integrated into the update filtering pipeline before aggregation, the buffer layer enhances robustness without requiring access to clean validation data or server-side fine-tuning. Extensive evaluations on non-iid benchmarks—including CIFAR-10, CIFAR-100, and Tiny-ImageNet—demonstrate that FLBuff achieves 12.7%–23.4% higher backdoor removal rates than SOTA defenses, while incurring less than 1.2% degradation in global model accuracy.

📝 Abstract
Federated Learning (FL) is a popular paradigm enabling clients to jointly train a global model without sharing raw data. However, FL is known to be vulnerable to backdoor attacks due to its distributed nature: as participants, attackers can upload model updates that effectively compromise FL. Worse, existing defenses are mostly designed under independent-and-identically-distributed (iid) settings and hence neglect the fundamentally non-iid character of FL. Here we propose FLBuff to tackle backdoor attacks even under non-iid settings. The main challenge for such defenses is that non-iid data brings benign and malicious updates closer together, making them harder to separate. FLBuff is inspired by our insight that non-iid data can be modeled as omni-directional expansion in representation space, whereas backdoor attacks are uni-directional. This leads to the key design of FLBuff: a supervised-contrastive-learning model that extracts penultimate-layer representations to create a large in-between buffer layer. Comprehensive evaluations demonstrate that FLBuff consistently outperforms state-of-the-art defenses.
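The buffer layer described above is trained with supervised contrastive learning over penultimate-layer representations, pulling same-label representations together and pushing different-label ones apart. The following is a minimal NumPy sketch of the standard supervised contrastive loss (Khosla et al., 2020), not the paper's exact implementation; the function name and the benign/malicious labeling scheme are our assumptions for illustration:

```python
import numpy as np

def supcon_loss(reps, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalized representations.

    reps:   (N, d) penultimate-layer representations (hypothetical input)
    labels: (N,) class tags, e.g. 0 = benign update, 1 = malicious update
    """
    # Normalize so the dot product is cosine similarity
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = reps @ reps.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs from the softmax

    # Log-softmax of each row over all other samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    loss, count = 0.0, 0
    for i in range(len(labels)):
        # Positives: same label, excluding the anchor itself
        pos = (labels == labels[i]) & (np.arange(len(labels)) != i)
        if pos.any():
            loss += -log_prob[i, pos].mean()
            count += 1
    return loss / max(count, 1)
```

Minimizing this loss tightens each label's cluster in representation space, which is what lets a margin ("buffer") open up between benign and malicious updates before aggregation.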
Problem

Research questions and friction points this paper is trying to address.

Defending federated learning against backdoor attacks
Addressing non-IID data distribution challenges
Separating malicious and benign model updates effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Buffering layer for non-iid FL defense
Supervised contrastive learning for representation extraction
Separating omni-directional and uni-directional model updates
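The last bullet rests on the paper's geometric insight: non-iid data spreads updates omni-directionally, while a backdoor pushes them along one dominant direction. A simple way to make that distinction concrete is to measure how much of the total variance of a set of update vectors is captured by their top principal direction. This is an illustrative heuristic of ours, not the detector FLBuff actually uses:

```python
import numpy as np

def directionality_score(deltas):
    """Fraction of total variance along the top principal direction.

    deltas: (N, d) array of client update (or representation-shift) vectors.
    Omni-directional scatter (non-iid-like) yields a low score; a
    uni-directional shift (backdoor-like) yields a score near 1.
    """
    centered = deltas - deltas.mean(axis=0)
    # Squared singular values are the variances along principal directions
    s = np.linalg.svd(centered, compute_uv=False)
    return (s[0] ** 2) / (s ** 2).sum()
```

Under this score, isotropic Gaussian updates spread their variance across all directions, whereas updates sharing a common trigger-induced shift concentrate it in one.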
Xingyu Lyu
Miner School of Computer and Information Sciences, University of Massachusetts Lowell, USA
Ning Wang
Department of Computer Science and Engineering, University of South Florida, USA
Yang Xiao
Department of Computer Science, University of Kentucky
Shixiong Li
Miner School of Computer and Information Sciences, University of Massachusetts Lowell, USA
Tao Li
Department of Computer and Information Technology, Purdue University, USA
Danjue Chen
North Carolina State University
Connected-Automated Vehicles · Human-Automation Interaction · Smart Cities
Yimin Chen
City University of Hong Kong
Medical Imaging · Computer Vision