FedCC: Robust Federated Learning against Model Poisoning Attacks

📅 2022-12-05
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
📄 PDF
🤖 AI Summary
Federated learning is highly vulnerable to model poisoning attacks under non-IID data distributions, yet existing defenses typically address data heterogeneity and security threats in isolation. To bridge this gap, we propose FedCC—the first robust aggregation framework that jointly models non-IIDness and poisoning attacks. FedCC leverages Centered Kernel Alignment (CKA) to measure representation similarity at the penultimate layer across clients, enabling precise clustering and identification of malicious participants for effective filtering—thereby suppressing malicious updates while preserving beneficial knowledge transfer. Extensive experiments demonstrate that FedCC reduces attack confidence to zero under both untargeted and backdoor attacks, and cuts average global model performance degradation by 65.5% compared to baseline methods. It significantly outperforms state-of-the-art anomaly detection and first-order statistical defense approaches.
📝 Abstract
Federated learning is a distributed framework designed to address privacy concerns. However, it introduces new attack surfaces, which are especially exploitable when data is non-Independently and Identically Distributed (non-IID). Existing approaches fail to effectively mitigate malicious influence in this setting because they typically tackle non-IID data and poisoning attacks separately. To address both challenges simultaneously, we present FedCC, a simple yet effective novel defense algorithm against model poisoning attacks. It leverages the Centered Kernel Alignment similarity of penultimate-layer representations for clustering, allowing the identification and filtration of malicious clients even in non-IID data settings. Penultimate-layer representations are meaningful because later layers are more sensitive to local data distributions, which allows better detection of malicious clients. This layer-wise use of Centered Kernel Alignment similarity mitigates attacks while preserving the useful knowledge clients have learned. Our extensive experiments demonstrate the effectiveness of FedCC in mitigating both untargeted model poisoning and targeted backdoor attacks. Compared to existing outlier detection-based and first-order statistics-based methods, FedCC consistently reduces attack confidence to zero. Specifically, it reduces the average degradation of global performance by 65.5%. We believe that this new perspective on aggregation makes it a valuable contribution to the field of FL model security and privacy. The code will be made available upon acceptance.
Problem

Research questions and friction points this paper is trying to address.

Mitigates model poisoning in federated learning
Handles non-IID data and attacks simultaneously
Detects malicious clients via layer similarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Centered Kernel Alignment
Clusters penultimate layer representations
Mitigates model poisoning effectively
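The Innovation bullets center on comparing clients' penultimate-layer representations with Centered Kernel Alignment. Below is a minimal sketch of linear CKA between two representation matrices (the standard formulation due to Kornblith et al., 2019); the paper's layer-wise variant, clustering step, and any kernel choice beyond the linear one are not reproduced here, and `client_reprs` is a hypothetical name for illustration:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between representation matrices X (n x d1) and Y (n x d2),
    where the n rows of each matrix correspond to the same n inputs."""
    # Center each feature dimension (CKA is defined on centered features).
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro")
                    * np.linalg.norm(Y.T @ Y, "fro"))

# Toy use: pairwise CKA over clients' penultimate-layer outputs on a
# shared probe batch; benign clients tend to score high with each other.
rng = np.random.default_rng(0)
client_reprs = [rng.standard_normal((64, 32)) for _ in range(3)]  # hypothetical
sim = np.array([[linear_cka(a, b) for b in client_reprs] for a in client_reprs])
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of the representations, which is why it can compare layers across clients whose weights differ but whose learned features align.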
Hyejun Jeong
University of Massachusetts, Amherst
Trustworthy AI · AI Security · Data Privacy
H. Son
UC Davis
Seohu Lee
Johns Hopkins University
Jayun Hyun
Hippo T&C Inc.
T. Chung
Hippo T&C Inc.