Stealth by Conformity: Evading Robust Aggregation through Adaptive Poisoning

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional robust aggregation methods in federated learning assume malicious model updates must be out-of-distribution outliers, enabling detection via outlier filtering. This work challenges that foundational assumption, demonstrating that adversarial updates can be crafted to reside *within* the benign parameter distribution—thereby evading existing defenses. Method: We propose an adaptive stealthy poisoning attack that exploits side-channel feedback from the aggregation process (e.g., whether an update is accepted) to iteratively optimize a local loss function—simultaneously minimizing statistical anomaly and maximizing poisoning efficacy. Unlike prior attacks, it does not rely on outlier behavior but achieves high stealth through online adaptation. Contribution/Results: Evaluated across two benchmark datasets against nine state-of-the-art robust aggregators, our attack increases average success rate by 47.07%. It is the first systematic demonstration of the intrinsic vulnerability of distribution-agnostic defenses, providing both a critical counterexample and theoretical impetus for developing distribution-aware robust aggregation mechanisms.

📝 Abstract
Federated Learning (FL) is a distributed learning paradigm designed to address privacy concerns. However, FL is vulnerable to poisoning attacks, where Byzantine clients compromise the integrity of the global model by submitting malicious updates. Robust aggregation methods have been widely adopted to mitigate such threats, relying on the core assumption that malicious updates are inherently out-of-distribution and can therefore be identified and excluded before aggregating client updates. In this paper, we challenge this underlying assumption by showing that a model can be poisoned while keeping malicious updates within the main distribution. We propose Chameleon Poisoning (CHAMP), an adaptive and evasive poisoning strategy that exploits side-channel feedback from the aggregation process to guide the attack. Specifically, the adversary continuously infers whether its malicious contribution has been incorporated into the global model and adapts accordingly. This enables a dynamic adjustment of the local loss function, balancing a malicious component with a camouflaging component, thereby increasing the effectiveness of the poisoning while evading robust aggregation defenses. CHAMP enables more effective and evasive poisoning, highlighting a fundamental limitation of existing robust aggregation defenses and underscoring the need for new strategies to secure federated learning against sophisticated adversaries. Our approach is evaluated on two datasets, reaching an average increase of 47.07% in attack success rate against nine robust aggregation defenses.
Problem

Research questions and friction points this paper is trying to address.

Evading robust aggregation defenses in federated learning
Adaptive poisoning strategy using side-channel feedback
Maintaining malicious updates within main distribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive poisoning strategy exploiting side-channel feedback
Dynamic adjustment balancing malicious and camouflaging components
Evades robust aggregation by keeping updates in-distribution
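The core mechanism described above, a local loss that blends a malicious objective with a camouflaging one, with the blend adjusted by side-channel feedback on whether the previous update was aggregated, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the coefficient `alpha`, and the fixed adaptation `step` are assumptions made here for clarity.

```python
# Hypothetical sketch of CHAMP-style adaptive loss balancing.
# `alpha` weights the malicious objective against the camouflaging
# (benign-looking) objective; it is adjusted each round based on the
# side-channel signal of whether the last update was accepted.

def champ_local_loss(malicious_loss: float, camouflage_loss: float,
                     alpha: float) -> float:
    """Blend the malicious component with the camouflaging component."""
    return alpha * malicious_loss + (1.0 - alpha) * camouflage_loss

def adapt_alpha(alpha: float, accepted: bool, step: float = 0.05) -> float:
    """Side-channel feedback: if the previous update was incorporated
    into the global model, push the attack harder; if it was filtered
    out, shift weight toward camouflage to stay in-distribution."""
    if accepted:
        return min(1.0, alpha + step)
    return max(0.0, alpha - step)
```

In a training loop, the adversary would recompute `alpha` after each round's acceptance inference and use `champ_local_loss` as its local training objective, so stealth and poisoning strength are traded off online rather than fixed in advance.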
Ryan McGaughey
Centre for Secure Information Technologies (CSIT), Queen’s University Belfast, UK
Jesus Martinez del Rincon
Centre for Secure Information Technologies (CSIT), Queen’s University Belfast, UK
Ihsen Alouani
CSIT, Queen's University Belfast, UK
ML/AI Security & Privacy · Systems Security