FAROS: Robust Federated Learning with Adaptive Scaling against Backdoor Attacks

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of federated learning to backdoor attacks, in which malicious clients inject triggers to compromise the global model. Existing defenses often rely on fixed parameters, leaving them exposed to single points of failure. To overcome this limitation, the authors propose FAROS, a novel defense framework that integrates Adaptive Differential Scaling (ADS) with Robust Core-set Computing (RCC). FAROS dynamically adjusts defense sensitivity in each aggregation round and constructs a robust aggregation center from high-confidence clients. This design mitigates single-point-of-failure risks and strengthens resilience against both stealthy and effective backdoor attacks. Extensive experiments demonstrate that FAROS consistently reduces attack success rates across diverse datasets, models, and attack scenarios while maintaining, and sometimes improving, main-task accuracy, outperforming current state-of-the-art defense methods.

📝 Abstract
Federated Learning (FL) enables multiple clients to collaboratively train a shared model without exposing their local data. However, backdoor attacks pose a significant threat to FL: they implant a stealthy trigger into the global model, causing it to misbehave on inputs that contain the trigger while functioning normally on benign data. Although pre-aggregation detection is a primary defense direction, existing state-of-the-art defenses often rely on fixed defense parameters. This reliance exposes them to single-point-of-failure risks, rendering them less effective against sophisticated attackers. To address these limitations, we propose FAROS, an enhanced FL framework that incorporates Adaptive Differential Scaling (ADS) and Robust Core-set Computing (RCC). The ADS mechanism dynamically adjusts the defense's sensitivity based on the dispersion of the gradients uploaded by clients in each round, allowing it to counter attackers who strategically shift between stealthiness and effectiveness. Furthermore, RCC mitigates the risk of single-point failure by computing the centroid of a core set comprising the clients with the highest confidence. We conducted extensive experiments across various datasets, models, and attack scenarios. The results demonstrate that our method outperforms current defenses in both attack success rate and main-task accuracy.
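The abstract does not include pseudocode, but the two ideas it names can be illustrated in miniature: scale an acceptance threshold by each round's gradient dispersion (the spirit of ADS), and anchor filtering on the centroid of the updates closest to the coordinate-wise median rather than on any single statistic (the spirit of RCC). The sketch below is an assumption-laden toy, not the authors' actual algorithm: the function names, the use of distance-to-median as the "confidence" score, and the specific dispersion and threshold formulas are all illustrative choices.

```python
import numpy as np

def robust_coreset_center(updates, coreset_size):
    """Centroid of the core set: the `coreset_size` client updates
    closest to the coordinate-wise median (illustrative confidence proxy).

    `updates` is an (n_clients, dim) array of flattened model updates.
    """
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    core_idx = np.argsort(dists)[:coreset_size]
    return updates[core_idx].mean(axis=0)

def adaptive_threshold(updates, base_tau=1.0):
    """Loosen or tighten the filter with this round's gradient dispersion
    (a stand-in for ADS; the real scaling rule is defined in the paper)."""
    center = updates.mean(axis=0)
    dispersion = np.linalg.norm(updates - center, axis=1).std()
    return base_tau * (1.0 + dispersion)

def aggregate(updates, coreset_size=None):
    """Filter updates against the core-set centroid, then average."""
    n = len(updates)
    if coreset_size is None:
        coreset_size = max(1, n // 2)
    center = robust_coreset_center(updates, coreset_size)
    tau = adaptive_threshold(updates)
    dists = np.linalg.norm(updates - center, axis=1)
    accepted = updates[dists <= tau]
    # Fall back to the robust center if everything was filtered out.
    return accepted.mean(axis=0) if len(accepted) else center
```

With nine benign clients uploading similar updates and one attacker uploading a far-off update, the attacker's distance to the core-set centroid exceeds the dispersion-scaled threshold and the aggregate stays near the benign mean; a single fixed threshold would have to be tuned in advance to achieve the same effect.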
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Backdoor Attacks
Robust Defense
Single-point-of-failure
Adaptive Scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Differential Scaling
Robust Core-set Computing
Federated Learning
Backdoor Defense
Dynamic Sensitivity