🤖 AI Summary
This work investigates how training hyperparameters, specifically learning rate, batch size, and local epochs, in horizontal federated learning (HFL) affect the robustness of backdoor attacks and defenses. It reveals that benign clients' hyperparameter choices alone can significantly suppress backdoor attacks without requiring additional defense mechanisms. Through theoretical analysis and comprehensive empirical evaluation, the study assesses the efficacy of mainstream backdoor attack-defense combinations under diverse hyperparameter configurations, challenging the prevailing paradigm in federated security research that largely overlooks hyperparameter design. Key findings show that judicious hyperparameter tuning reduces the 50%-lifespan of the strong A3FL attack by 98.6%, while degrading clean-task accuracy by only 2.9 percentage points. This is the first systematic study to demonstrate that hyperparameters serve as a lightweight, non-intrusive, and practically effective security-enhancement mechanism in HFL.
📝 Abstract
Horizontal Federated Learning (HFL) is particularly vulnerable to backdoor attacks, as adversaries can easily manipulate both the training data and processes to execute sophisticated attacks. In this work, we study the impact of training hyperparameters on the effectiveness of backdoor attacks and defenses in HFL. More specifically, we show both analytically and by means of measurements that the choice of hyperparameters by benign clients not only influences model accuracy but also significantly impacts backdoor attack success. This stands in sharp contrast with the multitude of contributions in the area of HFL security, which often rely on custom ad-hoc hyperparameter choices for benign clients, leading to more pronounced backdoor attack strength and diminished impact of defenses. Our results indicate that properly tuning benign clients' hyperparameters (such as learning rate, batch size, and number of local epochs) can significantly curb the effectiveness of backdoor attacks, regardless of the malicious clients' settings. We support this claim with an extensive robustness evaluation of state-of-the-art attack-defense combinations, showing that carefully chosen hyperparameters yield across-the-board improvements in robustness without sacrificing main task accuracy. For example, we show that the 50%-lifespan of the strong A3FL attack can be reduced by 98.6%, all without using any defense and while incurring only a 2.9 percentage point drop in clean task accuracy.