🤖 AI Summary
To address inefficient client selection and insufficient privacy guarantees in federated learning for network intrusion detection, this paper proposes an adaptive client selection framework that integrates differential privacy and fault tolerance. Methodologically, it introduces: (1) a performance-driven, constraint-aware mechanism that dynamically sizes the selected client population; (2) explicit management of the privacy–efficiency–fault-tolerance trade-off, jointly tuning privacy budgets, model accuracy, and system robustness; and (3) Mann–Whitney U tests to statistically validate the observed improvements. Experiments on UNSW-NB15 and ROAD show that the method achieves up to 7% higher accuracy and 25% faster training than FedL2P. Accuracy improves as the privacy budget grows (less noise is injected), and the fault-tolerance mechanism markedly strengthens robustness to client failures at the cost of a slight accuracy drop; the reported gains are statistically significant (p < 0.05).
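The "calibrated noise" and privacy-budget trade-off described above can be illustrated with the classical Gaussian mechanism of differential privacy. The sketch below is an assumption for illustration only (the paper does not specify its exact mechanism): each client's update is clipped to a fixed L2 norm and perturbed with Gaussian noise whose scale shrinks as the privacy budget ε grows, which mirrors the reported "higher budget, less noise, better accuracy" behavior. The function names and default parameters are hypothetical.

```python
import math
import random

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float) -> float:
    """Noise scale of the classical (epsilon, delta)-DP Gaussian mechanism:
    sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

def privatize_update(update, epsilon, delta=1e-5, clip_norm=1.0):
    """Clip a client's model update to clip_norm, then add calibrated
    Gaussian noise before it is sent to the aggregation server."""
    norm = math.sqrt(sum(w * w for w in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [w * scale for w in update]
    sigma = gaussian_sigma(epsilon, delta, clip_norm)
    return [w + random.gauss(0.0, sigma) for w in clipped]

# A larger privacy budget epsilon yields a smaller noise scale:
# this is the privacy/accuracy trade-off highlighted in the paper.
assert gaussian_sigma(8.0, 1e-5, 1.0) < gaussian_sigma(1.0, 1e-5, 1.0)
```

Note the noise scale depends only on the clipping norm (the sensitivity), not on the raw update magnitude; clipping is what makes the privacy guarantee hold for every client.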
📝 Abstract
Federated Learning (FL) has become a widely used approach for training machine learning models on decentralized data, addressing the significant privacy concerns associated with traditional centralized methods. However, the efficiency of FL relies on effective client selection and robust privacy preservation mechanisms. Ineffective client selection can result in suboptimal model performance, while inadequate privacy measures risk exposing sensitive data. This paper introduces a client selection framework for FL that incorporates differential privacy and fault tolerance. The proposed adaptive approach dynamically adjusts the number of selected clients based on model performance and system constraints, ensuring privacy through the addition of calibrated noise. The method is evaluated on a network anomaly detection use case using the UNSW-NB15 and ROAD datasets. Results demonstrate up to a 7% improvement in accuracy and a 25% reduction in training time compared to the FedL2P approach. Additionally, the study highlights trade-offs between privacy budgets and model performance, with higher privacy budgets leading to reduced noise and improved accuracy. While the fault tolerance mechanism introduces a slight performance decrease, it enhances robustness against client failures. Statistical validation using the Mann–Whitney U test confirms the significance of these improvements (p < 0.05).
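The Mann–Whitney U test used for statistical validation can be sketched in pure Python. This is a minimal two-sided version with a normal approximation and average ranks for ties (no tie correction); in practice `scipy.stats.mannwhitneyu` would be used instead. It would compare, for example, per-run accuracy samples of the proposed method against FedL2P.

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.
    Returns (U statistic for sample x, approximate p-value)."""
    n1, n2 = len(x), len(y)
    pooled = sorted(list(x) + list(y))
    # Assign each distinct value its average 1-based rank (handles ties).
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in x)            # rank sum of sample x
    u1 = r1 - n1 * (n1 + 1) / 2.0            # U statistic for x
    mu = n1 * n2 / 2.0                       # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    # Two-sided p-value from the standard normal CDF (via erf).
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u1, p

# Fully separated samples give the extreme U = 0 and a small p-value.
u, p = mann_whitney_u([1, 2, 3], [4, 5, 6])
```

The test is non-parametric (rank-based), which is why it is a natural choice here: per-run accuracies from FL experiments are few and not guaranteed to be normally distributed.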