🤖 AI Summary
Traditional federated learning faces challenges in model quality assessment, adaptability to non-IID data, and defense against malicious clients, resulting in slow convergence and poor robustness. To address these issues, we propose FedTest, a novel framework introducing a distributed cross-testing mechanism: each client trains its local model while simultaneously evaluating models uploaded by other clients using its own local data—enabling decentralized, quantitative model quality assessment and real-time identification of malicious models. This mechanism natively embeds robust evaluation into the training pipeline, supporting trust-aware weighted aggregation and dynamic adversarial detection. Extensive experiments demonstrate that FedTest significantly accelerates convergence, maintains high accuracy under severe non-IID conditions, and exhibits strong robustness against diverse attacks—including data poisoning and backdoor injection—without requiring trusted servers or prior knowledge of attack patterns.
📝 Abstract
Federated Learning (FL) has emerged as a significant paradigm for training machine learning models, owing to its data-privacy-preserving property and its efficient exploitation of distributed computational resources, achieved by conducting training in parallel across distributed users. However, traditional FL strategies struggle to evaluate the quality of received models, to handle unbalanced models, and to mitigate the impact of detrimental models. To resolve these problems, we introduce a novel federated learning framework, which we call federated testing for federated learning (FedTest). In FedTest, the local data of a specific user is used both to train that user's model and to test the models of the other users. This approach enables users to evaluate each other's models and assign each an accurate score, which can then be used to aggregate the models efficiently and to identify malicious ones. Our numerical results reveal that the proposed method not only accelerates convergence but also diminishes the potential influence of malicious users, significantly enhancing the overall efficiency and robustness of FL systems.
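The cross-testing idea described above can be illustrated with a minimal sketch. Note that this is not the paper's implementation: the toy linear models, the negative-MSE score, and the softmax weighting are all illustrative assumptions standing in for the (unspecified) scoring and aggregation rules in FedTest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): each client holds local data
# drawn from y = 2*x + noise and trains a one-parameter linear model w.
n_clients = 4
clients_x = [rng.normal(size=20) for _ in range(n_clients)]
clients_y = [2.0 * x + 0.1 * rng.normal(size=20) for x in clients_x]

# Honest clients fit w close to the true slope 2; one malicious client
# uploads a poisoned model instead.
local_models = [float(x @ y / (x @ x)) for x, y in zip(clients_x, clients_y)]
local_models[3] = -5.0  # poisoned upload

def score(w, x, y):
    """Cross-testing step: a client evaluates an uploaded model on its own
    local data (here: negative mean squared error, higher is better)."""
    return -float(np.mean((w * x - y) ** 2))

# Every client tests every *other* client's model; average the scores.
scores = np.array([
    np.mean([score(w, clients_x[i], clients_y[i])
             for i in range(n_clients) if i != j])
    for j, w in enumerate(local_models)
])

# Score-aware aggregation (softmax over scores, an assumed choice):
# the low-scoring poisoned model receives negligible weight.
weights = np.exp(scores - scores.max())
weights /= weights.sum()
global_model = float(weights @ np.array(local_models))
```

Because the poisoned model scores poorly on every honest client's data, its aggregation weight collapses and the global model stays near the honest consensus, which is the decentralized quality assessment the abstract describes.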