🤖 AI Summary
Addressing the challenge of simultaneously achieving robust malicious-client detection and fair contribution assessment under secure aggregation in federated learning, this paper proposes an end-to-end monitoring framework that integrates FedGT-based misbehaviour detection with QI-based contribution evaluation. To the best of our knowledge, this is the first work to embed both components in a unified architecture, combining secure aggregation protocols, randomized client selection, and reputation-guided behavioral verification to preserve strict privacy while enabling robust anomaly identification and equitable contribution quantification at the same time. Experimental results show that the combined method significantly improves malicious-behaviour detection accuracy over standalone QI or FedGT and increases Kendall’s τ correlation for contribution ranking by 12.7%, overcoming the functional incompleteness of either method in isolation. This work establishes a new paradigm for trustworthy federated learning in privacy-sensitive settings.
📝 Abstract
Federated learning with secure aggregation enables private and collaborative learning from decentralised data without leaking sensitive client information. However, secure aggregation also complicates the detection of malicious client behaviour and the evaluation of individual client contributions to the learning process. To address these challenges, QI (Pejo et al.) and FedGT (Xhemrishi et al.) were proposed for contribution evaluation (CE) and misbehaviour detection (MD), respectively. QI, however, lacks adequate MD accuracy due to its reliance on the random selection of clients in each training round, while FedGT lacks the CE ability. In this work, we combine the strengths of QI and FedGT to achieve both robust MD and accurate CE. Our experiments demonstrate superior performance compared to using either method independently.