SecureFed: A Two-Phase Framework for Detecting Malicious Clients in Federated Learning

πŸ“… 2025-06-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Federated learning is vulnerable to malicious client poisoning attacks, which degrade model performance and compromise its privacy-preserving guarantees. To address this, we propose a two-phase adaptive defense framework that requires neither a trusted server nor prior knowledge of clients. In the first phase, anomaly detection identifies compromised model updates via PCA-based dimensionality reduction and synthetic data validation. In the second phase, we introduce a novel "learning zone" dynamic weight routing mechanism that jointly leverages gradient magnitude and contribution scoring to suppress low-value (potentially malicious) gradient regions. Evaluated across multiple benchmark datasets, our method achieves over a 92% poisoning attack mitigation success rate while incurring less than 0.8% global model accuracy degradation. The framework significantly enhances both robustness and practical deployability without introducing substantial computational overhead or trust assumptions.

πŸ“ Abstract
Federated Learning (FL) protects data privacy while providing a decentralized method for training models. However, because of its distributed schema, it is susceptible to adversarial clients that could alter results or sabotage model performance. This study presents SecureFed, a two-phase FL framework for identifying and reducing the impact of such attackers. Phase 1 involves collecting model updates from participating clients and applying a dimensionality reduction approach to identify outlier patterns frequently associated with malicious behavior. Temporary models constructed from the client updates are evaluated on synthetic datasets to compute validation losses and support anomaly scoring. The idea of learning zones is presented in Phase 2, where weights are dynamically routed according to their contribution scores and gradient magnitudes. High-value gradient zones are given greater weight in aggregation and contribute more significantly to the global model, while lower-value gradient zones, which may indicate possible adversarial activity, are gradually removed from training. This training cycle continues until the model converges and a strong defense against poisoning attacks is achieved. Based on the experimental findings, SecureFed considerably improves model resilience without compromising model performance.
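The Phase 1 idea described above (project flattened client updates into a low-dimensional space and flag outliers) can be sketched in plain NumPy. This is a hypothetical illustration, not the paper's implementation: the function name `anomaly_scores`, the distance-from-centroid scoring, and the component count are assumptions, and the paper's synthetic-data validation-loss component is omitted here.

```python
import numpy as np

def anomaly_scores(client_updates, n_components=2):
    """Score each client's update by its distance from the centroid in a
    PCA-reduced space. Larger scores suggest outlier (possibly malicious)
    updates. Hypothetical sketch; the paper additionally validates
    temporary models on synthetic data, which is not shown here."""
    # Flatten each client's update into one row vector
    X = np.stack([np.asarray(u).ravel() for u in client_updates])
    Xc = X - X.mean(axis=0)  # center before PCA
    # PCA via SVD: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T  # project into the reduced space
    # Distance from the (near-zero) centroid in the reduced space
    return np.linalg.norm(Z - Z.mean(axis=0), axis=1)
```

A poisoned update that differs sharply from the benign consensus dominates the leading principal components, so it stands out even when the raw parameter vectors are high-dimensional.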
Problem

Research questions and friction points this paper is trying to address.

Detecting malicious clients in federated learning systems
Reducing impact of adversarial attacks on model performance
Enhancing model resilience without compromising accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-phase FL framework for detecting malicious clients
Dimensionality reduction to identify outlier patterns
Dynamic weight routing based on gradient zones
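The dynamic weight routing contribution can be sketched as a score-weighted aggregation in which the lowest-scoring zones are dropped. This is a minimal sketch under assumptions: the function name `route_and_aggregate`, the quantile cutoff, and the use of a single precomputed per-client score (standing in for the paper's combined contribution score and gradient magnitude) are all hypothetical.

```python
import numpy as np

def route_and_aggregate(updates, scores, keep_frac=0.8):
    """Hypothetical sketch of 'learning zone' routing: clients whose score
    falls in the bottom (1 - keep_frac) quantile are excluded, and the
    remaining updates are averaged with weights proportional to their
    scores, so high-value zones contribute more to the global model."""
    scores = np.asarray(scores, dtype=float)
    cutoff = np.quantile(scores, 1.0 - keep_frac)
    mask = scores >= cutoff          # drop low-value (suspect) zones
    w = scores * mask
    w = w / w.sum()                  # normalize surviving weights
    return sum(wi * np.asarray(u) for wi, u in zip(w, updates))
```

In the paper's framing this routing repeats every round until convergence, so suspect zones are removed gradually rather than in a single hard cut.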
πŸ”Ž Similar Papers
No similar papers found.
Likhitha Annapurna Kavuri
Dept. of Computer Information Systems, Texas A&M University - Central Texas, Texas, USA
Akshay Mhatre
Dept. of Computer Information Systems, Texas A&M University - Central Texas, Texas, USA
Akarsh K Nair
Researcher, Loughborough University
Edge Computing · Federated Learning · Privacy Preservation · AI on Edge
Deepti Gupta
Texas A&M University-Central Texas
IoT Security · Machine Learning · Cloud Computing · Access Control · Anomaly Detection