🤖 AI Summary
Achieving Byzantine robustness in federated learning remains challenging when a high fraction of malicious clients and non-i.i.d. data occur simultaneously.
Method: This paper proposes an angle-driven robust aggregation algorithm. Its core components are: (1) a ReLU-truncated cosine similarity metric, computed against a clean server-side dataset, to identify malicious model updates; (2) dynamic reference client selection combined with aggregation weights inversely proportional to each update's angular deviation, jointly mitigating non-i.i.d. bias and malicious influence; and (3) normalization of update norms to suppress malicious scaling and improve generalization across diverse Byzantine attacks.
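The components above can be sketched as a single aggregation step. This is a minimal illustration under assumptions: the function name, the use of a server-side update as the angular reference, and the exact weighting rule (weight proportional to ReLU-clipped cosine, which decreases with angular deviation) are our simplifications, not the paper's precise algorithm.

```python
import numpy as np

def angle_robust_aggregate(updates, server_update):
    """Hedged sketch of angle-based robust aggregation (not the exact FLTG algorithm).

    updates: list of flattened client update vectors (np.ndarray)
    server_update: update computed on the server's clean dataset,
                   used here as the trusted angular reference
    """
    eps = 1e-12
    ref = server_update / (np.linalg.norm(server_update) + eps)

    # (1) ReLU-truncated cosine similarity: updates pointing away from
    # the clean reference (cos <= 0) receive zero weight.
    weights = np.array([
        max(float(u @ ref) / (np.linalg.norm(u) + eps), 0.0)
        for u in updates
    ])
    if weights.sum() == 0.0:
        return server_update  # fallback: no client aligned with the reference
    weights /= weights.sum()

    # (3) Normalize each surviving update to the reference norm so that
    # maliciously scaled updates cannot dominate the weighted sum.
    ref_norm = np.linalg.norm(server_update)
    return sum(
        w * (u / (np.linalg.norm(u) + eps)) * ref_norm
        for w, u in zip(weights, updates)
    )
```

In this sketch, a sign-flipped or heavily scaled malicious update is neutralized twice: its negative cosine zeroes its weight, and norm normalization caps whatever magnitude survives.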
Results: Extensive experiments demonstrate that the method significantly outperforms state-of-the-art baselines under five canonical Byzantine attacks, tolerates more than 50% malicious clients, and maintains high accuracy and strong robustness on challenging non-i.i.d. benchmarks, including CIFAR-10 and CIFAR-100.
📝 Abstract
Byzantine attacks during model aggregation in Federated Learning (FL) threaten training integrity by manipulating malicious clients' updates. Existing methods struggle with limited robustness under high malicious-client ratios and with sensitivity to non-i.i.d. data, leading to degraded accuracy. To address this, we propose FLTG, a novel aggregation algorithm integrating angle-based defense and dynamic reference selection. FLTG first filters clients via ReLU-clipped cosine similarity, leveraging a server-side clean dataset to exclude misaligned updates. It then dynamically selects a reference client based on the prior global model to mitigate non-i.i.d. bias, assigns aggregation weights inversely proportional to angular deviations, and normalizes update magnitudes to suppress malicious scaling. Evaluations across datasets of varying complexity under five classic attacks demonstrate FLTG's superiority over state-of-the-art methods in extreme-bias scenarios and its sustained robustness with a higher proportion (over 50%) of malicious clients.