Sparsification Under Siege: Defending Against Poisoning Attacks in Communication-Efficient Federated Learning

πŸ“… 2025-04-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
In sparse federated learning (FL), poisoning attacks exploit update sparsity to evade detection, rendering existing defenses ineffective. To address this, we propose FLARE, a robust, communication-efficient defense framework that imposes no additional communication overhead. FLARE is the first to jointly exploit (i) sparse index mask consistency checking and (ii) sign-level similarity analysis of model updates, yielding a lightweight yet effective anomaly detection mechanism. This is combined with a consensus-based client filtering strategy. Extensive evaluation across multiple benchmark datasets and under diverse strong adversarial poisoning attacks demonstrates that FLARE improves defense success rate by over 40% compared to state-of-the-art methods, while strictly preserving the same communication efficiency as baseline sparse FL. FLARE thus bridges a critical gap in the security of sparse federated learning.
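The two anomaly signals could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the exact statistics are not given in the summary, so Jaccard overlap between index masks and a sign-agreement fraction on shared coordinates are assumptions.

```python
import numpy as np

def mask_consistency(idx_a, idx_b):
    """Jaccard overlap between two clients' sparse index masks
    (assumed statistic; the paper's exact check may differ)."""
    a, b = set(idx_a), set(idx_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def sign_agreement(idx_a, val_a, idx_b, val_b):
    """Fraction of shared coordinates where the update signs of two
    clients agree (assumed sign-level similarity measure)."""
    signs_a = dict(zip(idx_a, np.sign(val_a)))
    signs_b = dict(zip(idx_b, np.sign(val_b)))
    shared = [i for i in idx_b if i in signs_a]
    if not shared:
        return 0.0
    return float(np.mean([signs_a[i] == signs_b[i] for i in shared]))
```

A poisoned client that targets coordinates outside the benign consensus mask, or flips the signs of its updates, would score low on one or both of these measures without requiring any extra communication, since the indices and values are already part of the sparse update.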

πŸ“ Abstract
Federated Learning (FL) enables collaborative model training across distributed clients while preserving data privacy, yet it faces significant challenges in communication efficiency and vulnerability to poisoning attacks. While sparsification techniques mitigate communication overhead by transmitting only critical model parameters, they inadvertently amplify security risks: adversarial clients can exploit sparse updates to evade detection and degrade model performance. Existing defense mechanisms, designed for standard FL communication scenarios, are ineffective in addressing these vulnerabilities within sparsified FL. To bridge this gap, we propose FLARE, a novel federated learning framework that integrates sparse index mask inspection and model update sign similarity analysis to detect and mitigate poisoning attacks in sparsified FL. Extensive experiments across multiple datasets and adversarial scenarios demonstrate that FLARE significantly outperforms existing defense strategies, effectively securing sparsified FL against poisoning attacks while maintaining communication efficiency.
Problem

Research questions and friction points this paper is trying to address.

Defending against poisoning attacks in federated learning
Addressing security risks in sparsified FL updates
Ensuring communication efficiency while mitigating adversarial exploits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates sparse index mask inspection
Uses model update sign similarity analysis
Detects and mitigates poisoning attacks effectively
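A consensus-based filtering step on top of the per-pair similarity scores could look like the sketch below. The aggregation rule (median similarity against a fixed threshold) is an assumption for illustration; the paper's actual consensus strategy may differ.

```python
import numpy as np

def consensus_filter(score_matrix, threshold=0.5):
    """Keep clients whose median pairwise similarity to the other
    clients is at least `threshold` (illustrative consensus rule).

    score_matrix: (n, n) symmetric matrix of pairwise similarity
    scores, e.g. combined mask-consistency / sign-agreement values.
    """
    n = score_matrix.shape[0]
    keep = []
    for i in range(n):
        others = np.delete(score_matrix[i], i)  # drop self-similarity
        if np.median(others) >= threshold:
            keep.append(i)
    return keep
```

The intuition is that benign clients form a mutually similar majority, so an attacker whose sparse masks or update signs deviate from that majority ends up with low similarity to most peers and is excluded from aggregation.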