ProDiGy: Proximity- and Dissimilarity-Based Byzantine-Robust Federated Learning

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the vulnerability of federated learning to Byzantine attacks under non-IID data, and the trade-off between robustness and accuracy in existing defenses, this paper proposes ProDiGy. The method introduces a dual-perspective gradient evaluation framework that jointly models gradient proximity and dissimilarity—capturing both the natural similarity among honest clients and the anomalous consensus induced by colluding malicious clients—to identify and suppress malicious updates. The mechanism is integrated directly into the aggregation step and requires neither trusted nodes nor additional assumptions about client behavior. Extensive experiments across diverse non-IID settings and standard Byzantine attack scenarios show that ProDiGy outperforms state-of-the-art defenses, achieving both high model accuracy and substantially improved robustness.

📝 Abstract
Federated Learning (FL) has emerged as a widely studied paradigm for distributed learning. Despite its many advantages, FL remains vulnerable to adversarial attacks, especially under data heterogeneity. We propose a new Byzantine-robust FL algorithm called ProDiGy. The key novelty lies in evaluating client gradients with a joint dual scoring system based on the gradients' proximity and dissimilarity. We demonstrate through extensive numerical experiments that ProDiGy outperforms existing defenses in various scenarios. In particular, when clients' data do not follow an IID distribution, ProDiGy maintains strong defense capabilities and model accuracy where other defense mechanisms fail. These findings highlight the effectiveness of a dual-perspective approach that promotes natural similarity among honest clients while detecting suspicious uniformity as a potential indicator of an attack.
Problem

Research questions and friction points this paper is trying to address.

Detecting adversarial attacks in federated learning
Addressing data heterogeneity vulnerabilities in distributed learning
Ensuring Byzantine robustness with proximity and dissimilarity metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual scoring system for gradients
Proximity and dissimilarity based evaluation
Robust defense in non-IID data
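The dual-scoring idea above can be illustrated with a minimal sketch. This is not the paper's exact scoring rule (which is not reproduced on this page); it is an assumed implementation of the general principle: reward gradients that are close to the honest population (proximity), while penalizing near-identical updates that suggest the "suspicious uniformity" of colluding attackers (dissimilarity). The function name `dual_score_aggregate` and the threshold `tau_high` are hypothetical.

```python
import numpy as np

def dual_score_aggregate(grads, tau_high=0.99):
    """Illustrative dual-scoring aggregation sketch (not ProDiGy's exact rule).

    grads: (n_clients, dim) array of flattened client gradients.
    Returns the weighted aggregate gradient and the per-client weights.
    """
    n = len(grads)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    unit = grads / np.maximum(norms, 1e-12)
    sim = unit @ unit.T                       # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)
    # Proximity: average similarity to all peers (honest clients cluster).
    proximity = sim.sum(axis=1) / (n - 1)
    # Suspicious uniformity: count of near-duplicate peers per client
    # (colluding attackers often submit nearly identical gradients).
    uniformity = (sim > tau_high).sum(axis=1)
    weights = np.clip(proximity, 0.0, None) / (1.0 + uniformity)
    if weights.sum() == 0:                    # fallback: plain averaging
        weights = np.ones(n)
    weights = weights / weights.sum()
    return weights @ grads, weights
```

In a toy run with eight noisy honest gradients and two identical sign-flipped malicious gradients, the malicious clients receive near-zero weight: their proximity to the honest majority is negative, and their mutual near-duplication further discounts them.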