Adaptive Decentralized Federated Learning for Robust Optimization

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
In decentralized federated learning (DFL), anomalous clients—induced by noisy or poisoned data—severely degrade model robustness and convergence. To address this, we propose a fully adaptive learning rate scheduling method that requires no prior knowledge of benign clients and imposes no assumptions on neighbor count. This is the first approach in DFL to achieve *unconditional adaptivity*: it dynamically assesses gradient credibility and scales learning rates per client without supervision. Our method integrates robust statistics with distributed optimization to detect and suppress anomalous gradients in real time. We provide theoretical guarantees proving convergence and establishing optimal estimation rates under standard assumptions. Extensive experiments demonstrate that our method consistently outperforms state-of-the-art baselines across diverse data poisoning and noise attack scenarios, achieving significant improvements in both test accuracy and robustness.

📝 Abstract
In decentralized federated learning (DFL), the presence of abnormal clients, often caused by noisy or poisoned data, can significantly disrupt the learning process and degrade the overall robustness of the model. Previous methods for this problem often require a sufficiently large number of normal neighboring clients or prior knowledge of which clients are reliable, which limits the practical applicability of DFL. To address these limitations, we develop a novel adaptive DFL (aDFL) approach for robust estimation. The key idea is to adaptively adjust the learning rates of clients: by assigning smaller rates to suspicious clients and larger rates to normal clients, aDFL mitigates the negative impact of abnormal clients on the global model in a fully adaptive way. Our theory places no stringent conditions on neighboring nodes and requires no prior knowledge. A rigorous convergence analysis guarantees the oracle property of aDFL. Extensive numerical experiments demonstrate the superior performance of the aDFL method.
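The core idea, scaling each client's learning rate by the credibility of its gradient, can be illustrated with a small sketch. This is not the paper's exact update rule; it is an assumed credibility score based on each client's deviation from the coordinate-wise median gradient, normalized by a robust (median-based) scale, so that anomalous clients receive near-zero rates:

```python
import numpy as np

def adaptive_rates(grads, base_lr=0.1, temperature=1.0):
    """Illustrative sketch (not the paper's exact rule): scale each
    client's learning rate by a gradient-credibility score. Clients
    whose gradients deviate strongly from the coordinate-wise median
    are treated as suspicious and get smaller rates."""
    grads = np.asarray(grads)              # shape (n_clients, dim)
    median = np.median(grads, axis=0)      # robust central gradient
    dev = np.linalg.norm(grads - median, axis=1)
    scale = np.median(dev) + 1e-12         # robust scale of deviations
    credibility = np.exp(-temperature * dev / scale)
    return base_lr * credibility           # per-client learning rates

# Toy example: 5 benign clients near the true gradient, 1 poisoned client.
rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.05, size=(5, 3))
poisoned = np.array([[10.0, -10.0, 10.0]])
rates = adaptive_rates(np.vstack([benign, poisoned]))
print(rates)  # the poisoned client (last entry) receives the smallest rate
```

Because the score uses medians rather than means, a single extreme client cannot shift the reference gradient much, which is what lets the rule suppress anomalies without knowing in advance which clients are benign.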
Problem

Research questions and friction points this paper is trying to address.

Mitigates abnormal client impact in decentralized federated learning
Adaptively adjusts client learning rates for robust optimization
Eliminates need for prior knowledge or stringent neighbor conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully adaptive per-client learning rate scheduling (aDFL)
Rigorous convergence analysis establishing the oracle property
No reliance on prior knowledge of reliable clients or neighbor-count conditions
Shuyuan Wu
School of Statistics and Data Science, Shanghai University of Finance and Economics
Large Dataset Analysis, Subsampling, Distributed Computing
Feifei Wang
School of Statistics, Renmin University of China, China
Yuan Gao
School of Statistics and Data Science, Shanghai University of International Business and Economics, China
Hansheng Wang
Guanghua School of Management, Peking University
Statistics in Business