🤖 AI Summary
In decentralized federated learning (DFL), anomalous clients, typically induced by noisy or poisoned data, severely degrade model robustness and convergence. To address this, we propose a fully adaptive learning-rate scheduling method that requires no prior knowledge of benign clients and imposes no assumptions on neighbor count. This is the first approach in DFL to achieve *unconditional adaptivity*: it dynamically assesses gradient credibility and scales learning rates per client without supervision. Our method integrates robust statistics with distributed optimization to detect and suppress anomalous gradients in real time. We provide theoretical guarantees proving convergence and establishing optimal estimation rates under standard assumptions. Extensive experiments demonstrate that our method consistently outperforms state-of-the-art baselines across diverse data-poisoning and noise-attack scenarios, achieving significant improvements in both test accuracy and robustness.
📝 Abstract
In decentralized federated learning (DFL), the presence of abnormal clients, often caused by noisy or poisoned data, can significantly disrupt the learning process and degrade the overall robustness of the model. Previous approaches to this problem often require a sufficiently large number of normal neighboring clients or prior knowledge of which clients are reliable, which limits the practical applicability of DFL. To address these limitations, we develop a novel adaptive DFL (aDFL) approach for robust estimation. The key idea is to adaptively adjust the learning rates of individual clients: by assigning smaller rates to suspicious clients and larger rates to normal ones, aDFL mitigates the negative impact of abnormal clients on the global model in a fully adaptive way. Our theory imposes no stringent conditions on neighboring nodes and requires no prior knowledge of reliable clients. A rigorous convergence analysis is provided to guarantee the oracle property of aDFL. Extensive numerical experiments demonstrate the superior performance of the aDFL method.
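To make the key idea concrete, here is a minimal illustrative sketch in NumPy of per-client adaptive learning-rate scaling. This is *not* the paper's actual rule: the credibility score (a Gaussian kernel on each client's gradient distance from the coordinate-wise median of all clients' gradients), the bandwidth `tau`, and the helper names `credibility_weights` and `adaptive_step` are all assumptions introduced here purely to show the shape of the mechanism — suspicious clients receive smaller effective learning rates, normal clients larger ones, with no prior knowledge of which clients are reliable.

```python
# Illustrative sketch only; the scoring rule below is an assumption,
# not the aDFL method from the paper.
import numpy as np

def credibility_weights(grads, tau=3.0):
    """grads: (n_clients, dim) array of local gradients.
    Returns per-client learning-rate multipliers in (0, 1]."""
    median = np.median(grads, axis=0)               # robust reference gradient
    dist = np.linalg.norm(grads - median, axis=1)   # each client's deviation
    scale = np.median(dist) + 1e-12                 # data-driven scale, no prior knowledge
    return np.exp(-(dist / (tau * scale)) ** 2)     # far-off (suspicious) clients -> small weight

def adaptive_step(params, grads, base_lr=0.1):
    """One adaptive update: each client's step is scaled by its credibility."""
    w = credibility_weights(grads)
    return params - base_lr * w[:, None] * grads
```

For example, if four clients hold gradients near `[1, 1, 1]` and one poisoned client reports `[100, -100, 50]`, the outlier's weight collapses toward zero while the normal clients keep weights near one, so the abnormal client barely moves the model.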