🤖 AI Summary
Federated learning under partial client participation is vulnerable to Byzantine attacks: existing robust aggregation methods, designed for full participation, fail catastrophically when malicious clients form a majority within the sampled subset.
Method: We propose Delayed Momentum Aggregation (DMA), a novel server-side mechanism that jointly leverages fresh momentum from active clients and delayed gradients from inactive clients to perform robust model updates. DMA is the first to incorporate momentum information into Byzantine-robust aggregation under sparse communication. Built on the momentum SGD framework, it integrates with robust aggregation rules to withstand diverse malicious gradient attacks under stochastic client sampling.
Contribution/Results: We theoretically establish that DMA recovers the convergence rate of full-participation baselines and matches the fundamental lower bound for the partial-participation setting. Extensive experiments demonstrate its stable convergence and superior performance across multiple Byzantine attack scenarios.
📝 Abstract
Federated Learning (FL) allows distributed model training across multiple clients while preserving data privacy, but it remains vulnerable to Byzantine clients that exhibit malicious behavior. While existing Byzantine-robust FL methods provide strong convergence guarantees (e.g., to a stationary point in expectation) under Byzantine attacks, they typically assume full client participation, which is unrealistic due to communication constraints and client availability. Under partial participation, existing methods fail as soon as the sampled subset contains a Byzantine majority, creating a fundamental challenge for sparse communication. First, we introduce delayed momentum aggregation, a novel principle whereby the server aggregates the most recently received gradients from non-participating clients alongside fresh momentum from active clients. Our optimizer D-Byz-SGDM (Delayed Byzantine-robust SGD with Momentum) implements this delayed momentum aggregation principle for Byzantine-robust FL with partial participation. Then, we establish convergence guarantees that recover previous full-participation results and match the fundamental lower bounds we prove for the partial-participation setting. Experiments on deep learning tasks validate our theoretical findings, showing stable and robust training under various Byzantine attacks.
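A minimal sketch of the delayed momentum aggregation principle described above (class and parameter names are hypothetical, and coordinate-wise median stands in for whatever robust aggregation rule the server actually uses): the server keeps the most recently received momentum for every client, only the sampled clients refresh theirs each round, and the stale entries of inactive clients still enter the robust aggregation.

```python
import numpy as np

def coordinate_wise_median(vectors):
    # One possible robust aggregation rule; the method is compatible
    # with various rules, median is only an illustration here.
    return np.median(np.stack(vectors), axis=0)

class DelayedMomentumServer:
    """Sketch of server-side delayed momentum aggregation (hypothetical API)."""

    def __init__(self, dim, num_clients, lr=0.1, beta=0.9):
        self.model = np.zeros(dim)
        # Last received momentum per client, initialized to zero.
        self.momenta = [np.zeros(dim) for _ in range(num_clients)]
        self.lr, self.beta = lr, beta

    def round(self, active_ids, gradients):
        # Active clients send fresh stochastic gradients; refresh their momentum.
        for cid, g in zip(active_ids, gradients):
            self.momenta[cid] = self.beta * self.momenta[cid] + (1 - self.beta) * g
        # Aggregate fresh momenta (active clients) together with
        # delayed momenta (inactive clients) under the robust rule.
        update = coordinate_wise_median(self.momenta)
        self.model -= self.lr * update
        return self.model
```

Because inactive clients contribute their last honest momentum rather than nothing, a Byzantine majority among the sampled clients alone is no longer enough to control the aggregate, which is the failure mode of full-participation rules under sampling.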