Delayed Momentum Aggregation: Communication-efficient Byzantine-robust Federated Learning with Partial Participation

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning under partial client participation is vulnerable to Byzantine attacks: existing robust aggregation methods, designed for full participation, fail catastrophically, especially when malicious clients constitute a majority within the sampled subset. Method: We propose Delayed Momentum Aggregation (DMA), a server-side mechanism that jointly leverages fresh momentum from active clients and delayed gradients from inactive clients to perform robust model updates. DMA is the first to incorporate momentum information into Byzantine-robust aggregation under sparse communication. Embedded within the momentum gradient descent framework, it integrates with robust aggregation rules to withstand diverse malicious gradient attacks under stochastic client sampling. Contribution/Results: We theoretically establish that DMA recovers the convergence rate of full-participation baselines and matches the lower bound we prove for the partial participation setting. Extensive experiments demonstrate stable convergence and strong performance across multiple Byzantine attack scenarios.

📝 Abstract
Federated Learning (FL) allows distributed model training across multiple clients while preserving data privacy, but it remains vulnerable to Byzantine clients that exhibit malicious behavior. While existing Byzantine-robust FL methods provide strong convergence guarantees (e.g., to a stationary point in expectation) under Byzantine attacks, they typically assume full client participation, which is unrealistic due to communication constraints and client availability. Under partial participation, existing methods fail as soon as the sampled clients contain a Byzantine majority, creating a fundamental challenge for sparse communication. First, we introduce delayed momentum aggregation, a novel principle where the server aggregates the most recently received gradients from non-participating clients alongside fresh momentum from active clients. Our optimizer D-Byz-SGDM (Delayed Byzantine-robust SGD with Momentum) implements this delayed momentum aggregation principle for Byzantine-robust FL with partial participation. Then, we establish convergence guarantees that recover previous full-participation results and match the fundamental lower bounds we prove for the partial participation setting. Experiments on deep learning tasks validate our theoretical findings, showing stable and robust training under various Byzantine attacks.
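The delayed momentum aggregation principle described above can be sketched in a few lines. This is a minimal illustration, not the paper's D-Byz-SGDM implementation: the function names, the momentum coefficient `beta`, and the choice of coordinate-wise median as the robust aggregation rule are all assumptions for the sketch (the paper integrates with general robust aggregation rules).

```python
import numpy as np

def coordinate_median(vectors):
    # Illustrative Byzantine-robust aggregation rule: coordinate-wise median.
    # Any robust aggregator could be plugged in here instead.
    return np.median(np.stack(vectors), axis=0)

def dma_round(buffers, fresh, beta=0.9):
    """One server-side round of delayed momentum aggregation (sketch).

    buffers: dict client_id -> last stored momentum vector (all clients)
    fresh:   dict client_id -> newly received gradient (sampled clients only)
    beta:    momentum coefficient (hypothetical default for this sketch)
    """
    for cid, grad in fresh.items():
        # Active (sampled) clients: refresh their momentum with the new gradient.
        buffers[cid] = beta * buffers[cid] + (1.0 - beta) * grad
    # Inactive clients contribute their delayed (stale) momentum unchanged,
    # so the aggregator always sees one vector per client, keeping Byzantine
    # clients a minority of the aggregated set even under sparse sampling.
    return coordinate_median(list(buffers.values()))
```

The key design point the sketch illustrates: aggregation always runs over all clients' buffers, so a Byzantine majority within the *sampled* subset cannot dominate the robust aggregator.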
Problem

Research questions and friction points this paper is trying to address.

Byzantine-robust federated learning under partial participation
Addressing vulnerability when sampled clients contain Byzantine majority
Ensuring convergence guarantees with sparse client communication
Innovation

Methods, ideas, or system contributions that make the work stand out.

Delayed momentum aggregation for Byzantine robustness
Partial participation with delayed gradient integration
Convergence guarantees matching fundamental lower bounds