FedSUM Family: Efficient Federated Learning Methods under Arbitrary Client Participation

📅 2025-12-20
🤖 AI Summary
Existing federated learning methods often rely on restrictive assumptions, such as fixed client participation patterns or homogeneous data distributions, which makes them fragile under highly dynamic, arbitrary client availability. Method: This paper proposes FedSUM, a general family of algorithms that places no assumptions on the data distribution. It models participation variability via two complementary metrics, the maximum delay τ_max and the average delay τ_avg, and introduces three variants: FedSUM-B, FedSUM, and FedSUM-CR. These integrate delay-aware aggregation, decentralized gradient correction, communication compression, and asynchronous updates. Contribution/Results: The paper provides a rigorous convergence analysis showing that FedSUM achieves an O(1/√T) rate under arbitrary client participation and non-IID data, without requiring statistical or system-level homogeneity. Empirical evaluation demonstrates superior robustness and broader applicability in realistic deployment scenarios, significantly improving practical feasibility over prior approaches.
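The summary characterizes participation variability through two delay metrics, τ_max and τ_avg. As a hedged illustration of how such metrics could be computed from a participation log, here is a minimal sketch; the log format, the helper name, and the round-0 convention are assumptions for this example, not the paper's code.

```python
# Hypothetical sketch: computing maximum and average participation
# delay from a per-client participation history. A client's delay at
# round t is the number of rounds since its last participation; every
# client is assumed to participate at round 0.

def delay_metrics(participation, num_rounds):
    """participation: dict mapping client id -> list of round indices
    (0-indexed) in which that client participated.
    Returns (tau_max, tau_avg) over all clients and all rounds."""
    delays = []
    for rounds in participation.values():
        seen = set(rounds)
        last = 0  # assumed initial participation at round 0
        for t in range(num_rounds):
            if t in seen:
                last = t
            delays.append(t - last)
    tau_max = max(delays)
    tau_avg = sum(delays) / len(delays)
    return tau_max, tau_avg

# Example: client 0 joins every round; client 1 only at rounds 0 and 3.
# Client 1's delays are 0, 1, 2, 0, so tau_max = 2, tau_avg = 3/8.
tau_max, tau_avg = delay_metrics({0: [0, 1, 2, 3], 1: [0, 3]}, 4)
```

Under these conventions, τ_max captures the worst-case staleness that the analysis must tolerate, while τ_avg reflects the typical participation gap, which is why bounds stated in terms of τ_avg can be much tighter in practice.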

📝 Abstract
Federated Learning (FL) methods are often designed for specific client participation patterns, limiting their applicability in practical deployments. We introduce the FedSUM family of algorithms, which supports arbitrary client participation without additional assumptions on data heterogeneity. Our framework models participation variability with two delay metrics, the maximum delay $\tau_{\max}$ and the average delay $\tau_{\text{avg}}$. The FedSUM family comprises three variants: FedSUM-B (basic version), FedSUM (standard version), and FedSUM-CR (communication-reduced version). We provide unified convergence guarantees demonstrating the effectiveness of our approach across diverse participation patterns, thereby broadening the applicability of FL in real-world scenarios.
Problem

Research questions and friction points this paper is trying to address.

Efficient federated learning under arbitrary client participation
Modeling participation variability with delay metrics
Providing unified convergence guarantees for diverse patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Supports arbitrary client participation patterns
Models variability with maximum and average delay metrics
Provides unified convergence guarantees across patterns
Runze You
School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen)
Shi Pu
China Telecom Guizhou Branch