🤖 AI Summary
This work addresses four major challenges in federated learning—functional constraints, communication compression, multi-step local updates, and partial client participation—by proposing a unified optimization framework that avoids projections or dual variables. The method employs a switching gradient mechanism combined with bidirectional error feedback to mitigate compression-induced noise and introduces a soft switching strategy to stabilize updates near the boundary of the feasible region. To the best of our knowledge, this is the first approach to jointly handle all four challenges within a single framework. Theoretically, the algorithm achieves a convergence rate of $O(1/\sqrt{T})$ with high probability. Empirical evaluations on Neyman-Pearson classification and constrained Markov decision process tasks demonstrate its effectiveness and practicality.
📝 Abstract
We introduce FedSGM, a unified framework for federated constrained optimization that addresses four major challenges in federated learning (FL): functional constraints, communication bottlenecks, local updates, and partial client participation. Building on the switching gradient method, FedSGM performs projection-free, primal-only updates, avoiding expensive dual-variable tuning and inner solvers. To handle communication limits, FedSGM incorporates bi-directional error feedback, correcting the bias introduced by compression while explicitly accounting for the interaction between compression noise and multi-step local updates. We derive convergence guarantees showing that the averaged iterate achieves the canonical $\boldsymbol{\mathcal{O}}(1/\sqrt{T})$ rate, with additional high-probability bounds that decouple optimization progress from the sampling noise due to partial participation. We further introduce a soft-switching variant of FedSGM that stabilizes updates near the feasibility boundary. To our knowledge, FedSGM is the first framework to unify functional constraints, compression, multiple local updates, and partial client participation, establishing a theoretically grounded foundation for constrained federated learning. Finally, we validate the theoretical guarantees of FedSGM through experiments on Neyman-Pearson classification and constrained Markov decision process (CMDP) tasks.
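To make the core primal-only idea concrete, the following is a minimal, single-machine sketch of a generic switching gradient step, not FedSGM itself (it omits federation, compression, error feedback, and soft switching). All function names, the tolerance `tol`, and the toy problem are illustrative assumptions: when the constraint $g(x) \le 0$ holds within tolerance, the method takes a gradient step on the objective; otherwise it descends on the constraint violation, with no projection or dual variable.

```python
def switching_gradient_step(x, grad_f, grad_g, g, tol, lr):
    """One switching-gradient step (illustrative simplification):
    descend on the objective f when the constraint g(x) <= 0 is
    satisfied within `tol`, otherwise descend on the constraint."""
    if g(x) <= tol:
        return x - lr * grad_f(x)   # objective step (feasible region)
    return x - lr * grad_g(x)       # constraint step (reduce violation)

# Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
grad_f = lambda x: 2.0 * x
g = lambda x: 1.0 - x
grad_g = lambda x: -1.0

x = 3.0
for _ in range(2000):
    x = switching_gradient_step(x, grad_f, grad_g, g, tol=1e-3, lr=1e-2)
# The iterate settles near the constrained optimum x* = 1, oscillating
# slightly around the feasibility boundary -- the behavior the paper's
# soft-switching variant is designed to smooth out.
```

The small oscillation of the final iterate around the boundary illustrates why a hard switch can destabilize updates near feasibility, motivating the soft-switching strategy described above.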