On Performance Guarantees for Federated Learning with Personalized Constraints

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a challenge in federated learning where participating clients face heterogeneous personalized constraints (such as resource or model limitations) that lead to inconsistent feasible sets. To tackle this, the authors propose PC-FedAvg, a novel algorithm employing multi-block local decision variables and a cross-estimation mechanism. Each client updates all blocks locally but penalizes infeasibility only in its own block, which enables personalized optimization without requiring the exchange of constraint information or a global consensus on feasibility. As the first federated learning framework to support personalized constraints under such minimal coordination assumptions, the method integrates penalty functions with distributed optimization. Theoretically, it achieves a communication complexity of 𝒪(ε⁻²) to reach ε-suboptimality and an individual infeasibility complexity of 𝒪(ε⁻¹). Empirical validation on MNIST and CIFAR-10 demonstrates its effectiveness.
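The paper's implementation is not shown here, but the mechanism described above can be sketched in a toy form: each client holds a multi-block copy of the stacked decision vector, runs local gradient steps on its own convex objective plus a quadratic penalty on the distance of its *own* block to its private constraint set, and the server averages the copies FedAvg-style. Everything below (the quadratic objectives, box constraints, penalty weight `rho`, step sizes, and the function names) is an illustrative assumption, not the authors' code.

```python
# Hypothetical sketch of a PC-FedAvg-style scheme (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 3, 4
rho = 50.0    # penalty weight on own-block infeasibility (assumed)
lr = 0.005    # local step size (assumed)

# Client i owns block i of the stacked vector and a private box constraint
# [lo_i, hi_i] that no other client ever sees.
bounds = [(-1.0, 1.0), (0.0, 0.5), (-0.5, 0.5)]
# Toy convex local objectives: f_i(x) = ||x - t_i||^2 / 2.
targets = [rng.normal(size=num_clients * dim) for _ in range(num_clients)]

def local_loss_grad(i, x):
    # Gradient of client i's local objective over the FULL multi-block vector,
    # so blocks owned by other clients are updated as cross-estimates.
    return x - targets[i]

def own_block_penalty_grad(i, x):
    # Gradient of (rho/2) * dist(x_blk, X_i)^2, applied ONLY to client i's
    # own block; cross-estimates of other blocks are left unpenalized.
    g = np.zeros_like(x)
    blk = slice(i * dim, (i + 1) * dim)
    lo, hi = bounds[i]
    g[blk] = rho * (x[blk] - np.clip(x[blk], lo, hi))
    return g

def pc_fedavg_round(x_global, local_steps=5):
    updates = []
    for i in range(num_clients):
        x = x_global.copy()          # client's multi-block local copy
        for _ in range(local_steps):
            x -= lr * (local_loss_grad(i, x) + own_block_penalty_grad(i, x))
        updates.append(x)
    return np.mean(updates, axis=0)  # FedAvg-style server aggregation

x = np.zeros(num_clients * dim)
for _ in range(300):
    x = pc_fedavg_round(x)
```

After the rounds above, each client's own block sits close to its private box (exactly feasible only in the limit, as usual for penalty methods), even though no constraint information was ever exchanged.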

📝 Abstract
Federated learning (FL) has emerged as a communication-efficient algorithmic framework for distributed learning across multiple agents. While standard FL formulations capture unconstrained or globally constrained problems, many practical settings involve heterogeneous resource or model constraints, leading to optimization problems with agent-specific feasible sets. Here, we study a personalized constrained federated optimization problem in which each agent is associated with a convex local objective and a private constraint set. We propose PC-FedAvg, a method in which each agent maintains cross-estimates of the other agents' variables through a multi-block local decision vector. Each agent updates all blocks locally, penalizing infeasibility only in its own block. Moreover, the cross-estimate mechanism enables personalization without requiring consensus or sharing constraint information among agents. We establish communication-complexity rates of $\mathcal{O}(\varepsilon^{-2})$ for suboptimality and $\mathcal{O}(\varepsilon^{-1})$ for agent-wise infeasibility. Preliminary experiments on the MNIST and CIFAR-10 datasets validate our theoretical findings.
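One plausible formalization of the problem the abstract describes (my reading for orientation; the notation is not taken from the paper):

```latex
\min_{x = (x_{(1)},\dots,x_{(m)})}\; \sum_{i=1}^{m} f_i(x)
\quad \text{s.t.} \quad x_{(i)} \in X_i, \qquad i = 1,\dots,m,
```

where $f_i$ is agent $i$'s convex local objective, $X_i$ its private constraint set, and $x_{(i)}$ the block agent $i$ owns. Under this reading, PC-FedAvg would replace each hard constraint with a penalty on $\operatorname{dist}(x_{(i)}, X_i)$ applied only by agent $i$, while the other blocks it holds serve as cross-estimates, so no agent needs to know any $X_j$, $j \neq i$.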
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Personalized Constraints
Heterogeneous Constraints
Distributed Optimization
Agent-specific Feasible Sets
Innovation

Methods, ideas, or system contributions that make the work stand out.

personalized constrained federated learning
PC-FedAvg
cross-estimate mechanism
agent-specific constraints
communication complexity