🤖 AI Summary
Existing federated multi-objective learning methods (e.g., FMOL) yield only task-level Pareto stationary points, failing to ensure inter-client fairness and often neglecting minority clients.
Method: We propose a controllable Pareto-frontier framework for federated multi-objective optimization, introducing preference-cone constraints and cone regularization (the first such incorporation in federated multi-objective learning) to achieve client-level Pareto optimality. Our approach integrates Federated Multi-Gradient Descent Averaging (FMGDA) and its stochastic variant (FSMGDA) to solve cone-constrained Pareto multi-task learning subproblems at the server.
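As a rough illustration of the FMGDA-style server step described above, the sketch below averages each task's gradients across clients and then solves the classic MGDA min-norm problem over the simplex to obtain a common descent direction. All function names, the projected-gradient solver, and the simplex projection are our own illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def min_norm_weights(G, iters=500):
    """MGDA weights: minimize ||w @ G||^2 over the simplex via projected gradient descent.
    G is an (m, d) array whose row i is the averaged gradient of task i."""
    m = G.shape[0]
    w = np.full(m, 1.0 / m)
    A = G @ G.T                                   # (m, m) Gram matrix of task gradients
    L = 2.0 * np.linalg.eigvalsh(A)[-1] + 1e-12   # Lipschitz constant of the gradient 2*A*w
    for _ in range(iters):
        w = project_simplex(w - (2.0 / L) * (A @ w))
    return w

def fmgda_server_step(client_grads, lr=0.1):
    """One hypothetical FMGDA-style server step.
    client_grads: list over clients of (m, d) per-task gradient arrays."""
    G = np.mean(np.stack(client_grads), axis=0)   # FedAvg-style per-task averaging
    w = min_norm_weights(G)                       # MGDA combination weights
    d = w @ G                                     # common (multi-task) descent direction
    return -lr * d, w
```

For example, with conflicting task gradients [2, 0] and [0, 1] from a single client, the min-norm weights balance the two tasks so that the resulting direction has equal inner product with both gradients.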
Contribution/Results: Experiments on non-IID benchmarks show significant improvements in client fairness: the method is expected to reach test accuracy comparable to FedAvg upon convergence, while achieving superior fairness across heterogeneous clients. The framework enables flexible trade-off control via interpretable cone parameters, enhancing both equity and practical deployability in real-world federated settings.
📝 Abstract
Federated learning (FL) is a widely adopted paradigm for privacy-preserving model training, but FedAvg optimises for the majority while under-serving minority clients. Existing work such as federated multi-objective learning (FMOL) attempts to import multi-objective optimisation (MOO) into FL, yet it merely delivers task-wise Pareto-stationary points, leaving client fairness to chance. In this paper, we introduce Conically-Regularised FMOL (CR-FMOL), the first federated MOO framework that enforces client-wise Pareto optimality through a novel preference-cone constraint. After local federated multi-gradient descent averaging (FMGDA) / federated stochastic multi-gradient descent averaging (FSMGDA) steps, each client transmits its aggregated task-loss vector as an implicit preference; the server then solves a cone-constrained Pareto-MTL sub-problem centred at the uniform vector, producing a descent direction that is Pareto-stationary for every client within its cone. Experiments on non-IID benchmarks show that CR-FMOL enhances client fairness; although early-stage performance is slightly inferior to FedAvg, it is expected to reach comparable accuracy given sufficient training rounds.
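The preference-cone idea in the abstract can be illustrated with a toy membership test: a client's normalised loss vector lies inside the cone centred at the uniform vector if its angle to that vector is below some half-angle. The sketch below measures the violation and uses it to upweight out-of-cone (under-served) clients. The half-angle parameter, the penalty form, and all names here are our own illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cone_violation(loss_vec, half_angle_deg=30.0):
    """Angular distance (degrees) of a client's normalised loss vector from the
    uniform preference vector; returns 0 when the client is inside the cone."""
    m = len(loss_vec)
    u = np.full(m, 1.0 / np.sqrt(m))                    # unit vector along the uniform direction
    v = loss_vec / (np.linalg.norm(loss_vec) + 1e-12)
    ang = np.degrees(np.arccos(np.clip(v @ u, -1.0, 1.0)))
    return max(ang - half_angle_deg, 0.0)

def cone_regularised_weights(client_losses, half_angle_deg=30.0, lam=0.1):
    """One scalar weight per client: a uniform base plus a penalty proportional to
    the cone violation, so clients with imbalanced losses get more pull."""
    viol = np.array([cone_violation(l, half_angle_deg) for l in client_losses])
    w = 1.0 + lam * viol
    return w / w.sum()
```

For instance, a client with balanced losses [1, 1] sits exactly on the uniform direction (zero violation), while one with losses [3, 1] sits about 26.6 degrees away and is upweighted under a tight cone.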