Federated Multi-Objective Learning with Controlled Pareto Frontiers

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing federated multi-objective learning methods (e.g., FMOL) yield only task-level Pareto-stationary points, failing to ensure inter-client fairness and often neglecting minority clients. Method: We propose a controllable Pareto-frontier framework for federated multi-objective optimization that introduces preference-cone constraints and cone regularization, the first such incorporation in federated multi-objective learning, to achieve client-level Pareto optimality. Our approach integrates Federated Multi-Gradient Descent Averaging (FMGDA) and its stochastic variant (FSMGDA) to solve cone-constrained Pareto multi-task learning subproblems at the server. Contribution/Results: Experiments on non-IID data show improved fairness across heterogeneous clients; although early-stage accuracy trails FedAvg slightly, the method is expected to reach comparable test accuracy given sufficient training rounds. The framework enables flexible trade-off control via interpretable cone parameters, enhancing both equity and practical deployability in real-world federated settings.

📝 Abstract
Federated learning (FL) is a widely adopted paradigm for privacy-preserving model training, but FedAvg optimises for the majority while under-serving minority clients. Existing work such as federated multi-objective learning (FMOL) attempts to import multi-objective optimisation (MOO) into FL; however, it merely delivers task-wise Pareto-stationary points, leaving client fairness to chance. In this paper, we introduce Conically-Regularised FMOL (CR-FMOL), the first federated MOO framework that enforces client-wise Pareto optimality through a novel preference-cone constraint. After local federated multi-gradient descent averaging (FMGDA) / federated stochastic multi-gradient descent averaging (FSMGDA) steps, each client transmits its aggregated task-loss vector as an implicit preference; the server then solves a cone-constrained Pareto-MTL sub-problem centred at the uniform preference vector, producing a descent direction that is Pareto-stationary for every client within its cone. Experiments on non-IID benchmarks show that CR-FMOL enhances client fairness; although its early-stage accuracy is slightly below that of FedAvg, it is expected to reach comparable accuracy given sufficient training rounds.
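The server step described above can be sketched in miniature. The snippet below is an illustrative simplification, not the paper's exact algorithm: it finds MGDA-style min-norm simplex weights over per-client gradients via Frank-Wolfe, and approximates the cone regularisation with a quadratic penalty pulling the weights toward the uniform preference vector. The function name `min_norm_direction` and the penalty form are assumptions for illustration only.

```python
import numpy as np

def min_norm_direction(grads, cone_center=None, cone_weight=0.0, iters=100):
    """Sketch of an FMGDA-style server aggregation step.

    Finds simplex weights alpha minimising
        || sum_i alpha_i * g_i ||^2  +  cone_weight * ||alpha - cone_center||^2,
    where cone_center stands in for the uniform preference vector.
    Solved with Frank-Wolfe over the probability simplex.
    Returns (alpha, descent_direction).
    """
    G = np.stack(grads)                      # (m, d): one gradient row per client
    m = G.shape[0]
    if cone_center is None:
        cone_center = np.full(m, 1.0 / m)    # uniform preference vector
    alpha = np.full(m, 1.0 / m)              # start at the simplex centre
    GG = G @ G.T                             # Gram matrix of client gradients
    for t in range(iters):
        # Gradient of the quadratic objective with respect to alpha.
        grad_alpha = 2.0 * GG @ alpha + 2.0 * cone_weight * (alpha - cone_center)
        # Linear minimisation over the simplex picks a single vertex.
        s = np.zeros(m)
        s[np.argmin(grad_alpha)] = 1.0
        gamma = 2.0 / (t + 2.0)              # standard Frank-Wolfe step size
        alpha = (1.0 - gamma) * alpha + gamma * s
    return alpha, -(alpha @ G)               # weights and common descent direction
```

With `cone_weight = 0` this reduces to a plain min-norm (MGDA) combination; increasing it biases the weights toward the uniform preference, mimicking how the cone parameters trade per-client fairness against raw descent.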
Problem

Research questions and friction points this paper is trying to address.

Ensures client-wise Pareto optimality in federated learning
Improves fairness for minority clients in FL
Balances multi-objective optimization with privacy preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conically-Regularised FMOL enforces client-wise Pareto optimality
Uses preference-cone constraint for fair multi-objective optimization
Combines FMGDA/FSMGDA with cone-constrained Pareto-MTL sub-problem
Jiansheng Rao
Sun Yat-sen University
Jiayi Li
University of Electronic Science and Technology of China
Zhizhi Gong
Shandong University
Soummya Kar
Electrical and Computer Engineering, Carnegie Mellon University
Large Scale Stochastic Systems
Haoxuan Li
Peking University