🤖 AI Summary
In federated learning, existing supernet-based methods neglect the heterogeneity of local Pareto fronts across clients and their deviation from the global Pareto front, hindering simultaneous optimization of personalized performance and fairness. This paper proposes HetPFL, the first framework to explicitly model local Pareto front heterogeneity. It introduces two core mechanisms, Preference Sampling Adaptation (PSA) and Preference-aware Hypernet Fusion (PHF), which jointly optimize client-specific adaptation and global generalization. Under milder assumptions than prior work, the authors establish linear convergence of the algorithm. Extensive experiments on four benchmark datasets demonstrate that HetPFL consistently outperforms seven state-of-the-art baselines, achieving significant improvements on Pareto front quality metrics, including Hypervolume (HV) and Inverted Generational Distance (IGD), thereby effectively mitigating the heterogeneity-induced trade-off between performance and fairness in federated multi-objective optimization.
📝 Abstract
Recent methods leverage a hypernet to handle the performance-fairness trade-offs in federated learning. This hypernet maps each client's preference between model performance and fairness to a preference-specific model on the trade-off curve, known as the local Pareto front. However, existing methods typically adopt a uniform preference sampling distribution to train the hypernet across clients, neglecting the inherent heterogeneity of their local Pareto fronts. Meanwhile, from the perspective of generalization, they do not consider the gap between the local Pareto fronts and the global Pareto front on the global dataset. To address these limitations, we propose HetPFL to effectively learn both local and global Pareto fronts. HetPFL comprises Preference Sampling Adaptation (PSA) and Preference-aware Hypernet Fusion (PHF). PSA adaptively determines the optimal preference sampling distribution for each client to accommodate heterogeneous local Pareto fronts, while PHF performs preference-aware fusion of clients' hypernets to ensure the quality of the global Pareto front. We prove that HetPFL converges linearly with respect to the number of rounds, under weaker assumptions than existing methods. Extensive experiments on four datasets show that HetPFL significantly outperforms seven baselines in terms of the quality of the learned local and global Pareto fronts.
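To make the pipeline concrete, here is a minimal, heavily simplified sketch of the three ingredients the abstract describes: a hypernet mapping a preference vector to model parameters, client-specific preference sampling, and a weighted fusion of client hypernets. The Dirichlet sampler, the linear hypernet, and the fusion coefficients are all illustrative assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_preference(alpha, rng):
    """Draw a 2-d preference vector (performance weight, fairness weight).
    A client-specific Dirichlet concentration `alpha` is one hypothetical way
    to bias sampling toward the region of that client's local Pareto front."""
    return rng.dirichlet(alpha)

def hypernet(pref, W):
    """Toy linear hypernet: maps a preference vector to model parameters."""
    return W @ pref

def fuse_hypernets(client_W, coeffs):
    """Sketch of hypernet fusion: a normalized weighted average of client
    hypernet parameters (`coeffs` are hypothetical fusion coefficients)."""
    coeffs = np.asarray(coeffs, dtype=float)
    coeffs = coeffs / coeffs.sum()
    return sum(c * W for c, W in zip(coeffs, client_W))

# Two clients with different (hypothetical) sampling concentrations,
# reflecting heterogeneous local Pareto fronts.
alphas = [np.array([2.0, 1.0]), np.array([1.0, 3.0])]
client_W = [rng.standard_normal((4, 2)) for _ in alphas]

pref = sample_preference(alphas[0], rng)        # client 0's sampled preference
theta = hypernet(pref, client_W[0])             # preference-conditioned model
W_global = fuse_hypernets(client_W, [0.6, 0.4]) # fused (global) hypernet
```

In the actual method, the sampling distribution and fusion weights would be learned rather than fixed; this sketch only shows where each mechanism sits in the loop.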