FedHiP: Heterogeneity-Invariant Personalized Federated Learning Through Closed-Form Solutions

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the convergence difficulties and performance degradation that personalized federated learning (PFL) suffers under non-independent and identically distributed (non-IID) data, this paper proposes FedHiP, a framework that replaces gradient-based updates with analytical closed-form solutions to achieve heterogeneity-invariant modeling. FedHiP freezes a self-supervised pre-trained feature extractor and pairs it with an analytically solvable classifier, trained through a three-phase paradigm: analytic local training, analytic global aggregation, and analytic local personalization. Theoretically, FedHiP guarantees that each client's personalized model is unaffected by how data are distributed across the other clients, achieving collective generalization and individual adaptation simultaneously. Extensive experiments on multiple benchmark datasets show that FedHiP outperforms state-of-the-art baselines by 5.79%-20.97% in accuracy, improving both performance and stability in highly heterogeneous PFL settings.

📝 Abstract
Lately, Personalized Federated Learning (PFL) has emerged as a prevalent paradigm to deliver personalized models by collaboratively training while simultaneously adapting to each client's local applications. Existing PFL methods typically face a significant challenge due to the ubiquitous data heterogeneity (i.e., non-IID data) across clients, which severely hinders convergence and degrades performance. We identify that the root issue lies in the long-standing reliance on gradient-based updates, which are inherently sensitive to non-IID data. To fundamentally address this issue and bridge the research gap, in this paper, we propose a Heterogeneity-invariant Personalized Federated learning scheme, named FedHiP, through analytical (i.e., closed-form) solutions to avoid gradient-based updates. Specifically, we exploit the trend of self-supervised pre-training, leveraging a foundation model as a frozen backbone for gradient-free feature extraction. Following the feature extractor, we further develop an analytic classifier for gradient-free training. To support both collective generalization and individual personalization, our FedHiP scheme incorporates three phases: analytic local training, analytic global aggregation, and analytic local personalization. The closed-form solutions of our FedHiP scheme enable its ideal property of heterogeneity invariance, meaning that each personalized model remains identical regardless of how non-IID the data are distributed across all other clients. Extensive experiments on benchmark datasets validate the superiority of our FedHiP scheme, outperforming the state-of-the-art baselines by at least 5.79%-20.97% in accuracy.
Problem

Research questions and friction points this paper is trying to address.

Addresses data heterogeneity in federated learning
Replaces gradient updates with closed-form solutions
Ensures consistent performance across non-IID data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Closed-form solutions replace gradient-based updates
Self-supervised pre-training with frozen backbone
Three-phase analytic training for personalization
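The paper does not publish its exact formulation here, but the general recipe behind the bullets above — a closed-form classifier fit on frozen-backbone features, whose per-client statistics aggregate additively — can be sketched as follows. This is a minimal illustration assuming a standard ridge-regression closed form; the function names, the regularizer `lam`, and the toy data are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def local_stats(features, labels_onehot):
    # Per-client sufficient statistics for a closed-form classifier:
    # the Gram matrix X^T X and the cross-correlation X^T Y.
    X, Y = features, labels_onehot
    return X.T @ X, X.T @ Y

def analytic_solve(gram, cross, lam=1e-3):
    # Closed-form ridge solution: W = (X^T X + lam*I)^{-1} X^T Y.
    # No gradients are involved, so the result does not depend on
    # iteration order or on how data is split across clients.
    d = gram.shape[0]
    return np.linalg.solve(gram + lam * np.eye(d), cross)

# Toy example: two clients with features from a frozen extractor.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 8))      # frozen-backbone features
    y = rng.integers(0, 3, size=50)   # local labels (3 classes)
    Y = np.eye(3)[y]                  # one-hot targets
    clients.append(local_stats(X, Y))

# "Analytic global aggregation": sufficient statistics simply add,
# so the aggregate equals training on the pooled data exactly.
G = sum(g for g, _ in clients)
C = sum(c for _, c in clients)
W_global = analytic_solve(G, C)
```

Because X^T X and X^T Y sum exactly across clients, the aggregated solution is identical to fitting on all data at once, regardless of how non-IID the client splits are — the additive statistics are the source of the heterogeneity invariance the paper claims.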